Showing posts with label minor questions.

Tuesday, June 24, 2014

Liberty or Equality?

The Founding Fathers knew that you can’t have both
by Myron Magnet
With the fulminating on the left about inequality—“Fighting inequality is the mission of our times,” as New York’s new mayor, Bill de Blasio, summed up the theme of his postelection powwow with President Barack Obama—it’s worth pausing to admire anew the very different, and very realistic, modesty underlying Thomas Jefferson’s deathless declaration that all men are created equal. We are equal, he went on to explain, in having the same God-given rights that no one can legitimately take away from us. But Jefferson well knew that one of those rights—to pursue our own happiness in our own way—would yield wildly different outcomes for individuals. Even this most radical of the Founding Fathers knew that the equality of rights on which American independence rests would necessarily lead to inequality of condition. Indeed, he believed that something like an aristocracy would arise—springing from talent and virtue, he ardently hoped, not from inherited wealth or status.
In the greatest of the Federalist Papers, Number 10, James Madison explicitly pointed out the connection between liberty and inequality, and he explained why you can’t have the first without the second. Men formed governments, Madison believed (as did all the Founding Fathers), to safeguard rights that come from nature, not from government—rights to life, to liberty, and to the acquisition and ownership of property. Before we joined forces in society and chose an official cloaked with the authority to wield our collective power to restrain or punish violators of our natural rights, those rights were at constant risk of being trampled by someone stronger than we. Over time, though, those officials’ successors grew autocratic, and their governments overturned the very rights they were supposed to protect, creating a world as arbitrary as the inequality of the state of nature, in which the strongest took whatever he wanted, until someone still stronger came along.
In response, Americans—understanding that “kings are the servants, not the proprietors of the people,” as Jefferson snarled—fired their king and created a democratic republic. Under its safeguard of our equal right to liberty, each of us, Madison saw, will employ his different talents, drive, and energy, to follow his own individual dream of happiness, with a wide variety of successes and failures. Most notably, Federalist 10 pointed out, “From the protection of different and unequal faculties of acquiring property, the possession of different degrees and kinds of property immediately results.” That inequality would be a sign of the new nation’s success, not failure. It would mean that people were really free.
The democratic republic that the American Revolution brought into being, however, contained the seeds of a new threat to natural rights, Madison fretted. Yes, the new nation will operate by majority rule, but even democratic majorities can’t legitimately overturn the fundamental rights that it is government’s purpose to safeguard, no matter how overwhelming the vote. To do so would be just as grievous a tyranny as the despotism of any sultan in his divan. It would be, in Madison’s famous phrase, a “tyranny of the majority.” As Continental Congressman Richard Henry Lee put it, an “elective despotism” is no less a despotism, for all its democratic trappings.
How would such a tyranny occur? Almost certainly, Madison thought, it would center on “the apportionment of taxes.” Levying taxes “is an act which seems to require the most exact impartiality, yet there is perhaps no legislative act in which greater opportunity and temptation are given to a predominant party, to trample on the rules of justice. Every shilling with which they overburden the inferior number is a shilling saved to their own pockets.” How easy for the unpropertied many to expropriate the wealth of the propertied few by slow erosion, decreeing that they should pay more than a proportionate share of the public expenses. How seductive for the multitudinous farmers to levy taxes on the much smaller number of merchants or bankers or manufacturers, while exempting themselves. How tempting for the majority who have debts to transfer money secretly and silently away from the minority who are their creditors by debasing the currency, so that the real value of what they owe steadily shrinks, as Madison well remembered from the ruinous inflation of the Revolutionary War years. And a year before the Constitutional Convention, Madison recalled, the debt-swamped farmers of western Massachusetts had cooked up, in Shays’s Rebellion, still more “wicked and improper” schemes for expropriating the property of others: trying to close the courts at gunpoint to prevent foreclosures on their defaulted mortgages and even demanding the equal division of property.
So as chief architect of the Constitution, designed to give the federal government sufficient power to protect citizens’ basic rights—above all, the power to tax, whose lack under the Articles of Confederation had made the Revolutionary War longer and grimmer than it would have been if Congress had had sufficient means to buy arms and pay soldiers—Madison proceeded with his heart in his mouth, fearful that such augmented power made a tyranny of the majority all the more possible. His main challenge at the Convention, as he saw it, was to guard against precisely that outcome. So while four of the 18 specific powers that Article I, Section 8 of the Constitution gives Congress concern the levying of taxes and the borrowing and coining of money to “provide for the common Defence and general Welfare of the United States”—taxes that “shall be uniform throughout the United States”—eight of the rest deal with spending only for military and naval purposes, while the “general welfare” powers extend only to building post offices and post roads; establishing federal courts; protecting intellectual property by copyrights and patents; and regulating bankruptcies, naturalization, and interstate and foreign commerce. Not content merely to limit and define explicitly the federal government’s power, Madison made sure that the Constitution divided it up among several branches, limiting the power that any single individual or official body could wield and putting each jealously on guard against any other’s attempt to seize a disproportionate share. Moreover, all these officials (except the judges) were elected representatives of the people: they were the agents through whom Americans, who had no rulers, governed themselves.
But why was the liberty that Madison so mightily struggled to protect so precious? Americans knew how grievous its opposite was, both from the enslaved blacks they saw all around them and from their knowledge that their own forebears had fled British persecution of non-Anglican Protestants or European persecution of all Protestants, denied even freedom to express their own beliefs. They knew what man could inflict on man. But of all the Founders, Treasury Secretary Alexander Hamilton gave the most positive, eloquent, and inspiring answer to that question—though, in fact, he thought that he was answering a question about economics. Illegitimate, orphaned, and poor, Hamilton dreamed big dreams as a teenage clerk, sitting on his countinghouse stool on a small West Indian island whose only business was sugar and slaves. Ambition burned within him, along with a keen but unformed sense of his own talent. He knew he could be something other than he was. But what or how, he didn’t foresee. Maybe a war would come, he daydreamed, and give him his chance.

Read more at: http://www.city-journal.org/2014/24_2_liberty-or-equality.html

Friday, January 17, 2014

The West's Catastrophic Defeat in the Middle East

The West's double failure, compounded by its inability to build a common strategy, is a sign of a now 'post-American' region.
By Dominique Moisi
Bashar al-Assad is still in power in Damascus and al-Qaeda's black flag was recently flying above Fallujah and Ramadi in Iraq. Not only has the process of fragmentation in Syria now spilled over to Iraq, but these two realities also share a common cause that can be summarized in a simple phrase: the failure of the West.
The capture, even though temporary, of the cities of Fallujah and Ramadi by Sunni militias claiming links to al-Qaeda, is a strong and even humiliating symbol of the failure of the policies the United States carried out in Iraq. A little more than a decade after the overthrow of Saddam Hussein's regime - and after hundreds of thousands of deaths on the Iraqi side and more than 5,000 on the American side - we can only lament a sad conclusion: All that for this!
In Syria, the same admission of failure is emerging. Assad and his loyal allies - Russia and Iran - have actually emerged stronger from their confrontation with the West. Civilian massacres, including with chemical weapons, did not change anything. The regime is holding tight, despite losing control of important parts of its territory, thanks to its allies' support and, most importantly, the weakness of its opponents and those who support them.
In reality, from the Middle East to Africa, the entire idea of outside intervention is being challenged in a widely post-American region. How and when can one intervene appropriately? At which point does not intervening become, to quote the French diplomat Talleyrand following the assassination of the Duke of Enghien in 1804, "worse than a crime, a mistake?"
When is intervention necessary? "Humanitarian emergency" is a very elastic concept. Is the fate of Syrian civilians less tragic than that of Libyans? Why intervene in Somalia in 1992 and not in Sudan? The decision to intervene reveals, in part, selective emotions that can also correspond to certain sensitivities or, in a more mundane way, to certain interests of the moment.
Intervention becomes more probable when it follows the success of some other action; or, on the contrary, a decision to abstain that led to massacre and remorse. The tragedy of the African Great Lakes in 1994 - not to mention the Srebrenica massacre in Bosnia in 1995 - certainly contributed to the West's decision to intervene in Kosovo in 1999. In reality, the intervention of a given country at a given time is typically driven by multiple factors: the existence of an interventionist culture, a sense of urgency, a minimum of empathy towards the country or the cause justifying the intervention, and, of course, the existence of resources that are considered, rightly or wrongly, sufficient and well-adapted.
A French example
But more than "when," it is a question of "how" - the two being often inextricably linked. Intervening alone can have many benefits, including the rapidity of execution, which often leads to efficient operations. The French army was not unhappy to end up alone in Mali. On the other hand, although it can slow down the operations schedule, forming a coalition gives the intervention more legitimacy, and helps share the costs and risks between the various operators.
It is likely that France, which after the Mali operation has engaged in the Central African Republic in a much more uncertain conflict, would now prefer having some support - for reasons related to costs and resources as well as geopolitics. No one wants to share success, but no one wants to end up alone in a potential deadlock either.
America's failure - in Iraq and in Syria - should be considered the West's failure as a whole, even though Washington's share of responsibility is unquestionably the largest.
Failure is generally the result of the interaction between three main factors that are almost always the same: arrogance, ignorance and indifference. Arrogance leads to overestimating one's capacities and to underestimating the enemy's capacity for resistance. It is all too easy to win the war but lose the peace.
"Democracy in Baghdad will lead to peace in Jerusalem," a slogan of the American neo-conservatives, took a disastrous turn in Iraq.
Arrogance is almost always the result of ignorance. What do we know about the cultures and histories of the populations we want to save from chaos and dictators? Yesterday's colonial officers, who drew lines in the sand to create the borders of the new empires and states, turned their nose up at the local religious and tribal complexities. Today, the situation may be worse still. Sheer ignorance prevails.
Finally, there is the sin of indifference. Of course, ISIL (the Islamic State in Iraq and the Levant) is worrying Washington, thus leading to closer ties between the U.S. and Iran regarding Iraq. But the starting point was the U.S.'s refusal, in Syria, to face up to its responsibilities.
The result is clear: a double defeat, strategic and ethical, for the West. Washington has brought a resounding diplomatic victory to Moscow and has allowed Bashar al-Assad to stay in power.
Read more at:

Friday, January 10, 2014

Leading from Behind

Third Time a Charm?
In his reluctance to brandish America’s world leadership credentials at every turn, President Obama is tapping into an interesting if frustrating strain of American history—and it just might help America learn the wisdom of great power prudence and humility.
By OWEN HARRIES AND TOM SWITZER
A Washington adage holds that someone commits a “gaffe” when he inadvertently tells the truth. This seemed to be what a U.S. policymaker did two decades ago when he mused about the limits to U.S. power in the post-Cold War era. On May 25, 1993, just four months into the Clinton Administration, a certain senior government official—the new Undersecretary of State for Political Affairs and a former president of the Council on Foreign Relations—spoke freely to about fifty journalists on condition that they refer to him only as a “senior State Department official.” Gaffe or no gaffe, Peter Tarnoff’s frank remarks at the Overseas Writers Club luncheon set off serious political turbulence in the foreign policy establishment.
Tarnoff’s message was that, with the Cold War over, America should no longer be counted on to take the lead in regional disputes unless a direct threat to its national interest inhered in the circumstances. To avoid over-reaching, he warned, U.S. policymakers should define the country’s interests with clarity and without a residue of excessive sentiment, concentrating its resources on matters vital to its own well-being. That meant Washington would “define the extent of its commitment and make a commitment commensurate with those realities. This may on occasion fall short of what some Americans would like and others would hope for”, he recognized. The U.S. government would, if necessary, act unilaterally where its own strategic and economic interests were directly threatened, but it would otherwise pursue a foreign policy at the same time less interventionist and more multilateral. 
President Clinton’s deferral to European demands on the Bosnian crisis, Tarnoff added, marked a new era in which Washington would not automatically lead in international crises. “We simply don’t have the leverage, we don’t have the influence, we don’t have the inclination to use military force, and we certainly don’t have the money to bring to bear the kind of pressure that will produce positive results anytime soon.” 
At first glance, there was nothing new here. As far back as the Nixon Doctrine, U.S. officials had spoken of more voluble burden-sharing, of asking allies to do more on their own behalf, and of a variable-speed American foreign policy activism that could be fine-tuned to circumstances. And then, within a year of the Soviet Union's collapse, Bill Clinton won a presidential election in part because he promised to "focus like a laser" on domestic issues. Neither during Nixon's tenure nor in 1993 did anyone use the phrase "to lead from behind", but this new locution is consonant with the basic thinking of those earlier formulations. In some ways, "leading from behind" is the third coming of a seasoned and generally sensible idea.
Nor was Tarnoff saying anything outside the implicit consensus of presumed foreign policy "wise men" at the time. Many dedicated Cold Warriors and leading foreign affairs experts, Republicans and Democrats alike, had been arguing for the previous three years that, having just won a great victory, it was time for America to embrace a more restricted view of the nation's interests and commitments. "With a return to 'normal' times", Jeane Kirkpatrick argued in The National Interest in 1990, "we can again become a normal nation—and take care of pressing problems of education, family, industry and technology. . . . It is time to give up the dubious benefits of superpower status and become again an . . . open American republic." Nathan Glazer proposed that it was "time to withdraw to something closer to the modest role that the Founding Fathers intended." William Hyland, editor of Foreign Affairs at the time, wrote, "What is definitely required is a psychological turn inwards." And according even to Henry Kissinger, the definition of the U.S. national interest in the emerging era of multipolarity would be different from the two-power world of the Cold War—"more discriminating in its purpose, less cataclysmic in its strategy and, above all, more regional in its design."
Notwithstanding all this, and no doubt to his own surprise and chagrin, Tarnoff's remarks started a firestorm of fear and indignation almost the moment reports of his background briefing hit the press. As one Australian newspaper correspondent observed at the time, "the reaction to his words could scarcely have been more dramatic if he had stripped naked and break-danced around the room."
Talking heads denounced not just Tarnoff but the new President for whom he spoke as “isolationist” and “declinist”; some beheld a “creeping Jimmy Carterism” with an Arkansas accent. Foreign embassies went into overdrive as diplomats relayed the news back home. The White House quickly attempted to distance itself from what its press secretary dismissed as “Brand X.” The Secretary of State, Warren Christopher, stayed up all night making personal phone calls to journalists and appearing on late-night television to reassure the world that America’s global leadership role was undiminished. In a hastily rewritten speech, Christopher pointedly used some variant of the word “lead” 23 times. Meanwhile, rumors swirled that the official (only later identified as Tarnoff) was about to lose his job. Yet for all his allegedly neo-isolationist sins, the hapless official remained employed. No apology or explanation was forthcoming.
Read more at:

How Can I Possibly Be Free?

Without baggage, there would be no content
By Raymond Tallis
This essay is an attempt to persuade you of something that in practice you cannot really doubt: your belief that you have free will. It will try to reassure you that it is not naïve to feel that you are responsible, and indeed morally responsible, for your actions. And it will provide you with arguments that will help you answer those increasing numbers of people who say that our free will is an illusion, or that belief in it is an adaptive delusion implanted by evolution.
The case presented will not be a knock-down proof — indeed, it outlines an understanding of free will that is rather elusive. It is of course much easier to construct simple theoretical proofs purporting to show that we are not free than it is to see how, in practice, we really are. For this reason, the argument here will take you on something of a journey.
That journey will provide reasons for resisting the claim that a deterministic view of the material universe is incompatible with free will. Much of the apparent power of deterministic arguments comes from their focusing on isolated actions, or even components of actions, that have been excised from their context in the world of the self, so that they are more easily caught in the net of material causation.
There is another challenge arising from a deeper argument, which seems to hold even if the universe is not deterministic — namely, that unless we are self-caused, we cannot be held responsible for what we do. To answer this challenge, we must find the key to freedom in first-person being — in the very “I” for whom freedom is an issue, the “I” who is capable of orchestrating the sophisticated intentions, choices, and actions required to, for instance, publish an essay denying its own freedom. The demand for complete self-causation places impossible requirements upon someone before he can count as free — requirements, what is more, that would actually empty freedom of its content and hence of any meaning.
Central to the defense of freedom against the challenges of determinism and the requirement for total self-determination will be to see how it is that we are, rather, self-developing — as when we consciously train the mechanisms of our own bodies to carry out our wishes even without conscious thought — so that we are able to make natural events pushed by natural causes the result of human actions led by human reasons.
We must start by characterizing the freedom that we are concerned with. First, if I am truly free, I am the origin of those events I deem to be my actions. Consequently, I am accountable for them: I have ownership of them; I own up to them. Second, they are expressive of me, in the sense that they cannot be separated from that which I feel myself to be. In this regard, they are connected with my motives, feelings, and expressed aims. My actions can be made sense of biographically.
But it is not enough that my actions originate with, and are expressive of, me. I would not be free if all my willing just brought about what was already inevitable. A truly free act is also one that deflects the course of events. So I am free if, as a result of many actions that are themselves free to deflect the course of events, and of which I am the origin, I have an important hand in shaping my life. This is what is meant by “being free.”
Freedom, Determinism, and Moral Responsibility
There are many versions of the deterministic argument against free will, but the most straightforward one is as follows. Since every event has a cause, actions, which are simply a subcategory of events, also have causes. Furthermore, the causal ancestry of actions is not confined to what we would regard as ourselves, because we ourselves are the products of causes that are in turn the products of other causes ad infinitum. The passage from cause to effect is determined by unalterable laws of nature. For a determinist, even intentions are simply another means by which the laws of nature operate through us. In short, we are not the origins of our actions and we do not deflect the course of events, but are merely conduits through which the processes of nature operate, little parishes of a boundless causal web arising from the Big Bang and perhaps terminating in the Big Crunch.
Most philosophers, then, think that physical determinism is incompatible with free will. The incompatibilists fall into two camps: the libertarians who save freedom by denying determinism, and the skeptics who affirm determinism and so deny freedom. As we will see, however, there is reason to believe that determinism and free will are compatible, since determinism applies only to the material world understood in material terms.
The traditional deterministic arguments against free will have recently been dressed up in some very fancy clothes. Evolutionary theory, genetics, and neuroscience have been invoked in combination to create what we might dub “biodeterminism.” According to biodeterministic thinking, our behavior originates in the evolutionary imperative of survival: it is the unchosen result of the fact that we, and in particular our brains, are so designed as to maximize the chances of replicating our genome. Primarily through their phenotypical expression in our brains, it is our genes, not we, that call the shots.
The attacks on free will that arise from neuroscience go beyond evolutionary psychology, and any adequate account of them would require far more than the space of this essay. But there is one particular set of observations that has captured the deterministic imagination and deserves special scrutiny: those made by the late University of California, San Francisco neurophysiologist Benjamin Libet on the relationship between intention and action. For a long time, it has been known that the mental preparation to act is correlated with a particular brain wave — the so-called “readiness potential.” In Libet’s experiment, the action studied was very simple. Subjects were asked to flex their wrists when they felt inclined to do so. They were asked also to note the time on a clock when they experienced the conscious intention to flex their wrists. Libet found that the readiness potential, as timed by the neurophysiologist, actually occurred before the conscious decision, as timed by the subject. There was a consistent difference of over a third of a second.
The interpretation of these findings has been a matter of intense controversy, much of it over the methodology. Some have argued that, since the brain activity associated with certain voluntary actions precedes the conscious intention to perform the actions, we therefore do not truly initiate them. At best, we can only inhibit ongoing activity: we have "free won't" rather than "free will." But many others have denied even this margin of negative freedom and have seen Libet's experiments as confirming what we feared: that our brains are calling the shots. We are merely the site of those events we call "actions."
Another attack on the notion of free will, from Galen Strawson, a professor of philosophy at the University of Reading, goes beyond the arguments from determinism and purports to prove the inherent impossibility of freedom and moral responsibility so long as we are not self-caused. Strawson’s basic argument, articulated in numerous articles and books, can be understood as a syllogism: First, in order to be truly morally responsible for one’s actions, one would have to be causa sui, the cause of oneself. Second, nothing can be causa sui, the cause of itself. Therefore no one can be truly morally responsible. Performing acts for which one is morally responsible requires, Strawson argues, that we should be self-determining — but this is impossible because the notion of true self-determination runs into an infinite regress.
Strawson’s argument is flawed, as we shall see, because its premises are flawed. But it is nevertheless useful because it clarifies the underlying force of deterministic arguments: that whatever I am has been caused by events, processes, and laws that I am not — and that in order to be free, I have to escape having been caused. Strawson’s argument is the reduction to absurdity of deterministic assumptions, for in the end such arguments require that in order to be free, I have to escape being determined, and in order to escape being determined, I have to have brought myself into being — but in order to have brought myself into being, of course, I have to be God. If I am to be responsible for anything that I do, I have to be responsible for everything that I am, including my very existence. Since I cannot pre-exist my own existence so as to bring my existence about, this is a requirement that cannot be met.
This argument from self-determination will be dealt with by looking a little harder at the question of whether or not a self is causa sui, and, closely related, at whether a self’s actions can be seen as expressing itself. A self is certainly not the cause of itself overall and ultimately — but it is the cause of itself in a way that is sufficient to underpin free will.
The Origins of Actions in the Contents of Consciousness
The case for determinism will prevail over the case for freedom so long as we look for freedom in a world devoid of the first-person understanding — and so we will have to reacquaint ourselves with the perspective that comes most naturally to us. Recall that, if we are to be correct in our intuition that we are free, the issue of whether or not we are the origin of our actions is central. Seen as pieces of the material world, we appear to be stitched into a boundless causal net extending from the beginning of time through eternity. How on earth can we then be points of origin? We seem to be a sensory input linked to motor output, with nothing much different in between. So how on earth can the actor truly initiate anything? How can he say that the act in a very important sense begins with him, that he owns it and is accountable for it — that “The buck starts here”?
Read more at:


Thursday, January 9, 2014

Facing the Future

Mitigating a Liquidity Crunch 

by Nicole Foss
Despite the media talking up optimism and recovery, people are not seeing the supposed good news playing out in their own lives. As we have discussed here many times before, the squeeze continues on Main Street, while QE has generated asset bubbles at the top of the financial food chain. Complacency reigns, but this is the endgame. Increasingly delusional collective optimism, based on illusory wealth for the few, has been the driving force for 2013, even as the smart money has been selling everything not nailed down for most of the year – cheerfully handing the empty bag to a public that demands it. It’s been a five year long party, where, demonstrably, no lessons were learned from the excesses preceding the previous peak, and the consequences that followed from it.
Now, as a result of throwing caution to the wind again (mostly with other people’s money of course), we face another set of consequences, but this time the hangover will be worse. Timely warnings are rarely credible, as they contradict the prevailing wisdom of the time, but it is exactly at this time that warnings are most needed – when we are collectively irrationally exuberant on a grand scale. We need to understand the situation we are facing, in order to see why this period of global excess will resolve itself as a global credit implosion, what this means for ourselves and our societies, and what we can hope to do about it, both in terms of preparing in advance and mitigating the impact once we are confronted with a new, sobering, reality.
We are facing an acute liquidity crunch, not the warning shot across the bow that was the financial crisis of 2008/2009, but a full-blown implosion of the house of cards that is the global credit pyramid. Not that it’s likely to disappear all at once, but over the next few years, credit will undergo a relentless contraction, punctuated by periods of both rapid collapse and sharp counter-trend rallies, in a period of exceptionally high volatility. The primary impact will stem from the collapse of the money supply, the vast majority of which is credit – a mountain of IOUs constituting the virtual wealth of the world.
This has happened before, albeit not on this scale. Since humanity reached civilizational scale we have lived through cycles of expansion and contraction. We tend to associate these with the rise and fall of empire, but they typically have a monetary component and often involve a credit boom. Bust follows boom as the credit ponzi scheme collapses. Mark Twain commented on one such episode in 1873:
"Beautiful credit! The foundation of modern society. Who shall say that this is not the golden age of mutual trust, of unlimited reliance upon human promises? That is a peculiar condition of society which enables a whole nation to instantly recognize point and meaning in the familiar newspaper anecdote, which puts into the mouth of the speculator in lands and mines this remark: 'I wasn't worth a cent two years ago, and now I owe two million dollars.'"
Few recognized at the time that the ensuing financial panic of 1873, at the culmination of a period of speculative excess, was going to lead to a long and grinding depression. The signs were there, as they are today, but few connected the dots in advance and understood what was about to unfold and why. Few ever do at comparable points in time.
Unfortunately, humans are not good at remembering, let alone learning from, and applying, the lessons of history. The information is available for those who care to look – far more information than people had access to at previous junctures – but not in the mainstream media. The media’s role is to reflect and amplify the mood of the time, spinning events in accordance with it in a self-reinforcing feedback loop. Real information – the kind we need if we are to face a future more challenging than anything most of us have ever experienced – is found elsewhere, with independent voices contradicting received wisdom when it most needs to be contradicted. That has been our task at The Automatic Earth for the last six years. We cover the events of the day, placing them in the context of the bigger picture we have developed since January 2008.
We aim to make complexity comprehensible, so that people can identify the most immediate and most significant threats and prepare themselves to face them. At the present time, the threat people most need to appreciate is a liquidity crunch, hence this is a major focus of our most recent Video Download release – Facing the Future. It is well underway in some parts of the world already and many more countries will find themselves affected in the not too distant future.
Read more at:

Wednesday, January 8, 2014

Machiavelli for Moms

The people desire neither to be commanded nor oppressed by the great, and the great desire to command and oppress the people
By RITA KOGANZON
Niccolò Machiavelli, the 16th-century Florentine political advisor and philosopher, has been credited with founding the modern "realist" school of international relations, the modern conception of the state, and even modernity itself. What he is most famous for, however, is founding a new approach to politics that emphasizes deception and effectiveness over virtue and morality. In his best-known work, The Prince, he advises the politically ambitious to eschew genuine virtue for the mere appearance of it and to accept that the aims of a true leader justify his means, whatever they may be. "For a man who wants to make a profession of good in all regards must come to ruin among so many who are not good," he writes. "Hence it is necessary to a prince, if he wants to maintain himself, to learn to be able not to be good."
This Machiavellian willingness to be "altogether wicked" is difficult to square with some of what we in the modern world he helped create have made of Machiavelli. Perhaps most peculiar, and most telling, of all is the steady stream of self-help and advice manuals for everyday living that claim to have been inspired by him. There is What Would Machiavelli Do? The Ends Justify the Meanness, one of a number of Machiavellian guides for business success. There is also The Suit: A Machiavellian Approach to Men's Style; The Princessa: Machiavelli for Women; A Child's Machiavelli; a slew of (largely self-published) Machiavellian tracts on picking up women; and, most recently, Machiavelli for Moms.
These books find their model primarily in The Prince, a work claiming to dispense advice for princes — not office pushovers, frumpy dressers, playground wimps, and dateless sad sacks. Machiavelli does venture some advice for mothers in the Discourses on Livy, in which he applauds Caterina Sforza's parenting, though it is hard to imagine that today's self-help gurus would share his admiration for her. After conspirators trying to take the city of Forlì kill Sforza's husband and capture her and her young children, she promises to betray the fortress to the conspirators if they release her, and she leaves her children with them as collateral. Machiavelli writes, "As soon as she was inside, she reproved them from the walls for the death of her husband....And to show that she did not care for her children, she showed them her genital parts, saying that she still had the mode for making more of them." That is Machiavelli for moms, though this story is not mentioned in the recent parenting guide. How then has Machiavelli, proponent of every kind of deceit, been domesticated, becoming a modern American sartorial consultant, business guru, and family therapist?
The process has been gradual, spanning several centuries — it began, in fact, with the first great American Machiavellian, Benjamin Franklin. Machiavelli's value for European geopolitical strategy was recognized almost immediately, but it was Franklin who realized that, although Machiavelli had largely been understood as an advisor to the rulers of great states, he was in fact a philosopher for losers. He wrote books about power and the men who had succeeded or failed to seize it, but men who are busy seizing and holding power rarely have time to read books. We are most receptive to Machiavelli when we are young and lowly, or when we have been brought low by some setback, and, in both cases, Machiavelli instructs the weak. But Franklin recognized, too, that Machiavelli speaks to the ambitious among the weak, those who are not satisfied to remain low, and this made him useful to Franklin in particular and to Americans in general.
Machiavelli still has much to teach the lowly and ambitious, but some of the more recent attempts to apply his insights to American life today have missed the point. In order to benefit from the useful lessons Machiavelli can teach us about succeeding in America, we need to identify exactly what those lessons really are. To do so, we would do well to re-examine Benjamin Franklin's approach to employing Machiavellian methods to get ahead in America.
FRANKLIN'S CIVIC MACHIAVELLIANISM
The American situation as Benjamin Franklin saw it in the 18th century was particularly ripe for Machiavellian losers. The period was marked by relative social equality, an observation Tocqueville would echo a half-century later. As Franklin wrote in an advertisement to potential immigrants in 1782, "The Truth is, that though there are in that Country few People so miserable as the Poor of Europe, there are also very few that in Europe would be called rich: it is rather a general happy Mediocrity that prevails." This happy mediocrity prevailed in part because of the availability of land, but also because members of the English aristocracy were disinclined to leave their estates and move to the American colonies, leaving the new world to be populated by lower gentry, small farmers, and tradesmen. The relative absence of aristocratic hierarchy in turn meant that ambitious but poor men like Franklin might rise by their own wits. Here, however, another central feature of the American situation stood in their way: Colonial Protestants looked down on worldly ambition, deeming it sinful to grasp after wealth and position, though to actually possess either or both was a mark of God's grace.
Read more at:


Tuesday, January 7, 2014

The Right side of history

Why liberals are conflicted over patriotism and western values
by Daniel Hannan
Why is patriotism, in English-speaking societies, mainly associated with conservatives? After all, measured against almost any other civilizational model, the Anglosphere has been overwhelmingly progressive.
It is true that the individualism of English-speaking societies has an anti-socialist bias: There has always been a measure of resistance to taxation, to state power, and, indeed, to collectivism of any kind. But look at the other side of the balance: equality before the law, regardless of sex or race, secularism, toleration for minorities, absence of censorship, social mobility, and universal schooling. In how many other places are these things taken for granted?
So why is the celebration of national identity a largely Rightist pursuit in English-speaking societies? It won’t do to say that patriotism is, by its nature, a Right-of-center attitude. In the European tradition, if anything, the reverse was the case. Continental nationalists—those who believed that the borders of their states should correlate to ethnic or linguistic frontiers—were, more often than not, radicals. The 1848 revolutions in Europe were broadly Leftist in inspiration. When the risings were put down, and the old monarchical-clerical order reestablished, the revolutionaries overwhelmingly fled to London, the one city that they knew would give them sanctuary. With the exception of Karl Marx, who never forgave the country that had sheltered him for failing to hold the revolution that he forecast, they admired Britain for its openness, tolerance, and freedom.
So what stops English-speaking Leftists from doing the same? Why, when they recall their history, do they focus, not on the extensions of the franchise or the war against slavery or the defeat of Nazism, but on the wicked imperialism of, first, the British and, later, the Americans?
The answer lies neither in politics nor in history, but in psychology. The more we learn about how the brain works, the more we discover that people’s political opinions tend to be a rationalization of their instincts. We subconsciously pick the data that sustain our prejudices and block out those that don’t. We can generally spot this tendency in other people; we almost never acknowledge it in ourselves.
A neat illustration of the phenomenon is the debate over global warming. At first glance, it seems odd that climate change should divide commentators along Left–Right lines. Science, after all, depends on data, not on our attitudes to taxation or defense or the family. The trouble is that we all have assumptions, scientists as much as anyone else. Our ancestors learned, on the savannahs of Pleistocene Africa, to make sense of their surroundings by finding patterns, and this tendency is encoded deep in our DNA. It explains the phenomenon of cognitive dissonance. When presented with a new discovery, we automatically try to press it into our existing belief system; if it doesn’t fit, we question the discovery before the belief system. Sometimes this habit leads us into error. But without it, we should hardly survive at all. As Edmund Burke argued, life would become impossible if we tried to think through every new situation from first principles, disregarding both our own experience and the accumulated wisdom of our people—if, in other words, we shed all prejudice.
If you begin with the beliefs that wealthy countries became wealthy by exploiting poor ones, that state action does more good than harm, and that we could all afford to pay a bit more tax, you are likelier than not to accept a thesis that seems to demand government intervention, supranational technocracy, and global wealth redistribution.
If, on the other hand, you begin from the propositions that individuals know better than governments, that collectivism was a demonstrable failure, and that bureaucracies will always seek to expand their powers, you are likelier than not to believe that global warming is just the left’s latest excuse for centralizing power.
Each side, convinced of its own bona fides, suspects the motives of the other, which is what makes the debate so vinegary. Proponents of both points of view are quite sure that they are dealing in proven facts, and that their critics must therefore be either knaves or fools.
The two sides don’t simply disagree about the interpretation of data; they disagree about the data. Never mind how to respond to changes in temperature; there isn’t even agreement on the extent to which the planet is heating. Though we all like to think we are dealing with hard, pure, demonstrable statistics, we are much likelier to be fitting the statistics around our preferred Weltanschauung.
Central to the worldview of most people who self-identify as Left-of-center is an honorable and high-minded impulse, namely support for the underdog. This impulse is by no means confined to Leftists, but Leftists exaggerate it, to the exclusion of rival impulses.
Jonathan Haidt is a psychologist, a man who began as a partisan liberal, and who set out to explain why political discourse was so bitter. In his seminal 2012 book, The Righteous Mind, he explains the way people of Left and Right fit their perceptions around their instinctive starting points. As he puts it, our elephant (our intuition) leans toward a particular conclusion; and its rider (our conscious reasoning) then scampers around seeking to justify that lean with what look like objective facts.
Support for the underdog is balanced in conservatives by other tendencies, such as respect for sanctity. In Leftists, it is not. Once you grasp this difference, all the apparent inconsistencies and contradictions of the Leftist outlook make sense. It explains why liberals think that immigration and multiculturalism are a good thing in Western democracies, but a bad thing in, say, the Amazon rain forest. It explains how people can simultaneously demand equality between the sexes and quotas for women. It explains why Israel is seen as right when fighting the British but wrong when fighting the Palestinians.

 Read the rest at:

Monday, January 6, 2014

100 Years After The Outbreak Of World War I, Could The World Commit Suicide Again?

Gallup's latest poll finds that 72 percent of Americans see Washington as the largest threat to their freedom today
By Steven Hayward,
Humans are a sentimental species, making much of anniversaries and reunions.  We’re probably not far from extending our sentimental capacities to observing anniversaries of anniversaries, like aging baby boomers who argue over which Woodstock reunion was the best because their memory of the original is fading into the mists.
Already we are seeing the first trickle of articles and books that will reach a flood by summer noting the 100th anniversary of the outbreak of World War I.  These retrospectives come in two types.  The first is the never-ending historical argument over causation of the catastrophe whose derangements of the European equilibrium have not completely subsided today.  The various diagnostic camps formed into fixed positions decades ago: "miscalculation" (Barbara Tuchman), economic and demographic asymmetries (Marxists and other varieties of material determinists), or fear of rising rival power that spurs preemption (the various descendants of Thucydides).
The second retrospective mode comes from those who wish to dilate the famous remark that Mark Twain probably never said:
“History doesn’t repeat itself—but it rhymes.” 
Given that World War I was a surprise, a war that was thought to be impossible right up to the moment it wasn't, could we wander into the same kind of conflagration today?  There are lots of plausible candidates for the locus of a world-splitting cataclysm, ranging from a rising China to that crucible of conflict since the beginning of recorded history—instability and ambition for conquest (or revenge) in south central Asia.  Certainly the rise of radical Islam in recent decades has taken the place of revolutionary Communist, socialist, and fascist ideologies that did so much to disrupt the 20th century in the aftermath of World War I.
Perhaps this retrospection on causal catastrophism will represent the much wished-for inversion of Santayana's injunction to recall history so as not to repeat it.  As was the case before World War I, there are many secular reasons for optimism beyond the fine-toothed combing and re-telling of the political mistakes of 1914.  China is probably just too large and unwieldy to become a genuine threat of global war.  North Korea would be rolled up quickly if it ever acted seriously on its belligerent rhetoric.  India and Pakistan—or Iran and everyone in its neighborhood—may come to open war at some point (or the Syrian and Libyan civil wars may spread), and bad as that prospect is, it would not likely spill over beyond the region or draw in the remaining world powers.  European disarmament—the great windmill of the 1930s—is coming to pass by degrees, the result of welfare-statism more than authentic Kantian pacifism.  Before long most of Europe won't be able to fight each other even if they wanted to.  (Better watch out for those old Russians, though.  They didn't get the memo from Brussels.)  The terror threat will be with us for a while, but can't by itself plunge the world into a total war.
Beyond the geopolitical hypotheticals, Steven Pinker and others have noted that violence in the world, as measured in open conflicts and deaths, has been declining significantly for the last several decades.  The warfare that leading nations engage in today, such as the U.S. campaigns in Iraq and Afghanistan, resembles the pre-1914 world of professional armies and remote battle plains, with conflicts that did not involve whole populations like World Wars I & II.  Perhaps the promise of modernity, which was at the root of Progressive optimism a century ago, has belatedly come to fruition?
One aspect of the Great War story of a century ago seems to be missing from the growing inventory: the changes to the idea of Progress itself, and the rise of the administrative state in its wake.  The conventional wisdom for many years in U.S. historiography is that the coming of World War I entailed the end of the Progressive Era and its momentum for reform.  It is certainly true that World War I put paid to the easygoing faith in inevitable progress that prevailed everywhere in the advanced industrial nations before the war, a faith that gave way to the existential pessimism of the interwar period, which in turn brought us fully to today's "postmodernism" that openly disdains progress in just about every form.
But it is quite wrong to suppose that World War I ended Progressivism.  To the contrary, it accelerated the rise of the modern administrative state that is the defining feature of what goes by the label of "Progressivism" today.  The British historian A.J.P. Taylor made this point inadvertently at the time of the 50th anniversary of World War I: "Until August 1914 a sensible, law-abiding Englishman could pass through life and hardly notice the existence of the state, beyond the post office and the policeman.  He could live where he liked and as he liked.  He had no official number or identity card."
The same was true of the United States, where the brand new income tax barely reached 10 percent on a small handful of the highest incomes, and regulation was confined to a few well-defined agencies overseeing a small number of national industries and markets.  But as Taylor correctly notes, “All this was changed by the impact of the Great War.”  Far from ending “Progressivism,” World War I provided the launchpad for a century of wholesale expansion of administrative government in the United States, starting with jacking up the income tax to over 90 percent and then, in World War II, extending its reach to the lowest rungs of the middle class.  Coupled with the crisis of the Great Depression, “Progressive” government hasn’t looked back.  In other words, if you’d described to Progressive reformers in 1910 the course of American government over the next century, they wouldn’t have regarded World War I as the end of their dreams, but rather as the turning point.  Not for nothing did Randolph Bourne say that “War is the health of the State.”

 Read the rest at:

Wednesday, December 18, 2013

Boy Trouble

Family breakdown disproportionately harms young males
by KAY S. HYMOWITZ
When I started following the research on child well-being about two decades ago, the focus was almost always girls' problems—their low self-esteem, lax ambitions, eating disorders, and, most alarming, high rates of teen pregnancy. Now, though, with teen births down more than 50 percent from their 1991 peak and girls dominating classrooms and graduation ceremonies, boys and men are increasingly the ones under examination. Their high school grades and college attendance rates have remained stalled for decades. Among poor and working-class boys, the chances of climbing out of the low-end labor market—and of becoming reliable husbands and fathers—are looking worse and worse.
Economists have scratched their heads. "The greatest, most astonishing fact that I am aware of in social science right now is that women have been able to hear the labor market screaming out 'You need more education' and have been able to respond to that, and men have not," MIT's Michael Greenstone told the New York Times. If boys were as rational as their sisters, he implied, they would be staying in school, getting degrees, and going on to buff their Florsheim shoes on weekdays at 7:30 AM. Instead, the rational sex, the proto-homo economicus, is shrugging off school and resigning itself to a life of shelf stocking. Why would that be?
This spring, another MIT economist, David Autor, and coauthor Melanie Wasserman, proposed an answer. The reason for boys’ dismal school performance, they argued, was the growing number of fatherless homes. Boys and young men weren’t behaving rationally, the theory suggested, because their family background left them without the necessary attitudes and skills to adapt to changing social and economic conditions. The paper generated a brief buzz but then vanished. That’s too bad, for the claim that family breakdown has had an especially harsh impact on boys, and therefore men, has considerable psychological and biological research behind it. Anyone interested in the plight of poor and working-class men—and, more broadly, mobility and the American dream—should keep it front and center in public debate.
In fact, signs that the nuclear-family meltdown of the past half-century has been particularly toxic to boys’ well-being are not new. By the 1970s and eighties, family researchers following the children of the divorce revolution noticed that, while both girls and boys showed distress when their parents split up, they had different ways of showing it. Girls tended to “internalize” their unhappiness: they became depressed and anxious, and many cut themselves, or got into drugs or alcohol. Boys, on the other hand, “externalized” or “acted out”: they became more impulsive, aggressive, and “antisocial.” Both reactions were worrisome, but boys’ behavior had the disadvantage of annoying and even frightening classmates, teachers, and neighbors. Boys from broken homes were more likely than their peers to get suspended and arrested. Girls’ unhappiness also seemed to ease within a year or two after their parents’ divorce; boys’ didn’t.
......

Read more at: