I must grudgingly acknowledge that American English is just about tolerable (sometimes)


As it is the Fourth of July, an American-themed post seems in order. And more specifically, one about American English.

Pride and prejudice

American English has been the soundtrack to my life ever since I moved to Korea. If you learn English in this country, you learn to say ‘soccer’ and to spell ‘colour’ without the ‘u’. In addition, the bulk of Anglophone expats in Korea come from the US and Canada.

Like a lot of Brits (at least of the educated, RP-speaking variety) my instinctive reaction to American English might be charitably described as ‘combative’. I belligerently continue to say things like ‘my trousers got so muddy I had to change them, so I got the lift up to my flat’. [Though I must confess that I all too often catch myself slipping on that final one.]

I also delight in making sure Americans are fully aware of any logical deficiencies I can identify in their dialect. ‘You call the liquid you put in your car gas? And you describe a sport in which players generally hold the ball in their hands as football? No wonder you guys elected Trump!’*

My more sensible angel

I feel that this kind of thing is justifiable as friendly banter, but is otherwise daft. In Accidence Will Happen – essentially a grammar book for people who care more about communicating clearly than about catching other people out – the Times journalist Oliver Kamm writes that:

Prince Charles … declared to a British Council audience in 1995 that the American way of speaking was ‘very corrupting’. How so? Well, ‘people tend to invent all sorts of nouns and verbs and make words that shouldn’t be’. The Prince urged his audience: ‘We must act now to ensure that English – and that, to my way of thinking, means English English – maintains its position as the world language well into the next century.’

This is a very common view and is historically perverse. It identifies English with a particular country, and indeed with a particular region of a particular country, and assumes other influences are debased imitators against which barriers need to be arrayed.

But the way that the English language has developed in North America is not corrupting at all. Both American English and the dialect of English that Prince Charles speaks are descendants of a common ancestor. Neither of these dialects is the type of English spoken by Shakespeare and his contemporaries. In some respects, as far as we know, American dialects are closer to that ancestor. The r sound in the name Shakespeare has been lost in the dialect of South-East England, but retained in American speech and many other accents and dialects of English (such as Scottish enunciation).

Of Reds and Greys

There are more subtle arguments than the Prince’s for finding American English threatening. For example, the journalist Matthew Engel wrote an essay for the BBC in which he lamented:

…the sloppy loss of our own distinctive phraseology through sheer idleness, lack of self-awareness and our attitude of cultural cringe. We encourage the diversity offered by Welsh and Gaelic – even Cornish is making a comeback. But we are letting British English wither.

I see the logic of this point and it is part of why I am keen to retain the distinctively British character of my own language.

However, if we were concerned about the diversity of English, then protecting standard British English would be a perverse priority. Engel seems to think it is like the Red Squirrel, which is being driven to extinction by Grey Squirrels, an invasive species from North America. However, the reality is that the dialect I speak is a predator, not prey: it is steadily modifying or even absorbing the UK’s regional dialects.

And as Kamm notes:

It is particularly odd when pedants complain about the assimilation of Americanisms into the language, as Standard English has borrowed extensively from other languages and dialects over centuries.

And as Engel himself has to note, that includes American English:

The Americans imported English wholesale, forged it to meet their own needs, then exported their own words back across the Atlantic to be incorporated in the way we speak over here. Those seemingly innocuous words caused fury at the time.

The poet Coleridge denounced “talented” as a barbarous word in 1832, though a few years later it was being used by William Gladstone. A letter-writer to the Times, in 1857, described “reliable” as vile.

Engel suggests that the present situation is different because the pace of absorption is so much faster now. However, he provides only anecdotal evidence for this assertion, and he proffers counter-examples too:

When it comes to new technology, we often go our separate ways. They have cellphones – we have mobiles. We go to cash points or cash machines – they use ATMs. We have still never linked hands on motoring terminology – petrol, the boot, the bonnet, known in the US as gas, the trunk, the hood.

What’s right with American English

If you will permit me an uncharacteristic piece of generosity, I would actually commend certain aspects of American English as superior. The most obvious and important example is spelling. American spellings arose from a concerted effort to make the system more intuitive. It does that by placing more emphasis on correspondence with spoken English, and less on resembling French and Latin. That seems like an altogether sensible prioritisation.

Of the smaller examples, the one that stands out to me is saying ‘first floor’ to refer to the ground floor, rather than the floor above it. Given that generally the first floor one encounters on entering a building is the ground floor, describing another floor as such is counter-intuitive. Indeed, I was a teenager before I realised the UK didn’t have the American system! It just seemed so much more sensible!

The minimum wage in theory and practice

 

Economics is a subject that combines the abstract and the empirical.* It both builds mathematical models and uses statistics. And it needs to know how to balance the two. The danger of one approach predominating is illustrated by the issue of the minimum wage.

There is pretty strong economic reasoning for saying that unemployment will rise if you raise the cost of labour. That leads many free marketeers to oppose the minimum wage.

The minimum wage makes some workers, those with the lowest skills, more expensive than they otherwise would be. When things get more expensive, people look for ways to avoid that increased expense. In the case of the minimum wage, employers try to substitute machines and technology for workers, or use higher-skilled workers who are already paid above the minimum instead of lower-skilled workers. It doesn’t require any extreme assumptions about the labor market being in equilibrium or that the demand curve is derived from the marginal product of labor. It’s just that there is some demand for labor and that it slopes downward. All that means is that when workers get more expensive, you try to avoid paying those costs. This is not a neoclassical or neoliberal or Chicago view of the world. It’s everyone’s view of the world.
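To make that textbook logic concrete, here is a minimal sketch, assuming a constant-elasticity labour demand curve; the elasticity of -0.3 and the 10% wage rise are invented for illustration and come from neither the extract above nor any particular study.

```python
# Toy illustration of the textbook argument: if the demand for labour
# slopes downward, a binding wage floor reduces employment.
# All numbers here are invented purely for illustration.

def employment_change(elasticity: float, wage_rise_pct: float) -> float:
    """Approximate % change in employment for a given % rise in the wage,
    assuming a constant-elasticity demand curve."""
    return elasticity * wage_rise_pct

# Suppose demand for low-wage labour has an elasticity of -0.3
# and the minimum wage rises by 10%.
print(f"Predicted employment change: {employment_change(-0.3, 10.0):.1f}%")  # -3.0%
```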

I understand the force of this logic – I subscribed to it myself for a while. However, as the author of the above extract has to admit, the empirical evidence does not bear this out. As Laura D’Andrea Tyson, a former chair of the Council of Economic Advisers, writes at Economix:

a raft of meticulous economic research, including work by David Card and Alan B. Krueger, who served as chief economist at the Labor Department in the Clinton administration and more recently as the chairman of the Council of Economic Advisers in the Obama administration, has decisively demolished the old shibboleths. The weight of the evidence consistently finds no significant effects on employment when the minimum wage increases in reasonable increments.

For a good overview, look to a paper by Arindrajit Dube of the University of Massachusetts, Amherst; T. William Lester of the University of North Carolina, Chapel Hill; and Michael Reich of the University of California, Berkeley. Using two decades of data and side-by-side comparisons of bordering counties in the United States, they find that higher minimum wages raise the earnings of low-wage workers and have negligible effects on employment levels. According to their estimates, an increase of 10 percent in the minimum wage would have a statistically negligible effect on employment in industries and occupations employing minimum-wage workers.

In 1996, the prevailing view among economists was that an increase in the minimum wage would reduce employment. But opinions have changed in response to the evidence. In a recent survey of a panel of leading economists, only a third expected that an increase in the minimum wage to $9 an hour would make it “noticeably harder for low-skilled workers to find employment,” and nearly half agreed that the economic benefits of raising the minimum wage and indexing it to inflation would outweigh the economic costs.
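For readers curious how the border-county approach mentioned above works, here is a deliberately over-simplified sketch with invented figures; the numbers are mine, and the real Dube, Lester and Reich study uses two decades of county panel data with far more careful controls.

```python
# Over-simplified sketch of a border-county comparison: each pair is a
# county that raised its minimum wage matched with a neighbouring county
# across a state border that did not. Figures (invented) are % changes in
# low-wage employment over the same period.

pairs = [
    {"raised": -0.4, "neighbour": -0.5},
    {"raised":  0.2, "neighbour":  0.1},
    {"raised": -0.1, "neighbour": -0.2},
]

# The estimate of interest is the average gap between each county that
# raised its wage and its unchanged neighbour.
gaps = [p["raised"] - p["neighbour"] for p in pairs]
estimate = sum(gaps) / len(gaps)
print(f"Average employment gap: {estimate:.2f} percentage points")
```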

So, following the approach of John Maynard Keynes – “when the facts change, I change my mind” – I’ve come round to the idea of a minimum wage and have even campaigned for a living wage. However, I don’t imagine that we can push the minimum wage up indefinitely. For example, part of the recent difficulties of Puerto Rico seems to have come from combining a Caribbean labour market with the US federal minimum wage.

 

* Yes, you’re right – that doesn’t really distinguish it from other subjects.

John Kerry’s next big headache – the US’s travel ban on the next PM of India?


Narendra Modi

Earlier this week, I blogged about the challenge that a new Hindu nationalist government might pose for the West. Potentially the trickiest of these issues is how the US deals with its travel ban on Narendra Modi – the man who is very likely to be the next prime minister of India – imposed for his role in anti-Muslim pogroms in his home state of Gujarat. Foreign Policy’s The Cable blog explains the dilemma:

“If he becomes prime minister, the U.S. will have to find a way to do business with him,” Tanvi Madan, director of the Brookings Institution’s India Project, told The Cable. “The question is whether or not to do something before next year’s election.”

Both options present risks.

If the United States continues to restrict Modi’s travel and freeze him out of diplomatic discussions at the ambassadorial level, it risks alienating an important partner on everything from trade to security to finance to diaspora issues. By contrast, the European Union, Britain, and Germany have all engaged in ambassador-level discussions with Modi. This status quo also risks insulting hundreds of millions of Indians.

“The travel restriction has created resentment amongst the leadership and some amongst the rank-and-file BJP party workers,” said Milan Vaishnav of the Carnegie Endowment for International Peace. “We’re talking about a three-time incumbent chief minister. He hasn’t been found guilty by any court of law, he’s not under indictment for any crime, and there hasn’t been a smoking gun in their view. So how can you, the United States, prevent this guy from coming to your country?”

But not everyone agrees with the BJP’s interpretation of history. There is currently a trench war playing out on Capitol Hill over Modi’s legacy. Anti-Modi groups, such as the Indian American Muslim Council (IAMC), promise to name and shame anyone supportive of Modi, whom they consider a genocidal Hindu supremacist. IAMC has hired the lobbying firm Fidelis to advance its goals on the Hill, including a resolution critical of violations of minority groups in India that was introduced by Rep. Joe Pitts (R-PA).

The Cable has learned that anti-Modi groups are also planning a legal challenge against the chief minister should he ever travel to the United States. “Some of us are working with the next of kin of victims of the Gujarat 2002 violence living in the United States,” Shaik Ubaid, founder of the Coalition Against Genocide, said. “We will be ready to file criminal and tort cases against Modi should he try to come to the United States.”

Pro-Modi groups, such as the Hindu American Foundation, have accused these anti-Modi groups of slandering the reputation of India and its leaders. “It is certainly disappointing to see Indian-Americans hiring an American lobbying firm to advocate for a deeply flawed and insulting American resolution critical of India,” said the Hindu American Foundation‘s Jay Kansara.

The pro-Modi camp has courted high-profile Republican lawmakers such as Rep. Cathy McMorris Rodgers and Rep. Aaron Schock, but to varying degrees of success. After heaping effusive praise on Modi following a 2013 visit to Gujarat, McMorris Rodgers denied association with him in November after anti-genocide groups complained about an invite for Modi to talk to Republican leaders on Capitol Hill via video link. “They don’t have a relationship,” a congressional aide told The Cable.

Technically, it would not be difficult for Foggy Bottom to resolve Modi’s travel status. Although the department originally determined that Modi was ineligible for travel under the Immigration and Nationality Act, it’s not bound by that earlier decision.

“Our long-standing policy with regard to the chief minister is that he is welcome to apply for a visa and await a review like any other applicant,” Harf told The Cable. “That review will be grounded in U.S. law.”

However, Modi is unlikely to reapply for a visa between now and the 2014 elections.

Alternatively, the United States could implement a half-measure, such as issuing a statement that clarifies that America would never bar the leader of India from entering the country. But even that poses problems.

“Friends at the State Department say they’re hyperaware of this issue but constrained because of the elections,” said Vaishnav. “They don’t want to be seen as endorsing a candidate or meddling in Indian politics. The State Department doesn’t want to be on the front page of Indian newspapers.”

Madan agrees. “Any sign of foreign interference would be taken extremely negatively in India,” she said. “The Congress party would latch onto that, saying the U.S. has endorsed Modi.”

By and large, Foggy Bottom is boxed in on the issue. “There is little doubt that this poses a dilemma for the State Department,” said Madan. “Modi is a major figure in Indian politics. It’s impossible to imagine that they haven’t thought through the various scenarios, but it’s unclear what they’ll do.”

The unfortunately realistic economics of the Hunger Games

hungergames

One of the criticisms of the Hunger Games is that it’s not plausible that, in a futuristic sci-fi world with extremely advanced technology, much of the population would still be on the edge of starvation. Matthew Yglesias argues that the extreme inequality between the Capitol and the Districts is not only plausible but has actually existed, and that Collins has identified how it would come about. He illustrates this by reference to the work of two economic historians:

Acemoglu and Robinson’s general theory can be grasped through the lens of the “reversal of fortune” they observe in the Western Hemisphere and originally described in an academic paper co-authored with Simon Johnson. If you plot per capita income in the Americas today, you see a clear pattern with the United States and Canada ahead, the southern cone around Chile and Argentina in second place, and the middle portion much poorer. It turns out that if you turn the clock back about 500 years, the pattern was reversed. The places that are rich today were poor then, while those that are poor today were generally rich in the past. This, they argue, is no coincidence. When Spanish conquistadors showed up in the prosperous areas of Latin America, they stole all the gold they could get their hands on and then set about putting the native populations to work. They set up “extractive institutions” whose purpose was to wring as many natural resources (silver, gold, food) from the land as possible while keeping power in the hands of a narrow elite. These institutions discourage savings and investment, since everyone knows any wealth can and will be arbitrarily expropriated. And while the injustice of it all led to periodic revolutions, the typical pattern was for the new boss to simply seize control of the extractive institutions and run them for his own benefit.

[…]

District 12 is a quintessential extractive economy. It’s oriented around a coal mine, the kind of facility where unskilled labor can be highly productive in light of the value of the underlying commodity. In a free society, market competition for labor and union organizing would drive wages up. But instead the Capitol imposes a single purchaser of mine labor and offers subsistence wages. Emigration to other districts in search of better opportunities is banned, as is exploitation of the apparently bountiful resources of the surrounding forest. With the mass of Seam workers unable to earn a decent wage, even relatively privileged townsfolk have modest living standards. If mineworkers earned more money, the Mellark family bakery would have more customers and more incentive to invest in expanded operations. A growing service economy would grow up around the mine. But the extractive institutions keep the entire District in a state of poverty, despite the availability of advanced technology in the Capitol.

[…]

But Collins is right in line with the most depressing conclusion offered by Acemoglu and Robinson, namely that once extractive institutions are established they’re hard to get rid of. Africa’s modern states, they note, were created by European colonialists who set out to create extractive institutions to exploit the local population. The injustice of the situation led eventually to African mass resistance and the overthrow of colonial rule. But in almost every case, the new elite simply started running the same extractive institutions for their own benefit. The real battle turned out to have been over who ran the machinery of extraction, not its existence. And this, precisely, is the moral of Collins’ trilogy. [Spoiler alert: Ignore rest of this story if you haven’t finished the trilogy.] To defeat the Capitol’s authoritarian power requires the construction of a tightly regimented, extremely disciplined society in District 13. That District’s leaders are able to mobilize mass discontent with the Capitol into a rebellion, but this leads not to the destruction of the system but its decapitation. Despite the sincere best efforts of ordinary people to better their circumstances, the deep logic of extractive institutions is difficult to overcome, whether in contemporary Nigeria or in Panem.

Talkin’ ’Bout Their Generation: the Insufferable Sixties

Baby Boomers should stop subjecting the rest of us to their solipsism


Two American anniversaries: the one you’ve heard about and the one that matters

As you have likely noticed, yesterday was the 50th anniversary of the assassination of John F. Kennedy. He is still a subject of massive fascination and a sizable industry. The most obvious aspect of this is the continuing obsession with the imagined conspiracy that killed him, but it is far from being the only one. For example, an article in the New Republic accused the upmarket Vanity Fair of having “an absurd preoccupation with the Kennedys” and noted that:

Since Michelle Obama became first lady, Jackie has merited more attention—20 mentions to Michelle’s 19. Since August 2008, when John McCain picked Sarah Palin as his running mate, searching for “Kennedy” in VF in Nexis yields twice as many results as searching for “Palin.” The “politics” section of VanityFair.com has a header for “The Kennedys”—an entire digital section devoted to political figures who are, save a few, no longer alive. Surely readers looking for political coverage would rather find, oh, say, a tab marked, “Presidential election 2012”?

When the American public are polled about who they think the greatest president is, JFK is invariably near the top, and a recent poll named him as the most popular president of the past sixty years. This stands in contrast to surveys of scholars, who tend to give JFK a rather more middling rating.

The focus on the 50th anniversary of the JFK assassination is all the more striking given that Tuesday was the 150th anniversary of a milestone in the presidency of the man who usually tops those scholars’ lists. On 19th November 1863, Abraham Lincoln delivered the Gettysburg Address. It was probably the most important speech in American history and followed the decisive battle of the Civil War. Yet somehow it was overshadowed by the death of a mediocre president who happened to be rather handsome. I would suggest that this disparity can be explained by the fact that JFK was a figure from the Sixties, a period with an outsize role in our collective imagination.

Sixties Mania

Our obsession with the Sixties manifests itself in many ways. There is, for example, the adulation that attaches itself to Mad Men or to the latest 1,000 pages of Robert Caro’s oil-tanker-length biography of LBJ. We could also observe that while Vietnam remains a cultural touchstone, the war in Korea is largely forgotten. Or we could point to the massive followings still enjoyed by the Rolling Stones, the Who or Bob Dylan.

It seems to me that what has happened is that Baby Boomers – now firmly ensconced at the top of the media and cultural institutions – have been treating the events that were especially interesting and formative for them as being so for the world in general. And in the process they have managed to convince many of the rest of us that the Sixties were particularly significant.

Was the Sixties all that?

Stepping back and looking at the 20th century as a whole, the Sixties don’t seem that seminal.

If we look at the arena of politics and international relations, then important things did happen in the boomers’ formative years: the revolution in Cuba, the 1968 uprisings and Vietnam. However, these seem less significant than, say, the Depression, World War II, the beginning of the Cold War, the consolidation of postwar European democracy, the fall of the Berlin Wall and the rise of China.

In terms of economics, the Sixties look like a time of stasis between the emergence of Keynesian social democracy and its displacement by the deregulation and deindustrialisation that began with the OPEC crisis in 1973.

And while it might have been a time of cultural tumult, it was intellectually arid. As Tony Judt observes in his masterful history of postwar Europe, the Sixties were almost wholly devoid of great thinkers or movements. The decade produced no Durkheim, no Einstein, no Wittgenstein.

Even the changes in sexual politics were not as dramatic as often imagined. Philip Larkin said that “Sexual intercourse began in nineteen sixty-three … between the end of the ‘Chatterley’ ban and the Beatles’ first LP.” However, an alternative narrative would be that the boomers rebelled against the sexual mores of their parents by embracing those of their grandparents.

What WAS unique about the Sixties was the boomers’ sense of their own importance. To quote Judt again:

Moments of great cultural significance are often appreciated only in retrospect. The Sixties were different: the transcendent importance contemporaries attached to their own times was one of the special features of the age. A significant part of the Sixties was spent, in the words of The Who, ‘talking about My Generation’.

I can’t be alone in thinking that it’s time for the generations that came before and after the boomers to tell them to see beyond their own formative years: if they can’t see that Gettysburg 1863 was more significant than Dallas 1963, they really need to get some perspective.

The wisest conservative

Oakeshott

I don’t want Conservative Week to pass without acknowledging that conservatism is not a tradition devoid of merit. In particular, I want to commend its most significant philosopher, Michael Oakeshott: a thinker with as much to say to the left as to the right.

In his essay On Being Conservative, Oakeshott explained that he believed conservatism to be grounded in a preference for the known over the unknown. He argued that the key traits that followed from this were:

First, innovation entails certain loss and possible gain, therefore, the onus of proof, to show that the proposed change may be expected to be on the whole beneficial, rests with the would-be innovator. Secondly, he believes that the more closely an innovation resembles growth (that is, the more clearly it is intimated in and not merely imposed upon the situation) the less likely it is to result in a preponderance of loss. Thirdly, he thinks that the innovation which is a response to some specific defect, one designed to redress some specific disequilibrium, is more desirable than one which springs from a notion of a generally improved condition of human circumstances, and is far more desirable than one generated by a vision of perfection. Consequently, he prefers small and limited innovations to large and indefinite. Fourthly, he favors a slow rather than a rapid pace, and pauses to observe current consequences and make appropriate adjustments. And lastly, he believes the occasion to be important; and, other things being equal, he considers the most favourable occasion for innovation to be when the projected change is most likely to be limited to what is intended and least likely to be corrupted by undesired and unmanageable consequences.

So why does Oakeshott appeal to me? Part of the reason is doubtless that his vision of government is arguably more liberal than conservative:

The spring of this [conservative disposition]…in respect of governing and the instruments of government….is to be found in the acceptance of the current condition of human circumstances as I have described it: the propensity to make our own choices and find happiness in doing so, the variety of enterprises each pursued with passion, the diversity of beliefs each held with the conviction of its exclusive truth; the inventiveness, the changefulness and the absence of any large design; the excess, the over-activity and the informal compromise. And the office of government is not to impose other beliefs and activities upon its subjects, not to tutor or to educate them, not to make them better or happier in another way, not to direct them, to galvanize them into action, to lead them or to coordinate their activities so that no occasion of conflict shall occur; the office of government is merely to rule. This is a specific and limited activity, easily corrupted when it is combined with any other, and, in the circumstances, indispensable. The image of the ruler is the umpire whose business is to administer the rules of the game, or the chairman who governs the debate according to known rules but does not himself participate in it.

And it is true that he might really be better described as an anti-utopian liberal in the mould of Isaiah Berlin. However, if he articulated liberal ideas, he did so within the Conservative tradition. He identified himself as a conservative and is much more of a touchstone in Conservative circles than in Liberal ones.

The real strength of his conservatism is that it is rooted in the present, not the past. He is warning about the drawbacks of dramatic change, not extolling the benefits of a lost past. Therefore, his writings can cut against the right as much as the left.

In fact, it is principally as a critic of conservatives that I have come across Oakeshott. I first (unwittingly) imbibed his ideas through their reflection in Francis Fukuyama’s critique of the invasion of Iraq as a hopelessly utopian project, designed to produce change more rapid than any society could absorb. Then I started reading Andrew Sullivan’s Daily Dish blog. Sullivan wrote his PhD on Oakeshott and frequently used him to lash the Republican party. For Sullivan, the American Right is not about preserving but about destroying the New Deal and America’s tradition of tolerance.

While British conservatism is more Oakeshottian than its American counterpart, an Oakeshottian critique of it is still possible. The bungling mess that was the Health and Social Care Act was a leap into the unknown that appeared to be less about ‘redressing some specific disequilibrium’ than about a mania for change. Michael Gove often seems to be trying to drag education out of the present and into the past. And tearing Britain out of the European Union would be a disruptive change whose proponents are nowhere near meeting ‘the onus of proof, to show that the proposed change may be expected to be on the whole beneficial.’

I find Oakeshott interesting because his brand of conservatism is not reactionary or counter-revolutionary. It allows space for gradual reform and can therefore be more pragmatic. Oakeshott is at pains to point out that he is advocating a ‘disposition’ rather than an ideology. Therefore, it provides a resource for critiquing any ideology, including those of conservative parties.

Batman and Guns


On July 20th 2012, a young man walked into a midnight showing of The Dark Knight Rises at a cinema in Aurora, Colorado. He released tear gas and opened fire on the audience. Twelve people died and 70 were injured.

This grisly confluence of Batman and firearms led the New Yorker journalist Jill Lepore to explore the history of the character and the weapon. Batman is generally associated with a rather un-American antipathy to guns. For example, in The Dark Knight Rises, even during a hairy rooftop fight with Bane’s henchmen, Batman still stops Catwoman from shooting her way to safety.

But Lepore notes that:

It hasn’t always been this way. Americans used to hold a different set of beliefs about guns. So did Batman, who started out with a gun—until he got rid of it. The nineteen-thirties, the golden age of comic-book superheroes, was a time of landmark gun legislation. In 1934, the National Rifle Association supported the National Firearms Act—the first federal gun-control legislation—and, four years later, the 1938 Federal Firearms Act. A great many gun-safety measures on the books today date to those two pieces of legislation, which together mandated licensing for handgun dealers, introduced waiting periods for handgun buyers, required permits for anyone wishing to carry a concealed weapon, and effectively prohibited the sale of the only gun banned in the United States today: the automatic weapon (or “machine gun”).

Then, in the 1940s, following a wave of concern about the social impact of comics, that changed:

Maybe it was a simple demurral to the critics, but the disarming of the Dark Knight reads like a concern about the commonweal, a deferral to an accepted and important idea about the division between civilian and military life. Superheroes weren’t soldiers or policemen. They were private citizens. They shouldn’t carry concealed weapons. Villains carried guns. The Joker, introduced in the spring of 1940, carried a gun, and sometimes two. (Batman thwarts him with a bulletproof vest; once Joker realizes this, he aims for his head.) After Ellsworth told Kane to lay off the guns, Kane wrote a two-page piece—issued in November, 1939—explaining Batman’s origins: “Legend: The Bat Man and How He Came to Be.” When Bruce Wayne was a boy, his parents had been killed before his eyes, shot to death.

Batman, then, came out of a time when the private ownership of firearms was considered a proper matter for government regulation.

Lepore suggests that it is not Batman that has changed but America. Starting in the 1960s, conservatives began asserting their current absolutist reading of the Second Amendment. They took over the NRA and began using its mass membership to propagate their views. Batman belongs to an older and in many ways saner America.

Thus those who see Batman as a sort of right-wing ubermensch (either as praise or criticism) have failed to notice that even the Dark Knight needs limits on his power.

Coming up on Matter of Facts – Superhero week


Earlier this year a kids’ programme began airing on Pakistani TV with a novel twist on the superhero genre:

Burka Avenger stars a girls’ school teacher who dons a burka to combat a cast of Taliban-esque villains with a decidedly conservative view of the appropriate role of women in society (the show contains clear parallels to Malala Yousafzai, the young campaigner for girls’ education in Pakistan who was shot in the head by the Taliban). To fight these nemeses, Jiya, the star of the show, employs a novel form of martial arts that utilizes only books and pens. The message is clear: The pen is mightier than the sword.

On this blog, I try to cover both the serious and the amusing. This week’s topic is both.

Superheroes are big business. Since 2008, the highest-grossing film of the year has been a superhero film as often as not. The billions of pounds these films have made reflect the millions of people watching them, and with that comes significant cultural influence. The Burka Avenger is just the latest example of these characters being used to discuss a significant issue. And while there have been many foreign imitations of them, superheroes remain a fundamentally American totem and a means of spreading a distinctly American view of heroism.

Look out for posts on:

  • Which comic inspired electronic tagging
  • The first director to bring Batman to the screen
  • Superman’s religion
  • The real-life superheroes
  • The best improvisation ever
  • Batman and guns

Note of thanks:

As I’ve never read a comic book in my life, I’m only able to do this series because I’ve been able to pick the brains of someone who’s read literally thousands of them. So a big thanks to my good friend Sam Willis for providing quite a number of these facts.

Hungry yet obese (America week)

A study by Oxford University and Harvard Medical School looked at the weight of rough sleepers in Boston:

Researchers examined the body mass index (BMI) data of 5,632 homeless men and women in Boston, and found that nearly one-third of them were obese. They used the medical electronic records at 80 hospital and shelter sites for the homeless in Boston, using data from the Boston Health Care for the Homeless Program, one of the largest adult homeless study populations reported to date.

They found that just 1.6% of the homeless in the sample could be classed as ‘underweight’. Morbid obesity – where people are 50%-100% above their ideal body weight – was three times more common, with 5.6% of homeless adults classed as morbidly obese.
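For reference, BMI is simply weight in kilograms divided by the square of height in metres. The sketch below uses the standard WHO cut-offs (underweight below 18.5, obese at 30 or above); note that the study quoted above defines morbid obesity by ideal body weight rather than a BMI threshold, so the 40+ band here is only a rough stand-in, and the example figures are invented.

```python
# BMI = weight (kg) / height (m)^2, classified with standard WHO cut-offs.
# The 40+ "morbidly obese" band is a common proxy, not the study's own
# "50%-100% above ideal body weight" definition. Example numbers invented.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def category(value: float) -> str:
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    if value < 40:
        return "obese"
    return "morbidly obese"

example = bmi(95, 1.70)
print(f"BMI {example:.1f}: {category(example)}")  # BMI 32.9: obese
```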

As Wired explains:

The findings are the latest and most dramatic illustration of what’s called the “hunger-obesity paradox,” a term coined in 2005 by neurophysiologist Lawrence Scheier to describe the simultaneous presence of hunger and obesity.

Around that time, a vernacular sea change occurred, with “hunger” and its connotations of starvation replaced by “food insecure,” a term more descriptive of people who might consume enough raw calories but not enough nutrients.

The paradox fit with a general modern relationship in the United States between weight and wealth. Whereas obesity was once a sign of wealth, it now tracks with poverty. The poorer and less food-secure people are, the more likely they are to be overweight or obese.

Thus the waistlines of America’s homeless are indicative of the part that poverty has played in making the US the most obese nation on earth. There is a clear correlation between obesity and inequality. Eating healthily has become too expensive for many of the poorest Americans: between 2007 and 2011 the price of the healthiest foods increased at around twice the rate of energy-dense junk food. And what is more, many of the poorest neighbourhoods – and in fact 10% of the country – lie in ‘food deserts’, which are “urban census tracts where a significant proportion of people live more than a mile away from a grocery store and rural tracts where they live more than 10 miles away.”

America’s clapped-out constitution (America Week)


The Wright Brothers’ plane. Like the US constitution, it’s a miracle of invention that you’d be mad to use in the present day.

In America’s civil religion, the constitution is the sacred text and the Founding Fathers are the prophets. They even have temples, like Independence Hall in Philadelphia. Both the document and its authors are held up as paragons and models with huge rhetorical power.

Yet when a nation’s constitution prevents the rubbish in its capital city from being collected, that constitution is clearly broken. The sheer absurdity of what is happening in America at the moment is staggering: the government of the most powerful nation on earth has voted to spend more than it raises in taxes and now has to (but can’t) separately vote to borrow the money to plug the gap. This Washington Post article observes that “countries like Pakistan and Colombia have had civil wars, coups, financial crises, even defaults but never a government shutdown.” In fact, a Commonwealth guidance note on debt management for its developing members warns them that legislative involvement in specific lending decisions “adds a potentially cumbersome, time-consuming and overpoliticised step in the decision-making process when time is often of the essence for market borrowing” and that ideally “an annual borrowing limit is set consistently with the financing requirement implied by the annual budget.” The way the US handles its debt thus falls far short of what would be expected of any nation, let alone a superpower.

The reality is that debt management is far from being the only area where the US constitution is antiquated. What makes the US constitution so remarkable is that it is the first example of a supreme law created for a democratic nation. However, this is also the source of its weakness in the present day. You are not going to get things right on a first attempt, and since the constitution was written in 1787, we have learnt a lot more about how to govern a country.

While prototypes in other fields are likely to be replaced or substantially improved, the constitution is entrenched, and extraordinary hurdles must be cleared before it can be amended. So it remains riddled with problems that later constitutions have, at least to a certain extent, solved, such as:

  • It protects the wrong rights. The hallowed Bill of Rights is a strange document to 21st-century eyes, and not just because of the right to bear arms. It includes a right not to have soldiers garrisoned in your house but – unlike the ECHR – no protections for the right to marry, to receive an education or to live free from discrimination. Many individual decisions are also dodgy: for example, the Citizens United decision, which concluded that preventing unlimited corporate spending on advertising during elections was a breach of freedom of speech!
  • A lack of legislative accountability. It is often unclear to voters which branch of government is responsible for which outcome. So, for example, the Democrats were punished in the 2010 mid-terms for Barack Obama’s perceived failure to get the economy moving, even though it was Republicans in Congress who shrank his stimulus bill.
  • It politicises the judiciary, by combining an expansive role for the Supreme Court with the requirement of congressional approval for judicial appointments.
  • It’s undemocratic. Wyoming (population: 500,000) and California (population: 38,000,000) have the same representation in the US Senate.
  • It’s a vested interest’s dream. You get loads of opportunities to block measures you don’t like. Take the healthcare industry’s effort to block Obamacare: it had to be voted through by the House, get a supermajority in the Senate, not be vetoed by the president and survive a challenge at the Supreme Court – and even then Republican-controlled states have dragged their feet on implementing it.
  • It makes social problems harder to solve. This is related to the point above. Because there are so many ‘veto points’ in the American political system, it is hard to assemble a political coalition behind policies that might solve problems: universal healthcare is the obvious example. In fact, much social science research has suggested that the more veto points in a nation’s constitution, the higher its poverty rate will be.
  • Inability to resolve impasses democratically. A constitution heavy in veto points assumes that it is possible to default back to the status quo. This isn’t always possible, and then a dangerous impasse can result. The most tragic example in US history is the Civil War: slavery had to be either permitted or prohibited in states acceding to the union; there was no status quo to default back to. The Northern-dominated House and Southern-dominated Senate couldn’t agree on the matter, and so the two sides resolved it on the battlefield. In general, it seems that democracy fares better in parliamentary systems because they avoid this kind of impasse.

To move on from this Model-T constitution, it should be made easier to amend, so that it can evolve. Furthermore, it should stop being used as a normative standard: the fact that something is in the constitution doesn’t stop the right to bear arms being a stupid idea.