
Home/ New Media Ethics 2009 course/ Group items tagged Correlation


Weiye Loh

Evolutionary analysis shows languages obey few ordering rules

  • The authors of the new paper point out just how hard it is to study languages. We're aware of over 7,000 of them, and they vary significantly in complexity. There are a number of large language families that are likely derived from a single root, but a large number of languages don't slot easily into one of the major groups. Against that backdrop, even a set of simple structural decisions—does the noun or verb come first? where does the preposition go?—becomes dizzyingly complex, with different patterns apparent even within a single language tree.
  • Linguists, however, have been attempting to find order within the chaos. Noam Chomsky helped establish the Generative school of thought, which suggests that there must be some constraints to this madness, some rules that help make a language easier for children to pick up, and hence more likely to persist. Others have taken a statistical approach (the authors credit those inspired by Joseph Greenberg for this), looking for word-order rules that consistently correlate across language families. This approach has identified a handful of what may be language universals, but our uncertainty about language relationships can make it challenging to know when some of these correlations are simply derived from a common inheritance.
  • For anyone with a biology background, having traits shared through common inheritance should ring a bell. Evolutionary biologists have long been able to build family trees of related species, called phylogenetic trees. By figuring out what species have the most traits in common and grouping them together, it's possible to identify when certain features have evolved in the past. In recent years, the increase in computing power and DNA sequences to align has led to some very sophisticated phylogenetic software, which can analyze every possible tree and perform a Bayesian statistical analysis to figure out which trees are most likely to represent reality. By treating language features like subject-verb order as a trait, the authors were able to perform this sort of analysis on four different language families: 79 Indo-European languages, 130 Austronesian languages, 66 Bantu languages, and 26 Uto-Aztecan languages. Although we don't have a complete roster of the languages in those families, they include over 2,400 languages that have been evolving for a minimum of 4,000 years.
  • The results are bad news for universalists: "most observed functional dependencies between traits are lineage-specific rather than universal tendencies," according to the authors. The authors were able to identify 19 strong correlations between word order traits, but none of these appeared in all four families; only one of them appeared in more than two. Fifteen of them only occur in a single family. Specific predictions based on the Greenberg approach to linguistics also failed to hold up under the phylogenetic analysis. "Systematic linkages of traits are likely to be the rare exception rather than the rule," the authors conclude.
  • If universal features can't account for what we observe, what can? Common descent. "Cultural evolution is the primary factor that determines linguistic structure, with the current state of a linguistic system shaping and constraining future states."
  • it still leaves a lot of areas open for linguists to argue about. And the study did not build an exhaustive tree of any of the language families, in part because we probably don't have enough information to classify all of them at this point.
  • Still, it's hard to imagine any further details could overturn the gist of things, given how badly features failed to correlate across language families. And the work might be well received in some communities, since it provides an invitation to ask a fascinating question: given that there aren't obvious word order patterns across languages, how does the human brain do so well at learning the rules that are a peculiarity to any one of them?
    young children can easily learn to master more than one language in an astonishingly short period of time. This has led a number of linguists, most notably Noam Chomsky, to suggest that there might be language universals, common features of all languages that the human brain is attuned to, making learning easier; others have looked for statistical correlations between languages. Now, a team of cognitive scientists has teamed up with an evolutionary biologist to perform a phylogenetic analysis of language families, and the results suggest that when it comes to the way languages order key sentence components, there are no rules.
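The inheritance confound described in these annotations, where a trait correlation that looks universal is really inherited from a handful of common ancestors, can be sketched in a few lines. All language names and trait values below are hypothetical, invented purely to illustrate the statistical point; this is not real typological data.

```python
# Toy illustration of how common descent can masquerade as a
# cross-linguistic "universal". All names and values are hypothetical.

# Two binary word-order traits per language:
#   trait A: 1 if the verb precedes its object, else 0
#   trait B: 1 if adpositions precede the noun (prepositions), else 0
family_1 = {  # imagined descendants of one VO, prepositional ancestor
    "lang_1a": (1, 1),
    "lang_1b": (1, 1),
    "lang_1c": (1, 1),
}
family_2 = {  # imagined descendants of one OV, postpositional ancestor
    "lang_2a": (0, 0),
    "lang_2b": (0, 0),
    "lang_2c": (0, 0),
}

def agreement(langs):
    """Fraction of languages whose two traits agree."""
    return sum(a == b for a, b in langs.values()) / len(langs)

# Pooling all six languages makes the two traits look perfectly linked...
print(agreement({**family_1, **family_2}))  # 1.0

# ...but the sample really contains only two independent data points:
# the two ancestral states. A phylogenetic analysis instead asks whether
# the traits co-evolve along the branches of each family tree.
```

This is why the paper's Bayesian phylogenetic approach, which models trait changes on the tree rather than counting present-day languages as independent samples, can reach a different verdict than Greenberg-style cross-family tallies.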
Weiye Loh

Why Are the Rich So Good at the Internet? | Fast Company

  • It even suggests the existence of a tipping point, where Internet use takes off at a certain income level.
  • even among groups that own the necessary technology, less wealth equates to less (and less varied) Internet usage.
  • The report, an umbrella analysis of three Pew surveys conducted in 2009 and 2010, compares Internet use among American households in four different income brackets: less than $30,000 a year; $30,000-50,000; $50,000-75,000; and greater than $75,000. Respondents--more than 3,000 people participated--were asked a variety of questions about how often they used the Internet, and what sorts of services they took advantage of (such as email, online news, booking travel online, or health research).
  • As might be expected, the wealthier used the Internet more.
  • Almost 90% of the wealthiest respondents reported broadband access at home. Of those in the under-$30,000 households, that figure was only 40%. "I would expect some type of correlation," says Jansen. "But we controlled for community type--urban, rural, suburban--educational attainment, race, ethnicity, gender, and age." None was nearly so strongly correlated as income.
  • Age did have some effect, and rural regions were a good deal less wired
  • Once a modestly middle-class family buys a computer and Internet access, why is it that they spend less time researching products online than their wealthier counterparts, given that they have a tighter budget than the ultra-wealthy?
  • Jansen notes that for many questions Pew asked about Internet use, there appeared to be a tipping point somewhere in the $30,000-$50,000 range. Consider, for instance, the data on those who researched products online. Only 67% of lowest-income Internet users research products online. Make it over the hump into the $30,000-$50,000 bracket, though, and all of a sudden 81% of internet users do so--a jump of 14 points. But then as you climb the income ladder, the change in behavior begins to level out, just climbing a few percentage points with each bracket
  • "It would be interesting to look at what is going on at that particular income level," says Jansen, suggesting a potential tack for further research, "that seems to indicate a fairly robust use of technology and interest."
  • Jansen, like any careful researcher, cautions against confusing correlation with causation. It may be that people are using the web to make their fortunes, and not using their fortunes to surf the web.
    Pew Internet has released a report finding that income is the strongest predictor of whether, how often, and in what ways Americans use the web.
Weiye Loh

Freakonomics » Why Is Failure a Sign of a Healthy Economy? A Guest Post by Ti...

  • Governments often fall down on all three: they have a particular ideology and so push a single-minded policy; they bet big; and they don’t bother to evaluate the results too carefully, perhaps through overconfidence. But markets can fail badly too, and for much the same reason. Just think about the subprime crisis. It failed the same three tests. First, many big banks and insurance companies were taking similar bets at similar times, so that when subprime loans started to go bad, much of Wall Street started struggling simultaneously. Second, the bets were gigantic. Fancy derivatives such as credit default swaps and complex mortgage-backed securities were new, rapidly growing, and largely untested. And third, many investment bankers were being paid large bonuses on the assumption that their performance could be measured properly – and it couldn’t, because profitable-seeming bets concealed large risks.
  • a study by Kathy Fogel, Randall Morck, and Bernard Yeung, found statistical evidence that economies with more churn in the corporate sector also had faster economic growth. The relationship even seems causal: churn today is correlated with fast economic growth tomorrow. The real benefit of this creative destruction, say Fogel and her colleagues, is not the appearance of “rising stars” but the disappearance of old, inefficient companies. Failure is not only common and unpredictable, it’s healthy.
Weiye Loh

Does "Inclusion" Matter for Open Government? (The Answer Is, Very Much Indeed...

  • But in the context of the Open Government Partnership and the 70 or so countries that have already committed themselves to this or are in the process, I’m not sure that the world can afford to wait to see whether this correlation is direct, indirect or spurious, especially if we can recognize that in the world of OGP, the currency of accumulation and concentration is not raw economic wealth but rather raw political power.
  • in the same way as there appears to be an association between the rise of the Internet and increasing concentrations of wealth, one might anticipate that the rise of Internet enabled structures of government might be associated with the increasing concentration of political power in fewer and fewer hands, and particularly the hands of those most adept at manipulating the artifacts and symbols of the new Internet age.
  • I am struck by the fact that while the OGP over and over talks about the importance and value and need for Open Government, there is no similar or even partial call for Inclusive Government.  I’ve argued elsewhere how “Open”, in the absence of attention being paid to ensuring the pre-conditions for the broadest base of participation, will almost inevitably lead to the empowerment of the powerful. What I fear with the OGP is that by not paying even a modicum of attention to the issue of inclusion or inclusive development and participation, all of the idealism and energy that is displayed today in Brasilia is being directed towards the creation of the Governance equivalents of the Internet billionaires, whatever that might look like.
  • crowd sourced public policy
    alongside the rise of the Internet and the empowerment of the Internet generation have emerged the greatest inequalities of wealth and privilege that any of the increasingly Internet enabled economies/societies have experienced, at least since the Great Depression and perhaps since the beginnings of systematic economic record keeping. The association between the rise of inequality and the rise of the Internet has not yet been explained; it may simply be a coincidence, but somehow I'm doubtful, and we await a newer generation of rather more critical and less dewy-eyed economists to give us the models and explanations for this co-evolution.
Weiye Loh

Singapore M.D.: Confidence Goods 15

  • Mr Lee Seck Kay believes that "... doctors need to care about their looks; never mind if they are not handsome, but at least they should not give the impression that they are lackadaisical. It is a moral responsibility that many doctors tend to neglect, much to their detriment." (emphasis mine) Mr Anthony Goh's contribution is: "The doctor's personality and the way he conducts himself speak better than looks." Mr Javern Sim shares his experience and wisdom thus: "I have occasionally come across doctors who are more interested in getting the diagnosis and prescription of medicine over and done with, rather than communicating properly with their patients. It is imperative for doctors to be skilful not only on the treatment table, but also in terms of patient management and communication."
  • I would have thought that making the correct diagnosis and prescribing the appropriate medicine and letting the patient know the two constituted patient management and communication.
  • Why do the writers seem more hung up on how the doctors look or conduct themselves than on the quality of the medical care or advice, as if the clinical encounter was more a date than a consultation? My suspicion is that lacking the means or inclination to assess the quality of care, patients instead base their judgement of a doctor on things they can assess. It's a natural thing to do - it makes us feel we have control over the situation - but then how a doctor looks or behaves towards you may have very little correlation with the quality of care he provides. If patients choose to judge doctors on style rather than substance, then perhaps that is what they will get. The irony, of course, is that doctors too sometimes judge patients by their appearances...
    laymen tend to judge doctors based more on style than substance
Weiye Loh

The Creativity Crisis - Newsweek

  • The accepted definition of creativity is production of something original and useful, and that’s what’s reflected in the tests. There is never one right answer. To be creative requires divergent thinking (generating many unique ideas) and then convergent thinking (combining those ideas into the best result).
  • Torrance’s tasks, which have become the gold standard in creativity assessment, measure creativity perfectly. What’s shocking is how incredibly well Torrance’s creativity index predicted those kids’ creative accomplishments as adults.
  • The correlation to lifetime creative accomplishment was more than three times stronger for childhood creativity than childhood IQ.
  • there is one crucial difference between IQ and CQ scores. With intelligence, there is a phenomenon called the Flynn effect—each generation, scores go up about 10 points. Enriched environments are making kids smarter. With creativity, a reverse trend has just been identified and is being reported for the first time here: American creativity scores are falling.
  • creativity scores had been steadily rising, just like IQ scores, until 1990. Since then, creativity scores have consistently inched downward.
  • It is the scores of younger children in America—from kindergarten through sixth grade—for whom the decline is “most serious.”
  • It’s too early to determine conclusively why U.S. creativity scores are declining. One likely culprit is the number of hours kids now spend in front of the TV and playing videogames rather than engaging in creative activities. Another is the lack of creativity development in our schools. In effect, it’s left to the luck of the draw who becomes creative: there’s no concerted effort to nurture the creativity of all children.
  • Around the world, though, other countries are making creativity development a national priority.
  • In China there has been widespread education reform to extinguish the drill-and-kill teaching style. Instead, Chinese schools are adopting a problem-based learning approach.
  • When faculty of a major Chinese university asked Plucker to identify trends in American education, he described our focus on standardized curriculum, rote memorization, and nationalized testing.
  • Overwhelmed by curriculum standards, American teachers warn there’s no room in the day for a creativity class.
  • The age-old belief that the arts have a special claim to creativity is unfounded. When scholars gave creativity tasks to both engineering majors and music majors, their scores laid down on an identical spectrum, with the same high averages and standard deviations.
  • The argument that we can’t teach creativity because kids already have too much to learn is a false trade-off. Creativity isn’t about freedom from concrete facts. Rather, fact-finding and deep research are vital stages in the creative process.
  • The lore of pop psychology is that creativity occurs on the right side of the brain. But we now know that if you tried to be creative using only the right side of your brain, it’d be like living with ideas perpetually at the tip of your tongue, just beyond reach.
  • Creativity requires constant shifting, blender pulses of both divergent thinking and convergent thinking, to combine new information with old and forgotten ideas. Highly creative people are very good at marshaling their brains into bilateral mode, and the more creative they are, the more they dual-activate.
  • “Creativity can be taught,” says James C. Kaufman, professor at California State University, San Bernardino. What’s common about successful programs is they alternate maximum divergent thinking with bouts of intense convergent thinking, through several stages. Real improvement doesn’t happen in a weekend workshop. But when applied to the everyday process of work or school, brain function improves.
  • highly creative adults tended to grow up in families embodying opposites. Parents encouraged uniqueness, yet provided stability. They were highly responsive to kids’ needs, yet challenged kids to develop skills. This resulted in a sort of adaptability: in times of anxiousness, clear rules could reduce chaos—yet when kids were bored, they could seek change, too. In the space between anxiety and boredom was where creativity flourished.
  • highly creative adults frequently grew up with hardship. Hardship by itself doesn’t lead to creativity, but it does force kids to become more flexible—and flexibility helps with creativity.
  • In early childhood, distinct types of free play are associated with high creativity. Preschoolers who spend more time in role-play (acting out characters) have higher measures of creativity: voicing someone else’s point of view helps develop their ability to analyze situations from different perspectives. When playing alone, highly creative first graders may act out strong negative emotions: they’ll be angry, hostile, anguished.
  • In middle childhood, kids sometimes create paracosms—fantasies of entire alternative worlds. Kids revisit their paracosms repeatedly, sometimes for months, and even create languages spoken there. This type of play peaks at age 9 or 10, and it’s a very strong sign of future creativity.
  • From fourth grade on, creativity no longer occurs in a vacuum; researching and studying become an integral part of coming up with useful solutions. But this transition isn’t easy. As school stuffs more complex information into their heads, kids get overloaded, and creativity suffers. When creative children have a supportive teacher—someone tolerant of unconventional answers, occasional disruptions, or detours of curiosity—they tend to excel. When they don’t, they tend to underperform and drop out of high school or don’t finish college at high rates.
  • They’re quitting because they’re discouraged and bored, not because they’re dark, depressed, anxious, or neurotic. It’s a myth that creative people have these traits. (Those traits actually shut down creativity; they make people less open to experience and less interested in novelty.) Rather, creative people, for the most part, exhibit active moods and positive affect. They’re not particularly happy—contentment is a kind of complacency creative people rarely have. But they’re engaged, motivated, and open to the world.
  • A similar study of 1,500 middle schoolers found that those high in creative self-efficacy had more confidence about their future and ability to succeed. They were sure that their ability to come up with alternatives would aid them, no matter what problems would arise.
    The Creativity Crisis: For the first time, research shows that American creativity is declining. What went wrong - and how we can fix it.
juliet huang

Go slow with Net law

Article: Go slow with tech law. Published: 23 Aug 2009. Source: Straits Times. Background: When Singapore signed a free trade agreement with the USA in 2003, intellectual property rights was a ...

sim lim square

started by juliet huang on 26 Aug 09 no follow-up yet
Weiye Loh

Lies, damned lies, and impact factors - The Dayside

  • a journal's impact factor for a given year is the average number of citations received by papers published in the journal during the two preceding years. Letters to the editor, editorials, book reviews, and other non-papers are excluded from the impact factor calculation.
  • Review papers that don't necessarily contain new scientific knowledge yet provide useful overviews garner lots of citations. Five of the top 10 perennially highest-impact-factor journals, including the top four, are review journals.
  • Now suppose you're a journal editor or publisher. In these tough financial times, cash-strapped libraries use impact factors to determine which subscriptions to keep and which to cancel. How would you raise your journal's impact factor? Publishing fewer and better papers is one method. Or you could run more review articles. But, as a paper posted recently on arXiv describes, there's another option: You can manipulate the impact factor by publishing your own papers that cite your own journal.
  • Douglas Arnold and Kristine Fowler. "Nefarious Numbers" is the title they chose for the paper. Its abstract reads as follows: We investigate the journal impact factor, focusing on the applied mathematics category. We demonstrate that significant manipulation of the impact factor is being carried out by the editors of some journals and that the impact factor gives a very inaccurate view of journal quality, which is poorly correlated with expert opinion.
    Lies, damned lies, and impact factors
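The impact factor definition in the first annotation, and the self-citation manipulation described in the third, can be made concrete with a short sketch. The journal, the citation counts, and the years below are all invented for illustration, assuming the standard two-year window described above.

```python
# Minimal sketch of the two-year impact factor described above.
# All numbers are hypothetical.

def impact_factor(citations, citable_items):
    """Citations received in year Y by items the journal published in
    years Y-1 and Y-2, divided by the number of citable items (papers,
    not letters, editorials, or book reviews) from those two years."""
    return citations / citable_items

# An imagined journal publishes 200 papers over 2009-2010, which are
# cited 500 times in 2011:
print(impact_factor(500, 200))        # 2.5

# Add 100 strategically placed self-citations in the journal's own
# pages, and the impact factor jumps by half a point:
print(impact_factor(500 + 100, 200))  # 3.0
```

The second call shows why the manipulation Arnold and Fowler document is attractive: the denominator is fixed, so every extra self-citation feeds straight into the numerator.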
Weiye Loh

Is Pure Altruism Possible? - NYTimes.com

  • It’s undeniable that people sometimes act in a way that benefits others, but it may seem that they always get something in return — at the very least, the satisfaction of having their desire to help fulfilled.
  • Contemporary discussions of altruism quickly turn to evolutionary explanations. Reciprocal altruism and kin selection are the two main theories. According to reciprocal altruism, evolution favors organisms that sacrifice their good for others in order to gain a favor in return. Kin selection — the famous “selfish gene” theory popularized by Richard Dawkins — says that an individual who behaves altruistically towards others who share its genes will tend to reproduce those genes. Organisms may be altruistic; genes are selfish. The feeling that loving your children more than yourself is hard-wired lends plausibility to the theory of kin selection.
  • The defect of reciprocal altruism is clear. If a person acts to benefit another in the expectation that the favor will be returned, the natural response is: “That’s not altruism!” Pure altruism, we think, requires a person to sacrifice for another without consideration of personal gain. Doing good for another person because something’s in it for the do-er is the very opposite of what we have in mind. Kin selection does better by allowing that organisms may genuinely sacrifice their interests for another, but it fails to explain why they sometimes do so for those with whom they share no genes.
  • When we ask whether human beings are altruistic, we want to know about their motives or intentions. Biological altruism explains how unselfish behavior might have evolved but, as Frans de Waal suggested in his column in The Stone on Sunday, it implies nothing about the motives or intentions of the agent: after all, birds and bats and bees can act altruistically. This fact helps to explain why, despite these evolutionary theories, the view that people never intentionally act to benefit others except to obtain some good for themselves still possesses a powerful lure over our thinking.
  • The lure of this view — egoism — has two sources, one psychological, the other logical. Consider first the psychological. One reason people deny that altruism exists is that, looking inward, they doubt the purity of their own motives. We know that even when we appear to act unselfishly, other reasons for our behavior often rear their heads: the prospect of a future favor, the boost to reputation, or simply the good feeling that comes from appearing to act unselfishly. As Kant and Freud observed, people’s true motives may be hidden, even (or perhaps especially) from themselves. Even if we think we’re acting solely to further another person’s good, that might not be the real reason. (There might be no single “real reason” — actions can have multiple motives.)
  • So the psychological lure of egoism as a theory of human action is partly explained by a certain humility or skepticism people have about their own or others’ motives
  • There’s also a less flattering reason: denying the possibility of pure altruism provides a convenient excuse for selfish behavior.
  • The logical lure of egoism is different: the view seems impossible to disprove. No matter how altruistic a person appears to be, it’s possible to conceive of her motive in egoistic terms.
  • The impossibility of disproving egoism may sound like a virtue of the theory, but, as philosophers of science know, it’s really a fatal drawback. A theory that purports to tell us something about the world, as egoism does, should be falsifiable. Not false, of course, but capable of being tested and thus proved false. If every state of affairs is compatible with egoism, then egoism doesn’t tell us anything distinctive about how things are.
  • There is ambiguity in the concepts of desire and the satisfaction of desire. If people possess altruistic motives, then they sometimes act to benefit others without the prospect of gain to themselves. In other words, they desire the good of others for its own sake, not simply as a means to their own satisfaction.
  • Still, when our desires are satisfied we normally experience satisfaction; we feel good when we do good. But that doesn’t mean we do good only in order to get that “warm glow” — that our true incentives are self-interested (as economists tend to claim). Indeed, as de Waal argues, if we didn’t desire the good of others for its own sake, then attaining it wouldn’t produce the warm glow.
  • Common sense tells us that some people are more altruistic than others. Egoism’s claim that these differences are illusory — that deep down, everybody acts only to further their own interests — contradicts our observations and deep-seated human practices of moral evaluation.
  • At the same time, we may notice that generous people don’t necessarily suffer more or flourish less than those who are more self-interested.
  • The point is rather that the kind of altruism we ought to encourage, and probably the only kind with staying power, is satisfying to those who practice it. Studies of rescuers show that they don’t believe their behavior is extraordinary; they feel they must do what they do, because it’s just part of who they are. The same holds for more common, less newsworthy acts — working in soup kitchens, taking pets to people in nursing homes, helping strangers find their way, being neighborly. People who act in these ways believe that they ought to help others, but they also want to help, because doing so affirms who they are and want to be and the kind of world they want to exist. As Prof. Neera Badhwar has argued, their identity is tied up with their values, thus tying self-interest and altruism together. The correlation between doing good and feeling good is not inevitable — inevitability lands us again with that empty, unfalsifiable egoism — but it is more than incidental.
  • Altruists should not be confused with people who automatically sacrifice their own interests for others.
    Is Pure Altruism Possible?
Weiye Loh

Angry Doctor: Standing up for the 'godless'

  • THE Saturday Special report last week ('God wants youth') stated that religious groups were determined not to lose a generation to godlessness, especially now with youth gangs in the news. It also noted that what is at stake is the potential of losing the youth to cynicism, violence and even fanaticism. These remarks suggest a prejudice against those without any religious affiliation.
  • As a society for non-believers, the Humanist Society (Singapore) disagrees. The reality in societies everywhere is that there is no difference between non-believing youth and the religious youth in their propensity towards violence. There are actually higher levels of violence among those who identify themselves as 'religious' or 'faithful'. As for cynicism, there is certainly no correlation between non-belief and a cynical attitude. Many non-believers are involved in the world around them, trying to make it a more humane, compassionate place.
  • The claims "there is no difference between non-believing youth and the religious youth in their propensity towards violence" and "there are actually higher levels of violence among those who identify themselves as 'religious' or 'faithful'" were contradictory
  • the original letter submitted by Mr Tobin read (emphasis):"The reality in societies around the world is that there is either no difference between non-believing youth and the religious youth in their propensity toward violence or there is actually higher levels of violence among those who identify themselves as "religious" or "faithful." [See, for instance, the studies cited in Michael Shermer’s book “The Science of Good and Evil” 2004 pp. 235-236]"
Weiye Loh

The Inequality That Matters - Tyler Cowen - The American Interest Magazine

  • most of the worries about income inequality are bogus, but some are probably better grounded, and even more serious, than many of their heralds realize.
  • In terms of immediate political stability, there is less to the income inequality issue than meets the eye. Most analyses of income inequality neglect two major points. First, the inequality of personal well-being is sharply down over the past hundred years and perhaps over the past twenty years as well. Bill Gates is much, much richer than I am, yet it is not obvious that he is much happier if, indeed, he is happier at all. I have access to penicillin, air travel, good cheap food, the Internet and virtually all of the technical innovations that Gates does. Like the vast majority of Americans, I have access to some important new pharmaceuticals, such as statins to protect against heart disease. To be sure, Gates receives the very best care from the world’s top doctors, but our health outcomes are in the same ballpark. I don’t have a private jet or take luxury vacations, and—I think it is fair to say—my house is much smaller than his. I can’t meet with the world’s elite on demand. Still, by broad historical standards, what I share with Bill Gates is far more significant than what I don’t share with him.
  • when average people read about or see income inequality, they don’t feel the moral outrage that radiates from the more passionate egalitarian quarters of society. Instead, they think their lives are pretty good and that they either earned through hard work or lucked into a healthy share of the American dream.
  • ...35 more annotations...
  • This is why, for example, large numbers of Americans oppose the idea of an estate tax even though the current form of the tax, slated to return in 2011, is very unlikely to affect them or their estates. In narrowly self-interested terms, that view may be irrational, but most Americans are unwilling to frame national issues in terms of rich versus poor. There’s a great deal of hostility toward various government bailouts, but the idea of “undeserving” recipients is the key factor in those feelings. Resentment against Wall Street gamesters hasn’t spilled over much into resentment against the wealthy more generally. The bailout for General Motors’ labor unions wasn’t so popular either—again, obviously not because of any bias against the wealthy but because a basic sense of fairness was violated. As of November 2010, congressional Democrats are of a mixed mind as to whether the Bush tax cuts should expire for those whose annual income exceeds $250,000; that is in large part because their constituents bear no animus toward rich people, only toward undeservedly rich people.
  • envy is usually local. At least in the United States, most economic resentment is not directed toward billionaires or high-roller financiers—not even corrupt ones. It’s directed at the guy down the hall who got a bigger raise. It’s directed at the husband of your wife’s sister, because the brand of beer he stocks costs $3 a case more than yours, and so on. That’s another reason why a lot of people aren’t so bothered by income or wealth inequality at the macro level. Most of us don’t compare ourselves to billionaires. Gore Vidal put it honestly: “Whenever a friend succeeds, a little something in me dies.”
  • Occasionally the cynic in me wonders why so many relatively well-off intellectuals lead the egalitarian charge against the privileges of the wealthy. One group has the status currency of money and the other has the status currency of intellect, so might they be competing for overall social regard? The high status of the wealthy in America, or for that matter the high status of celebrities, seems to bother our intellectual class most. That class composes a very small group, however, so the upshot is that growing income inequality won’t necessarily have major political implications at the macro level.
  • All that said, income inequality does matter—for both politics and the economy.
  • The numbers are clear: Income inequality has been rising in the United States, especially at the very top. The data show a big difference between two quite separate issues, namely income growth at the very top of the distribution and greater inequality throughout the distribution. The first trend is much more pronounced than the second, although the two are often confused.
  • When it comes to the first trend, the share of pre-tax income earned by the richest 1 percent of earners has increased from about 8 percent in 1974 to more than 18 percent in 2007. Furthermore, the richest 0.01 percent (the 15,000 or so richest families) had a share of less than 1 percent in 1974 but more than 6 percent of national income in 2007. As noted, those figures are from pre-tax income, so don’t look to the George W. Bush tax cuts to explain the pattern. Furthermore, these gains have been sustained and have evolved over many years, rather than coming in one or two small bursts between 1974 and today.1
  • At the same time, wage growth for the median earner has slowed since 1973. But that slower wage growth has afflicted large numbers of Americans, and it is conceptually distinct from the higher relative share of top income earners. For instance, if you take the 1979–2005 period, the average incomes of the bottom fifth of households increased only 6 percent while the incomes of the middle quintile rose by 21 percent. That’s a widening of the spread of incomes, but it’s not so drastic compared to the explosive gains at the very top.
  • The broader change in income distribution, the one occurring beneath the very top earners, can be deconstructed in a manner that makes nearly all of it look harmless. For instance, there is usually greater inequality of income among both older people and the more highly educated, if only because there is more time and more room for fortunes to vary. Since America is becoming both older and more highly educated, our measured income inequality will increase pretty much by demographic fiat. Economist Thomas Lemieux at the University of British Columbia estimates that these demographic effects explain three-quarters of the observed rise in income inequality for men, and even more for women.2
  • Attacking the problem from a different angle, other economists are challenging whether there is much growth in inequality at all below the super-rich. For instance, real incomes are measured using a common price index, yet poorer people are more likely to shop at discount outlets like Wal-Mart, which have seen big price drops over the past twenty years.3 Once we take this behavior into account, it is unclear whether the real income gaps between the poor and middle class have been widening much at all. Robert J. Gordon, an economist from Northwestern University who is hardly known as a right-wing apologist, wrote in a recent paper that “there was no increase of inequality after 1993 in the bottom 99 percent of the population”, and that whatever overall change there was “can be entirely explained by the behavior of income in the top 1 percent.”4
  • And so we come again to the gains of the top earners, clearly the big story told by the data. It’s worth noting that over this same period of time, inequality of work hours increased too. The top earners worked a lot more and most other Americans worked somewhat less. That’s another reason why high earners don’t occasion more resentment: Many people understand how hard they have to work to get there. It also seems that most of the income gains of the top earners were related to performance pay—bonuses, in other words—and not wildly out-of-whack yearly salaries.5
  • It is also the case that any society with a lot of “threshold earners” is likely to experience growing income inequality. A threshold earner is someone who seeks to earn a certain amount of money and no more. If wages go up, that person will respond by seeking less work or by working less hard or less often. That person simply wants to “get by” in terms of absolute earning power in order to experience other gains in the form of leisure—whether spending time with friends and family, walking in the woods and so on. Luck aside, that person’s income will never rise much above the threshold.
  • The funny thing is this: For years, many cultural critics in and of the United States have been telling us that Americans should behave more like threshold earners. We should be less harried, more interested in nurturing friendships, and more interested in the non-commercial sphere of life. That may well be good advice. Many studies suggest that above a certain level more money brings only marginal increments of happiness. What isn’t so widely advertised is that those same critics have basically been telling us, without realizing it, that we should be acting in such a manner as to increase measured income inequality. Not only is high inequality an inevitable concomitant of human diversity, but growing income inequality may be, too, if lots of us take the kind of advice that will make us happier.
  • Why is the top 1 percent doing so well?
  • Steven N. Kaplan and Joshua Rauh have recently provided a detailed estimation of particular American incomes.6 Their data do not comprise the entire U.S. population, but from partial financial records they find a very strong role for the financial sector in driving the trend toward income concentration at the top. For instance, for 2004, nonfinancial executives of publicly traded companies accounted for less than 6 percent of the top 0.01 percent income bracket. In that same year, the top 25 hedge fund managers combined appear to have earned more than all of the CEOs from the entire S&P 500. The number of Wall Street investors earning more than $100 million a year was nine times higher than the public company executives earning that amount. The authors also relate that they shared their estimates with a former U.S. Secretary of the Treasury, one who also has a Wall Street background. He thought their estimates of earnings in the financial sector were, if anything, understated.
  • Many of the other high earners are also connected to finance. After Wall Street, Kaplan and Rauh identify the legal sector as a contributor to the growing spread in earnings at the top. Yet many high-earning lawyers are doing financial deals, so a lot of the income generated through legal activity is rooted in finance. Other lawyers are defending corporations against lawsuits, filing lawsuits or helping corporations deal with complex regulations. The returns to these activities are an artifact of the growing complexity of the law and government growth rather than a tale of markets per se. Finance aside, there isn’t much of a story of market failure here, even if we don’t find the results aesthetically appealing.
  • When it comes to professional athletes and celebrities, there isn’t much of a mystery as to what has happened. Tiger Woods earns much more, even adjusting for inflation, than Arnold Palmer ever did. J.K. Rowling, the first billionaire author, earns much more than did Charles Dickens. These high incomes come, on balance, from the greater reach of modern communications and marketing. Kids all over the world read about Harry Potter. There is more purchasing power to spend on children’s books and, indeed, on culture and celebrities more generally. For high-earning celebrities, hardly anyone finds these earnings so morally objectionable as to suggest that they be politically actionable. Cultural critics can complain that good schoolteachers earn too little, and they may be right, but that does not make celebrities into political targets. They’re too popular. It’s also pretty clear that most of them work hard to earn their money, by persuading fans to buy or otherwise support their product. Most of these individuals do not come from elite or extremely privileged backgrounds, either. They worked their way to the top, and even if Rowling is not an author for the ages, her books tapped into the spirit of their time in a special way. We may or may not wish to tax the wealthy, including wealthy celebrities, at higher rates, but there is no need to “cure” the structural causes of higher celebrity incomes.
  • to be sure, the high incomes in finance should give us all pause.
  • The first factor driving high returns is sometimes called by practitioners “going short on volatility.” Sometimes it is called “negative skewness.” In plain English, this means that some investors opt for a strategy of betting against big, unexpected moves in market prices. Most of the time investors will do well by this strategy, since big, unexpected moves are outliers by definition. Traders will earn above-average returns in good times. In bad times they won’t suffer fully when catastrophic returns come in, as sooner or later is bound to happen, because the downside of these bets is partly socialized onto the Treasury, the Federal Reserve and, of course, the taxpayers and the unemployed.
  • if you bet against unlikely events, most of the time you will look smart and have the money to validate the appearance. Periodically, however, you will look very bad. Does that kind of pattern sound familiar? It happens in finance, too. Betting against a big decline in home prices is analogous to betting against the Wizards. Every now and then such a bet will blow up in your face, though in most years that trading activity will generate above-average profits and big bonuses for the traders and CEOs.
  • To this mix we can add the fact that many money managers are investing other people’s money. If you plan to stay with an investment bank for ten years or less, most of the people playing this investing strategy will make out very well most of the time. Everyone’s time horizon is a bit limited and you will bring in some nice years of extra returns and reap nice bonuses. And let’s say the whole thing does blow up in your face? What’s the worst that can happen? Your bosses fire you, but you will still have millions in the bank and that MBA from Harvard or Wharton. For the people actually investing the money, there’s barely any downside risk other than having to quit the party early. Furthermore, if everyone else made more or less the same mistake (very surprising major events, such as a busted housing market, affect virtually everybody), you’re hardly disgraced. You might even get rehired at another investment bank, or maybe a hedge fund, within months or even weeks.
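The payoff pattern described in these annotations can be sketched with a toy simulation. The parameters below (a 15% gain in quiet years, a 90% loss in a 1-in-20 tail year, a ten-year career) are illustrative assumptions of mine, not figures from the essay:

```python
import random

random.seed(0)

# Stylized "short volatility" payoff; all parameters are assumed, not
# calibrated to any real trading strategy.
QUIET_RETURN = 0.15   # premium collected in a normal year
CRASH_RETURN = -0.90  # loss when the rare tail event finally hits
CRASH_PROB = 0.05     # chance of a tail event in any given year

def career(years=10):
    """Cumulative capital multiple over one trader's ten-year run."""
    capital = 1.0
    for _ in range(years):
        hit = random.random() < CRASH_PROB
        capital *= 1 + (CRASH_RETURN if hit else QUIET_RETURN)
    return capital

outcomes = [career() for _ in range(100_000)]
lucky = sum(1 for c in outcomes if c > 1) / len(outcomes)
avg = sum(outcomes) / len(outcomes)
print(f"share of careers ending ahead: {lucky:.0%}")
print(f"average ending capital across all careers: {avg:.2f}x")
```

Under these assumptions roughly six in ten careers never see a blowup and end comfortably ahead, which is exactly why the strategy looks smart most of the time; and since the trader's personal downside is capped while crash losses are partly socialized, the gamble remains attractive even when it eventually detonates.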
  • Moreover, smart shareholders will acquiesce to or even encourage these gambles. They gain on the upside, while the downside, past the point of bankruptcy, is borne by the firm’s creditors. And will the bondholders object? Well, they might have a difficult time monitoring the internal trading operations of financial institutions. Of course, the firm’s trading book cannot be open to competitors, and that means it cannot be open to bondholders (or even most shareholders) either. So what, exactly, will they have in hand to object to?
  • Perhaps more important, government bailouts minimize the damage to creditors on the downside. Neither the Treasury nor the Fed allowed creditors to take any losses from the collapse of the major banks during the financial crisis. The U.S. government guaranteed these loans, either explicitly or implicitly. Guaranteeing the debt also encourages equity holders to take more risk. While current bailouts have not in general maintained equity values, and while share prices have often fallen to near zero following the bust of a major bank, the bailouts still give the bank a lifeline. Instead of the bank being destroyed, sometimes those equity prices do climb back out of the hole. This is true of the major surviving banks in the United States, and even AIG is paying back its bailout. For better or worse, we’re handing out free options on recovery, and that encourages banks to take more risk in the first place.
  • there is an unholy dynamic of short-term trading and investing, backed up by bailouts and risk reduction from the government and the Federal Reserve. This is not good. “Going short on volatility” is a dangerous strategy from a social point of view. For one thing, in so-called normal times, the finance sector attracts a big chunk of the smartest, most hard-working and most talented individuals. That represents a huge human capital opportunity cost to society and the economy at large. But more immediate and more important, it means that banks take far too many risks and go way out on a limb, often in correlated fashion. When their bets turn sour, as they did in 2007–09, everyone else pays the price.
  • And it’s not just the taxpayer cost of the bailout that stings. The financial disruption ends up throwing a lot of people out of work down the economic food chain, often for long periods. Furthermore, the Federal Reserve System has recapitalized major U.S. banks by paying interest on bank reserves and by keeping an unusually high interest rate spread, which allows banks to borrow short from Treasury at near-zero rates and invest in other higher-yielding assets and earn back lots of money rather quickly. In essence, we’re allowing banks to earn their way back by arbitraging interest rate spreads against the U.S. government. This is rarely called a bailout and it doesn’t count as a normal budget item, but it is a bailout nonetheless. This type of implicit bailout brings high social costs by slowing down economic recovery (the interest rate spreads require tight monetary policy) and by redistributing income from the Treasury to the major banks.
  • the “going short on volatility” strategy increases income inequality. In normal years the financial sector is flush with cash and high earnings. In implosion years a lot of the losses are borne by other sectors of society. In other words, financial crisis begets income inequality. Despite being conceptually distinct phenomena, the political economy of income inequality is, in part, the political economy of finance. Simon Johnson tabulates the numbers nicely: From 1973 to 1985, the financial sector never earned more than 16 percent of domestic corporate profits. In 1986, that figure reached 19 percent. In the 1990s, it oscillated between 21 percent and 30 percent, higher than it had ever been in the postwar period. This decade, it reached 41 percent. Pay rose just as dramatically. From 1948 to 1982, average compensation in the financial sector ranged between 99 percent and 108 percent of the average for all domestic private industries. From 1983, it shot upward, reaching 181 percent in 2007.7
  • There’s a second reason why the financial sector abets income inequality: the “moving first” issue. Let’s say that some news hits the market and that traders interpret this news at different speeds. One trader figures out what the news means in a second, while the other traders require five seconds. Still other traders require an entire day or maybe even a month to figure things out. The early traders earn the extra money. They buy the proper assets early, at the lower prices, and reap most of the gains when the other, later traders pile on. Similarly, if you buy into a successful tech company in the early stages, you are “moving first” in a very effective manner, and you will capture most of the gains if that company hits it big.
  • The moving-first phenomenon sums to a “winner-take-all” market. Only some relatively small number of traders, sometimes just one trader, can be first. Those who are first will make far more than those who are fourth or fifth. This difference will persist, even if those who are fourth come pretty close to competing with those who are first. In this context, first is first and it doesn’t matter much whether those who come in fourth pile on a month, a minute or a fraction of a second later. Those who bought (or sold, as the case may be) first have captured and locked in most of the available gains. Since gains are concentrated among the early winners, and the closeness of the runner-ups doesn’t so much matter for income distribution, asset-market trading thus encourages the ongoing concentration of wealth. Many investors make lots of mistakes and lose their money, but each year brings a new bunch of projects that can turn the early investors and traders into very wealthy individuals.
  • These two features of the problem—“going short on volatility” and “getting there first”—are related. Let’s say that Goldman Sachs regularly secures a lot of the best and quickest trades, whether because of its quality analysis, inside connections or high-frequency trading apparatus (it has all three). It builds up a treasure chest of profits and continues to hire very sharp traders and to receive valuable information. Those profits allow it to make “short on volatility” bets faster than anyone else, because if it messes up, it still has a large enough buffer to pad losses. This increases the odds that Goldman will repeatedly pull in spectacular profits.
  • Still, every now and then Goldman will go bust, or would go bust if not for government bailouts. But the odds are in any given year that it won’t because of the advantages it and other big banks have. It’s as if the major banks have tapped a hole in the social till and they are drinking from it with a straw. In any given year, this practice may seem tolerable—didn’t the bank earn the money fair and square by a series of fairly normal looking trades? Yet over time this situation will corrode productivity, because what the banks do bears almost no resemblance to a process of getting capital into the hands of those who can make most efficient use of it. And it leads to periodic financial explosions. That, in short, is the real problem of income inequality we face today. It’s what causes the inequality at the very top of the earning pyramid that has dangerous implications for the economy as a whole.
  • What about controlling bank risk-taking directly with tight government oversight? That is not practical. There are more ways for banks to take risks than even knowledgeable regulators can possibly control; it just isn’t that easy to oversee a balance sheet with hundreds of billions of dollars on it, especially when short-term positions are wound down before quarterly inspections. It’s also not clear how well regulators can identify risky assets. Some of the worst excesses of the financial crisis were grounded in mortgage-backed assets—a very traditional function of banks—not exotic derivatives trading strategies. Virtually any asset position can be used to bet long odds, one way or another. It is naive to think that underpaid, undertrained regulators can keep up with financial traders, especially when the latter stand to earn billions by circumventing the intent of regulations while remaining within the letter of the law.
  • For the time being, we need to accept the possibility that the financial sector has learned how to game the American (and UK-based) system of state capitalism. It’s no longer obvious that the system is stable at a macro level, and extreme income inequality at the top has been one result of that imbalance. Income inequality is a symptom, however, rather than a cause of the real problem. The root cause of income inequality, viewed in the most general terms, is extreme human ingenuity, albeit of a perverse kind. That is why it is so hard to control.
  • Another root cause of growing inequality is that the modern world, by so limiting our downside risk, makes extreme risk-taking all too comfortable and easy. More risk-taking will mean more inequality, sooner or later, because winners always emerge from risk-taking. Yet bankers who take bad risks (provided those risks are legal) simply do not end up with bad outcomes in any absolute sense. They still have millions in the bank, lots of human capital and plenty of social status. We’re not going to bring back torture, trial by ordeal or debtors’ prisons, nor should we. Yet the threat of impoverishment and disgrace no longer looms the way it once did, so we no longer can constrain excess financial risk-taking. It’s too soft and cushy a world.
  • Why don’t we simply eliminate the safety net for clueless or unlucky risk-takers so that losses equal gains overall? That’s a good idea in principle, but it is hard to put into practice. Once a financial crisis arrives, politicians will seek to limit the damage, and that means they will bail out major financial institutions. Had we not passed TARP and related policies, the United States probably would have faced unemployment rates of 25 percent or higher, as in the Great Depression. The political consequences would not have been pretty. Bank bailouts may sound quite interventionist, and indeed they are, but in relative terms they probably were the most libertarian policy we had on tap. It meant big one-time expenses, but, for the most part, it kept government out of the real economy (the General Motors bailout aside).
  • We probably don’t have any solution to the hazards created by our financial sector, not because plutocrats are preventing our political system from adopting appropriate remedies, but because we don’t know what those remedies are. Yet neither is another crisis immediately upon us. The underlying dynamic favors excess risk-taking, but banks at the current moment fear the scrutiny of regulators and the public and so are playing it fairly safe. They are sitting on money rather than lending it out. The biggest risk today is how few parties will take risks, and, in part, the caution of banks is driving our current protracted economic slowdown. According to this view, the long run will bring another financial crisis once moods pick up and external scrutiny weakens, but that day of reckoning is still some ways off.
  • Is the overall picture a shame? Yes. Is it distorting resource distribution and productivity in the meantime? Yes. Will it again bring our economy to its knees? Probably. Maybe that’s simply the price of modern society. Income inequality will likely continue to rise and we will search in vain for the appropriate political remedies for our underlying problems.
Weiye Loh

The Problem with Climate Change | the kent ridge common - 0 views

  • what is climate change? From a scientific point of view, it is simply a statistical change in atmospheric variables (temperature, precipitation, humidity etc). It has been occurring ever since the Earth came into existence, far before humans even set foot on the planet: our climate has been fluctuating between warm periods and ice ages, with further variations within. In fact, we are living in a warm interglacial period in the middle of an ice age.
  • Global warming has often been portrayed in apocalyptic tones, whether from the mouth of the media or environmental groups: the daily news tell of natural disasters happening at a frightening pace, of crop failures due to strange weather, of mass extinctions and coral die-outs. When the devastating tsunami struck Southeast Asia years ago, some said it was the wrath of God against human mistreatment of the environment; when hurricane Katrina dealt out a catastrophe, others said it was because of (America’s) failure to deal with climate change. Science gives the figures and trends, and people take these to extremes.
  • One immediate problem with blaming climate change for every weather-related disaster or phenomenon is that it reduces humans’ responsibility for mitigating or preventing it. If natural disasters are already, as their name suggests, natural, adding the tag ‘global warming’ or ‘climate change’ emphasizes the dominance of natural forces, and our inability to do anything about it. Surely, humans cannot undo climate change? Even at Cancun, amid the carbon cuts that have been promised, questions are being brought up on whether they are sufficient to reverse our actions and ‘save’ the planet. Yet the talk about this remote, omnipotent force known as climate change obscures the fact that we can, and have always been, thinking of ways to reduce the impact of natural hazards. Forecasting, building better infrastructure and coordinating more efficient responses – all these are far preferable to wading in woe. For example, we will do better at preventing floods in Singapore by tackling the problems directly rather than singing in praise of God.
  • ...5 more annotations...
  • However, a greater concern lies in the notion of climate change itself. Climate change is in essence one kind of nature-society relationship, in which humans influence the climate through greenhouse gas (particularly CO2) emissions, and the climate strikes back by heating up and going crazy at times. This can be further simplified into a battle between humans and CO2: reducing CO2 guards against climate change, and increasing it aggravates the consequences. This view is anchored in scientists’ recommendation that a ‘safe’ level of CO2 should be at 350 parts per million (ppm) instead of the current 390. Already, the need to reduce CO2 is understood, as is evident in the push for greener fuels, more efficient means of production, the proliferation of ‘green’ products and companies, and most recently, the Cancun talks.
  • So can there be anything wrong with reducing CO2? No, there isn’t, but singling out CO2 as the culprit of climate change or of the environmental problems we face prevents us from looking within. What do I mean? The enemy, CO2, is an ‘other’, an externality produced by our economic systems but never an inherent component of the systems. Thus, we can declare war on the gas or on climate change without taking a step back and questioning: is there anything wrong with the way we develop? Take Singapore for example: the government pledged to reduce carbon emissions by 16% relative to ‘business as usual’ levels, which says nothing about how ‘business’ is going to be changed other than producing lower carbon emissions (in fact, it is questionable whether CO2 levels will decrease at all, as ‘business as usual’ projections assume steadily increasing CO2 emissions each year). With the development of green technologies, the decrease in carbon emissions will mainly be brought about by increased energy efficiency and a switch to alternative fuels (including the insidious nuclear energy).
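The parenthetical point, that a cut measured against a growing baseline can still mean rising absolute emissions, is easy to check with assumed numbers (the 3% growth rate and ten-year horizon are my illustrative assumptions; only the 16% figure comes from the annotation):

```python
# A pledge of "16% below business-as-usual" measured against a growing
# baseline; base-year emissions are indexed to 100.
base_year = 100.0
bau_growth = 0.03     # assumed annual BAU emissions growth
years = 10

bau_future = base_year * (1 + bau_growth) ** years
pledged = bau_future * (1 - 0.16)     # 16% below the BAU projection
print(f"BAU emissions after {years} years: {bau_future:.1f}")
print(f"pledged level (16% below BAU):    {pledged:.1f}")
print(f"above today's level? {pledged > base_year}")
```

Under these assumptions the "reduced" figure still sits about 13% above today's emissions, so such a pledge is fully consistent with absolute emissions continuing to rise.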
  • Thus, the way we develop will hardly be changed. Nobody questions whether our neoliberal system of development, which relies heavily on consumption to drive economies, needs to be looked into. We assume that it is the right way to develop, and only tweak it for the amount of externalities produced. Whether or not we should be measuring development by the Gross Domestic Product (GDP) or if welfare is correlated to the amount of goods and services consumed is never considered. Even the UN-REDD (Reducing Emissions from Deforestation and Forest Degradation) scheme which aims to pay forest-rich countries for protecting their forests, ends up putting a price tag on them. The environment is being subsumed under the economy, when it should be that the economy is re-looked to take the environment into consideration.
  • when the world is celebrating after having held at bay the dangerous greenhouse gas, why would anyone bother rethinking the economy? Yet we should, simply because there are alternative nature-society relationships and discourses about nature that are of equal or greater importance than global warming. Annie Leonard’s informative videos on The Story of Stuff and specific products like electronics, bottled water and cosmetics shed light on the dangers of our ‘throw-away culture’ on the planet and poorer countries. What if the enemy was instead consumerism? Framing it that way would force countries (especially richer ones) to fundamentally question the nature of development, instead of just applying a quick technological fix. This is so much more difficult (and less economically viable), alongside other issues like environmental injustices – e.g. pollution or dumping of waste by Trans-National Corporations in poorer countries and removal of indigenous land rights. It is no wonder that we choose to disregard internal problems and focus instead on an external enemy; when CO2 is the culprit, the solution is too simple and detached from the communities that are affected by changes in their environment.
  • Hence we need to allow for a greater politics of the environment. What I am proposing is not to diminish our action to reduce carbon emissions, for I do believe that it is part of the environmental problem that we are facing. What instead should be done is to reduce our fixation on CO2 as the main or only driver of climate change, and on climate change as the most pertinent nature-society issue we are facing. We should understand that there are many other ways of thinking about the environment; ‘developing’ countries, for example, tend to have a closer relationship with their environment – it is not something ‘out there’ but constantly interacted with for food, water, regulating services and cultural value. Their views, and the impact of the socio-economic forces (often from TNCs and multi-lateral organizations like the IMF) that shape the environment, must also be taken into account, as must alternative meanings of sustainable development. Thus, even as we pat ourselves on the back for having achieved something significant at Cancun, our action should not and must not end there. Even if climate change hogs the headlines now, we must embrace more plurality in environmental discourse, for nature is not, and never was, as simple as climate change alone. And hopefully sometime in the future, alongside a multi-lateral conference on climate change, the world can have one which rethinks the meaning of development.
    Chen Jinwen
Weiye Loh

It's Only A Theory: From the 2010 APA in Boston: Neuropsychology and ethics - 0 views

  • Joshua Greene from Harvard is known for his research on "neuroethics," the neurological underpinnings of ethical decision making in humans. The title of Greene's talk was "Beyond point-and-shoot morality: why cognitive neuroscience matters for ethics."
  • What Greene is interested in is finding out what factors moral judgment is sensitive to, and whether it is sensitive to the relevant factors. He presented his dual-process theory of morality. In this respect, he proposed an analogy with a camera. Cameras have automatic (point-and-shoot) settings as well as manual controls. The first mode is good enough for most purposes; the second allows the user to fine-tune the settings more carefully. The two modes allow for a nice combination of efficiency and flexibility.
  • The idea is that the human brain also has two modes: a set of efficient automatic responses and a manual mode that makes us more flexible in response to non-standard situations. The non-moral example is our response to potential threats. Here the amygdala is very fast and efficient at focusing on potential threats (e.g., the outline of eyes in the dark), even when there actually is no threat (it's a controlled experiment in a lab, no lurking predator around).
  • Delayed gratification illustrates the interaction between the two modes. The brain is attracted by immediate rewards, no matter what kind. However, when larger rewards are eventually going to become available, other parts of the brain come into play to override (sometimes) the immediate urge.
  • Greene's research shows that our automatic setting is "Kantian," meaning that our intuitive responses are deontological, rule driven. The manual setting, on the other hand, tends to be more utilitarian / consequentialist. Accordingly, the first mode involves emotional areas of the brain, the second one involves more cognitive areas.
  • The evidence comes from the (in)famous trolley dilemma and its many variations.
  • when people refuse to intervene in the footbridge (as opposed to the lever) version of the dilemma, they do so because of a strong emotional response, which contradicts the otherwise utilitarian calculus they make when considering the lever version.
  • psychopaths turn out to be more utilitarian than normal subjects - presumably not because consequentialism is inherently pathological, but because their emotional responses are stunted. Mood also affects the results, with people exposed to comedy (to enhance mood), for instance, more likely to say that it is okay to push the guy off the footbridge.
  • In a more recent experiment, subjects were asked to say which action carried the better consequences, which made them feel worse, and which was overall morally acceptable. The idea was to separate the cognitive, emotional and integrative aspects of moral decision making. Predictably, activity in the amygdala correlated with deontological judgment, activity in more cognitive areas was associated with utilitarianism, and different brain regions became involved in integrating the two.
  • Another recent experiment used visual vs. verbal descriptions of moral dilemmas. Turns out that more visual people tend to behave emotionally / deontologically, while more verbal people are more utilitarian.
  • studies show that interfering with moral judgment by engaging subjects with a cognitive task slows down (though it does not reverse) utilitarian judgment, but has no effect on deontological judgment. Again, this agrees with the conclusion that utilitarian judgment is the result of cognition, deontological judgment of emotion.
  • Nice to know, by the way, that when experimenters controlled for "real world expectations" that people have about trolleys, or when they used more realistic scenarios than trolleys and bridges, the results don't vary. In other words, trolley thought experiments are actually informative, contrary to popular criticisms.
  • What factors affect people's decision making in moral judgment? The main one is proximity, with people feeling much stronger obligations if they are present to the event posing the dilemma, or even relatively near (a disaster happens in a nearby country), as opposed to when they are far (a country on the other side of the world).
  • Greene's general conclusion is that neuroscience matters to ethics because it reveals the hidden mechanisms of human moral decision making. However, he says this is interesting to philosophers because it may lead them to question ethical theories that are implicitly or explicitly based on such judgments. But neither philosophical deontology nor consequentialism is in fact based on common moral judgments, it seems to me. They are the result of explicit analysis. (Though Greene raises the possibility that some philosophers engage in rationalizing rather than reasoning, as in Kant's famously convoluted idea that masturbation is wrong because one is using oneself as a means to an end...)
  • this is not to say that understanding moral decision making in humans isn't interesting or in fact even helpful in real life cases. An example of the latter is the common moral condemnation of incest, which is an emotional reaction that probably evolved to avoid genetically diseased offspring. It follows that science can tell us that there is nothing morally wrong in cases of incest when precautions have been taken to avoid pregnancy (and assuming psychological reactions are also accounted for). Greene puts this in terms of science helping us to transform difficult ought questions into easier ought questions.
Weiye Loh

TPM: The Philosophers' Magazine | Is morality relative? Depends on your personality - 0 views

  • no real evidence is ever offered for the original assumption that ordinary moral thought and talk has this objective character. Instead, philosophers tend simply to assert that people’s ordinary practice is objectivist and then begin arguing from there.
  • If we really want to go after these issues in a rigorous way, it seems that we should adopt a different approach. The first step is to engage in systematic empirical research to figure out how the ordinary practice actually works. Then, once we have the relevant data in hand, we can begin looking more deeply into the philosophical implications – secure in the knowledge that we are not just engaging in a philosophical fiction but rather looking into the philosophical implications of people’s actual practices.
  • in the past few years, experimental philosophers have been gathering a wealth of new data on these issues, and we now have at least the first glimmerings of a real empirical research program here
  • when researchers took up these questions experimentally, they did not end up confirming the traditional view. They did not find that people overwhelmingly favoured objectivism. Instead, the results consistently point to a more complex picture. There seems to be a striking degree of conflict even in the intuitions of ordinary folks, with some people under some circumstances offering objectivist answers, while other people under other circumstances offer more relativist views. And that is not all. The experimental results seem to be giving us an ever deeper understanding of why it is that people are drawn in these different directions, what it is that makes some people move toward objectivism and others toward more relativist views.
  • consider a study by Adam Feltz and Edward Cokely. They were interested in the relationship between belief in moral relativism and the personality trait openness to experience. Accordingly, they conducted a study in which they measured both openness to experience and belief in moral relativism. To get at people’s degree of openness to experience, they used a standard measure designed by researchers in personality psychology. To get at people’s agreement with moral relativism, they told participants about two characters – John and Fred – who held opposite opinions about whether some given act was morally bad. Participants were then asked whether one of these two characters had to be wrong (the objectivist answer) or whether it could be that neither of them was wrong (the relativist answer). What they found was a quite surprising result. It just wasn’t the case that participants overwhelmingly favoured the objectivist answer. Instead, people’s answers were correlated with their personality traits. The higher a participant was in openness to experience, the more likely that participant was to give a relativist answer.
  • Geoffrey Goodwin and John Darley pursued a similar approach, this time looking at the relationship between people’s belief in moral relativism and their tendency to approach questions by considering a whole variety of possibilities. They proceeded by giving participants mathematical puzzles that could only be solved by looking at multiple different possibilities. Thus, participants who considered all these possibilities would tend to get these problems right, whereas those who failed to consider all the possibilities would tend to get the problems wrong. Now comes the surprising result: those participants who got these problems right were significantly more inclined to offer relativist answers than were those participants who got the problems wrong.
  • Shaun Nichols and Tricia Folds-Bennett looked at how people’s moral conceptions develop as they grow older. Research in developmental psychology has shown that as children grow up, they develop different understandings of the physical world, of numbers, of other people’s minds. So what about morality? Do people have a different understanding of morality when they are twenty years old than they do when they are only four years old? What the results revealed was a systematic developmental difference. Young children show a strong preference for objectivism, but as they grow older, they become more inclined to adopt relativist views. In other words, there appears to be a developmental shift toward increasing relativism as children mature. (In an exciting new twist on this approach, James Beebe and David Sackris have shown that this pattern eventually reverses, with middle-aged people showing less inclination toward relativism than college students do.)
  • People are more inclined to be relativists when they score highly in openness to experience, when they have an especially good ability to consider multiple possibilities, when they have matured past childhood (but not when they get to be middle-aged). Looking at these various effects, my collaborators and I thought that it might be possible to offer a single unifying account that explained them all. Specifically, our thought was that people might be drawn to relativism to the extent that they open their minds to alternative perspectives. There could be all sorts of different factors that lead people to open their minds in this way (personality traits, cognitive dispositions, age), but regardless of the instigating factor, researchers seemed always to be finding the same basic effect. The more people have a capacity to truly engage with other perspectives, the more they seem to turn toward moral relativism.
  • To really put this hypothesis to the test, Hagop Sarkissian, Jennifer Wright, John Park, David Tien and I teamed up to run a series of new studies. Our aim was to actually manipulate the degree to which people considered alternative perspectives. That is, we wanted to randomly assign people to different conditions in which they would end up thinking in different ways, so that we could then examine the impact of these different conditions on their intuitions about moral relativism.
  • The results of the study showed a systematic difference between conditions. In particular, as we moved toward more distant cultures, we found a steady shift toward more relativist answers – with people in the first condition tending to agree with the statement that at least one of them had to be wrong, people in the second being pretty evenly split between the two answers, and people in the third tending to reject the statement quite decisively.
  • If we learn that people’s ordinary practice is not an objectivist one – that it actually varies depending on the degree to which people take other perspectives into account – how can we then use this information to address the deeper philosophical issues about the true nature of morality? The answer here is in one way very complex and in another very simple. It is complex in that one can answer such questions only by making use of very sophisticated and subtle philosophical methods. Yet, at the same time, it is simple in that such methods have already been developed and are being continually refined and elaborated within the literature in analytic philosophy. The trick now is just to take these methods and apply them to working out the implications of an ordinary practice that actually exists.
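The recurring statistical move in the studies above — relating a personality or cognitive measure to the answers people give — is a simple correlation. As a minimal sketch with invented numbers (the openness scores and answers below are hypothetical, purely for illustration, and are not data from Feltz and Cokely's study):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Covariance term and the two standard-deviation terms
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical participants: an openness-to-experience score, and whether
# each gave the relativist answer (1) or the objectivist answer (0).
openness = [12, 18, 22, 27, 31, 35, 40, 44]
relativist = [0, 0, 0, 1, 0, 1, 1, 1]

print(round(pearson_r(openness, relativist), 2))  # → 0.76
```

A positive coefficient like this is the shape of the reported finding: the higher the openness score, the more likely the relativist answer. The real studies, of course, involve larger samples and significance testing.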
Weiye Loh

Religion: Faith in science : Nature News - 0 views

  • The Templeton Foundation claims to be a friend of science. So why does it make so many researchers uneasy?
  • With a current endowment estimated at US$2.1 billion, the organization continues to pursue Templeton's goal of building bridges between science and religion. Each year, it doles out some $70 million in grants, more than $40 million of which goes to research in fields such as cosmology, evolutionary biology and psychology.
  • however, many scientists find it troubling — and some see it as a threat. Jerry Coyne, an evolutionary biologist at the University of Chicago, Illinois, calls the foundation "sneakier than the creationists". Through its grants to researchers, Coyne alleges, the foundation is trying to insinuate religious values into science. "It claims to be on the side of science, but wants to make faith a virtue," he says.
  • But other researchers, both with and without Templeton grants, say that they find the foundation remarkably open and non-dogmatic. "The Templeton Foundation has never in my experience pressured, suggested or hinted at any kind of ideological slant," says Michael Shermer, editor of Skeptic, a magazine that debunks pseudoscience, who was hired by the foundation to edit an essay series entitled 'Does science make belief in God obsolete?'
  • The debate highlights some of the challenges facing the Templeton Foundation after the death of its founder in July 2008, at the age of 95.
  • With the help of a $528-million bequest from Templeton, the foundation has been radically reframing its research programme. As part of that effort, it is reducing its emphasis on religion to make its programmes more palatable to the broader scientific community. Like many of his generation, Templeton was a great believer in progress, learning, initiative and the power of human imagination — not to mention the free-enterprise system that allowed him, a middle-class boy from Winchester, Tennessee, to earn billions of dollars on Wall Street. The foundation accordingly allocates 40% of its annual grants to programmes with names such as 'character development', 'freedom and free enterprise' and 'exceptional cognitive talent and genius'.
  • Unlike most of his peers, however, Templeton thought that the principles of progress should also apply to religion. He described himself as "an enthusiastic Christian" — but was also open to learning from Hinduism, Islam and other religious traditions. Why, he wondered, couldn't religious ideas be open to the type of constructive competition that had produced so many advances in science and the free market?
  • That question sparked Templeton's mission to make religion "just as progressive as medicine or astronomy".
  • Early Templeton prizes had nothing to do with science: the first went to the Catholic missionary Mother Teresa of Calcutta in 1973.
  • By the 1980s, however, Templeton had begun to realize that fields such as neuroscience, psychology and physics could advance understanding of topics that are usually considered spiritual matters — among them forgiveness, morality and even the nature of reality. So he started to appoint scientists to the prize panel, and in 1985 the award went to a research scientist for the first time: Alister Hardy, a marine biologist who also investigated religious experience. Since then, scientists have won with increasing frequency.
  • "There's a distinct feeling in the research community that Templeton just gives the award to the most senior scientist they can find who's willing to say something nice about religion," says Harold Kroto, a chemist at Florida State University in Tallahassee, who was co-recipient of the 1996 Nobel Prize in Chemistry and describes himself as a devout atheist.
  • Yet Templeton saw scientists as allies. They had what he called "the humble approach" to knowledge, as opposed to the dogmatic approach. "Almost every scientist will agree that they know so little and they need to learn," he once said.
  • Templeton wasn't interested in funding mainstream research, says Barnaby Marsh, the foundation's executive vice-president. Templeton wanted to explore areas — such as kindness and hatred — that were not well known and did not attract major funding agencies. Marsh says Templeton wondered, "Why is it that some conflicts go on for centuries, yet some groups are able to move on?"
  • Templeton's interests gave the resulting list of grants a certain New Age quality (See Table 1). For example, in 1999 the foundation gave $4.6 million for forgiveness research at the Virginia Commonwealth University in Richmond, and in 2001 it donated $8.2 million to create an Institute for Research on Unlimited Love (that is, altruism and compassion) at Case Western Reserve University in Cleveland, Ohio. "A lot of money wasted on nonsensical ideas," says Kroto. Worse, says Coyne, these projects are profoundly corrupting to science, because the money tempts researchers into wasting time and effort on topics that aren't worth it. If someone is willing to sell out for a million dollars, he says, "Templeton is there to oblige him".
  • At the same time, says Marsh, the 'dean of value investing', as Templeton was known on Wall Street, had no intention of wasting his money on junk science or unanswerables such as whether God exists. So before pursuing a scientific topic he would ask his staff to get an assessment from appropriate scholars — a practice that soon evolved into a peer-review process drawing on experts from across the scientific community.
  • Because Templeton didn't like bureaucracy, adds Marsh, the foundation outsourced much of its peer review and grant giving. In 1996, for example, it gave $5.3 million to the American Association for the Advancement of Science (AAAS) in Washington DC, to fund efforts that work with evangelical groups to find common ground on issues such as the environment, and to get more science into seminary curricula. In 2006, Templeton gave $8.8 million towards the creation of the Foundational Questions Institute (FQXi), which funds research on the origins of the Universe and other fundamental issues in physics, under the leadership of Anthony Aguirre, an astrophysicist at the University of California, Santa Cruz, and Max Tegmark, a cosmologist at the Massachusetts Institute of Technology in Cambridge.
  • But external peer review hasn't always kept the foundation out of trouble. In the 1990s, for example, Templeton-funded organizations gave book-writing grants to Guillermo Gonzalez, an astrophysicist now at Grove City College in Pennsylvania, and William Dembski, a philosopher now at the Southwestern Baptist Theological Seminary in Fort Worth, Texas. After obtaining the grants, both later joined the Discovery Institute — a think-tank based in Seattle, Washington, that promotes intelligent design. Other Templeton grants supported a number of college courses in which intelligent design was discussed. Then, in 1999, the foundation funded a conference at Concordia University in Mequon, Wisconsin, in which intelligent-design proponents confronted critics. Those awards became a major embarrassment in late 2005, during a highly publicized court fight over the teaching of intelligent design in schools in Dover, Pennsylvania. A number of media accounts of the intelligent design movement described the Templeton Foundation as a major supporter — a charge that Charles Harper, then senior vice-president, was at pains to deny.
  • Some foundation officials were initially intrigued by intelligent design, Harper told The New York Times. But disillusionment set in — and Templeton funding stopped — when it became clear that the theory was part of a political movement from the Christian right wing, not science. Today, the foundation website explicitly warns intelligent-design researchers not to bother submitting proposals: they will not be considered.
  • Avowedly antireligious scientists such as Coyne and Kroto see the intelligent-design imbroglio as a symptom of their fundamental complaint that religion and science should not mix at all. "Religion is based on dogma and belief, whereas science is based on doubt and questioning," says Coyne, echoing an argument made by many others. "In religion, faith is a virtue. In science, faith is a vice." The purpose of the Templeton Foundation is to break down that wall, he says — to reconcile the irreconcilable and give religion scholarly legitimacy.
  • Foundation officials insist that this is backwards: questioning is their reason for being. Religious dogma is what they are fighting. That does seem to be the experience of many scientists who have taken Templeton money. During the launch of FQXi, says Aguirre, "Max and I were very suspicious at first. So we said, 'We'll try this out, and the minute something smells, we'll cut and run.' It never happened. The grants we've given have not been connected with religion in any way, and they seem perfectly happy about that."
  • John Cacioppo, a psychologist at the University of Chicago, also had concerns when he started a Templeton-funded project in 2007. He had just published a paper with survey data showing that religious affiliation had a negative correlation with health among African-Americans — the opposite of what he assumed the foundation wanted to hear. He was bracing for a protest when someone told him to look at the foundation's website. They had displayed his finding on the front page. "That made me relax a bit," says Cacioppo.
  • Yet, even scientists who give the foundation high marks for openness often find it hard to shake their unease. Sean Carroll, a physicist at the California Institute of Technology in Pasadena, is willing to participate in Templeton-funded events — but worries about the foundation's emphasis on research into 'spiritual' matters. "The act of doing science means that you accept a purely material explanation of the Universe, that no spiritual dimension is required," he says.
  • It hasn't helped that Jack Templeton is much more politically and religiously conservative than his father was. The foundation shows no obvious rightwards trend in its grant-giving and other activities since John Templeton's death — and it is barred from supporting political activities by its legal status as a not-for-profit corporation. Still, many scientists find it hard to trust an organization whose president has used his personal fortune to support right-leaning candidates and causes such as the 2008 ballot initiative that outlawed gay marriage in California.
  • Scientists' discomfort with the foundation is probably inevitable in the current political climate, says Scott Atran, an anthropologist at the University of Michigan in Ann Arbor. The past 30 years have seen the growing power of the Christian religious right in the United States, the rise of radical Islam around the world, and religiously motivated terrorist attacks such as those in the United States on 11 September 2001. Given all that, says Atran, many scientists find it almost impossible to think of religion as anything but fundamentalism at war with reason.
  • the foundation has embraced the theme of 'science and the big questions' — an open-ended list that includes topics such as 'Does the Universe have a purpose?'
  • Towards the end of Templeton's life, says Marsh, he became increasingly concerned that this reaction was getting in the way of the foundation's mission: that the word 'religion' was alienating too many good scientists.
  • The peer-review and grant-making system has also been revamped: whereas in the past the foundation ran an informal mix of projects generated by Templeton and outside grant seekers, the system is now organized around an annual list of explicit funding priorities.
  • The foundation is still a work in progress, says Jack Templeton — and it always will be. "My father believed," he says, "we were all called to be part of an ongoing creative process. He was always trying to make people think differently." "And he always said, 'If you're still doing today what you tried to do two years ago, then you're not making progress.'" 
Weiye Loh

The world through language » Scienceline - 0 views

  • If you know only one language, you live only once. A man who knows two languages is worth two men. He who loses his language loses his world. (Czech, French and Gaelic proverbs.)
  • The hypothesis first put forward fifty years ago by linguist Benjamin Lee Whorf—that our language significantly affects our experience of the world—is making a comeback in various forms, and with it no shortage of debate.
  • The idea that language shapes thought was taboo for a long time, said Dan Slobin, a psycholinguist at the University of California, Berkeley. “Now the ice is breaking.” The taboo, according to Slobin, was largely due to the widespread acceptance of the ideas of Noam Chomsky, one of the most influential linguists of the 20th century. Chomsky proposed that the human brain comes equipped at birth with a set of rules—or universal grammar—that organizes language. As he likes to say, a visiting Martian would conclude that everyone on Earth speaks mutually unintelligible dialects of a single language.
  • Chomsky is hesitant to accept the recent claims of language’s profound influence on thought. “I’m rather skeptical about all of this, though there probably are some marginal effects,” he said.
  • Some advocates of the Whorfian view find support in studies of how languages convey spatial orientation. English and Dutch speakers describe orientation from an egocentric frame of reference (to my left or right). Mayan speakers use a geocentric frame of reference (to the north or south).
  • Does this mean they think about space in fundamentally different ways? Not exactly, said Lila Gleitman, a psychologist from the University of Pennsylvania. Since we ordinarily assume that others talk like us, she explained, vague instructions like “arrange it the same way” will be interpreted in whatever orientation (egocentric or geocentric) is most common in our language. “That’s going to influence how you solve an ambiguous problem, but it doesn’t mean that’s the way you think, or must think,” said Gleitman. In fact, she repeated the experiment with unambiguous instructions, providing cues to indicate whether objects should be arranged north-south or left-right. She found that people in both languages are just as good at arranging objects in either orientation.
  • Similarly, Anna Papafragou, a psychologist at the University of Delaware, thinks that the extent of language’s effect on thought has been somewhat exaggerated.
  • Papafragou compared how long Greek and English speakers paid attention to clip-art animation sequences, for example, a man skating towards a snowman. By measuring their eye movements, Papafragou was able to tell which parts of the scene held their gaze the longest. Because English speakers generally use verbs that describe manner of motion, like slide and skip, she predicted they would pay more attention to what was moving (the skates). Since Greeks use verbs that describe path, like approach and ascend, they should pay more attention to endpoint of the motion (the snowman). She found that this was true only when people had to describe the scene; when asked to memorize it, attention patterns were nearly identical. According to Papafragou, when people need to speak about what they see, they’ll focus on the parts relevant for planning sentences. Otherwise, language does not show much of an effect on attention.
  • “Each language is a bright transparent medium through which our thoughts may pass, relatively undistorted,” said Gleitman.
  • Others think that language does, in fact, introduce some distortion. Linguist Guy Deutscher of the University of Manchester in the U.K. suggests that while language can’t prevent you from thinking anything, it does compel you to think in specific ways. Language forces you to habitually pay attention to different aspects of the world.
  • For example, many languages assign genders to nouns (“bridge” is feminine in German and masculine in Spanish). A study by cognitive psychologist Lera Boroditsky of Stanford University found that German speakers were more likely to describe “bridge” with feminine terms like elegant and slender, while Spanish speakers picked words like sturdy and towering. Having to constantly keep track of gender, Deutscher suggests, may subtly change the way native speakers imagine an object's characteristics.
  • However, this falls short of the extreme view some ascribe to Whorf: that language actually determines thought. According to Steven Pinker, an experimental psychologist and linguist at Harvard University, three things have to hold for the Whorfian hypothesis to be true: speakers of one language should find it nearly impossible to think like speakers of another language; the differences in language should affect actual reasoning; and the differences should be caused by language, not just correlated with it. Otherwise, we may just be dealing with a case of “crying Whorf.”
  • But even mild claims may reveal complexities in the relationship between language and thought. “You can’t actually separate language, thought and perception,” said Debi Roberson, a psychologist at the University of Essex in the U.K. “All of these processes are going on, not just in parallel, but interactively.”
  • Language may not, as the Gaelic proverb suggests, form our entire world. But it will continue to provide insights into our thoughts—whether as a window, a looking glass, or a distorted mirror.
Weiye Loh

Rationally Speaking: Studying folk morality: philosophy, psychology, or what? - 0 views

  • in the magazine article Joshua mentions several studies of “folk morality,” i.e. of how ordinary people think about moral problems. The results are fascinating. It turns out that people’s views are correlated with personality traits, with subjects who score high on “openness to experience” being reliably more relativist than objectivist about morality (I am not using the latter term in the infamous Randian meaning here, but as Knobe does, to indicate the idea that morality has objective bases).
  • Other studies show that people who are capable of considering multiple options in solving mathematical puzzles also tend to be moral relativists, and — in a study co-authored by Knobe himself — the very same situation (infanticide) was judged along a sliding scale from objectivism to relativism depending on whether the hypothetical scenario involved a fellow American (presumably sharing our same general moral values), a member of an imaginary Amazonian tribe (for which infanticide was acceptable), or an alien from the planet Pentar (belonging to a race whose only goal in life is to turn everything into equilateral pentagons, and killing individuals that might get in the way of that lofty objective is a duty). Oh, and related research also shows that young children tend to be objectivists, while young adults are usually relativists — but that later in life one’s primordial objectivism apparently experiences a comeback.
  • This is all very interesting social science, but is it philosophy? Granted, the differences between various disciplines are often not clear cut, and of course whenever people engage in truly inter-disciplinary work we should simply applaud the effort and encourage further work. But I do wonder in what sense, if any, the kinds of results that Joshua and his colleagues find have much to do with moral philosophy.
  • there seems to me the potential danger of confusing various categories of moral discourse. For instance, are the “folks” studied in these cases actually relativist, or perhaps adherents to one of several versions of moral anti-realism? The two are definitely not the same, but I doubt that the subjects in question could tell the difference (and I wouldn’t expect them to, after all they are not philosophers).
  • why do we expect philosophers to learn from “folk morality” when we do not expect, say, physicists to learn from folk physics (which tends to be Aristotelian in nature), or statisticians from people’s understanding of probability theory (which is generally remarkably poor, as casino owners know very well)? Or even, while I’m at it, why not ask literary critics to discuss Shakespeare in light of what common folks think about the bard (making sure, perhaps, that they have at least read his works, and not just watched the movies)?
  • Hence my other examples of statistics (i.e., math) and literary criticism. I conceive of philosophy in general, and moral philosophy in particular, as more akin to a (science-informed, to be sure) mix between logic and criticism. Some moral philosophy consists in engaging an “if ... then” sort of scenario, akin to logical-mathematical thinking, where one begins with certain axioms and attempts to derive the consequences of such axioms. In other respects, moral philosophers exercise reflective criticism concerning those consequences as they might be relevant to practical problems.
  • For instance, we may write philosophically about abortion, and begin our discussion from a comparison of different conceptions of “person.” We might conclude that “if” one adopts conception X of what a person is, “then” abortion is justifiable under such and such conditions; while “if” one adopts conception Y of a person, “then” abortion is justifiable under a different set of conditions, or not justifiable at all. We could, of course, back up even further and engage in a discussion of what “personhood” is, thus moving from moral philosophy to metaphysics.
  • Nowhere in the above are we going to ask “folks” what they think a person is, or how they think their implicit conception of personhood informs their views on abortion. Of course people’s actual views on abortion are crucial — especially for public policy — and they are intrinsically interesting to social scientists. But they don’t seem to me to make much more contact with philosophy than the above mentioned popular opinions on Shakespeare make contact with serious literary criticism. And please, let’s not play the cheap card of “elitism,” unless we are willing to apply the label to just about any intellectual endeavor, in any discipline.
  • There is one area in which experimental philosophy can potentially contribute to philosophy proper (as opposed to social science). Once we have a more empirically grounded understanding of what people’s moral reasoning actually is, then we can analyze the likely consequences of that reasoning for a variety of societal issues. But now we would be doing something more akin to political than moral philosophy.
  •  
    My colleague Joshua Knobe at Yale University recently published an intriguing article in The Philosopher's Magazine about the experimental philosophy of moral decision making. Joshua and I have had a nice chat during a recent Rationally Speaking podcast dedicated to experimental philosophy, but I'm still not convinced about the whole enterprise.
Weiye Loh

Science, Strong Inference -- Proper Scientific Method - 0 views

  • Scientists these days tend to keep up a polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist's field and methods of study are as good as every other scientist's and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants.
  • Why should there be such rapid advances in some fields and not in others? I think the usual explanations that we tend to think of - such as the tractability of the subject, or the quality or education of the men drawn into it, or the size of research contracts - are important but inadequate. I have begun to believe that the primary factor in scientific advance is an intellectual one. These rapidly moving fields are fields where a particular method of doing scientific research is systematically used and taught, an accumulative method of inductive inference that is so effective that I think it should be given the name of "strong inference." I believe it is important to examine this method, its use and history and rationale, and to see whether other groups and individuals might learn to adopt it profitably in their own scientific and intellectual work. In its separate elements, strong inference is just the simple and old-fashioned method of inductive inference that goes back to Francis Bacon. The steps are familiar to every college student and are practiced, off and on, by every scientist. The difference comes in their systematic application. Strong inference consists of applying the following steps to every problem in science, formally and explicitly and regularly: (1) devising alternative hypotheses; (2) devising a crucial experiment (or several of them), with alternative possible outcomes, each of which will, as nearly as possible, exclude one or more of the hypotheses; (3) carrying out the experiment so as to get a clean result; (4) recycling the procedure, making subhypotheses or sequential hypotheses to refine the possibilities that remain; and so on.
  • On any new problem, of course, inductive inference is not as simple and certain as deduction, because it involves reaching out into the unknown. Steps 1 and 2 require intellectual inventions, which must be cleverly chosen so that hypothesis, experiment, outcome, and exclusion will be related in a rigorous syllogism; and the question of how to generate such inventions is one which has been extensively discussed elsewhere (2, 3). What the formal schema reminds us to do is to try to make these inventions, to take the next step, to proceed to the next fork, without dawdling or getting tied up in irrelevancies.
  • It is clear why this makes for rapid and powerful progress. For exploring the unknown, there is no faster method; this is the minimum sequence of steps. Any conclusion that is not an exclusion is insecure and must be rechecked. Any delay in recycling to the next set of hypotheses is only a delay. Strong inference, and the logical tree it generates, are to inductive reasoning what the syllogism is to deductive reasoning in that it offers a regular method for reaching firm inductive conclusions one after the other as rapidly as possible.
  • "But what is so novel about this?" someone will say. This is the method of science and always has been, why give it a special name? The reason is that many of us have almost forgotten it. Science is now an everyday business. Equipment, calculations, lectures become ends in themselves. How many of us write down our alternatives and crucial experiments every day, focusing on the exclusion of a hypothesis? We may write our scientific papers so that it looks as if we had steps 1, 2, and 3 in mind all along. But in between, we do busywork. We become "method- oriented" rather than "problem-oriented." We say we prefer to "feel our way" toward generalizations. We fail to teach our students how to sharpen up their inductive inferences. And we do not realize the added power that the regular and explicit use of alternative hypothesis and sharp exclusion could give us at every step of our research.
  • A distinguished cell biologist rose and said, "No two cells give the same properties. Biology is the science of heterogeneous systems." And he added privately, "You know there are scientists, and there are people in science who are just working with these over-simplified model systems - DNA chains and in vitro systems - who are not doing science at all. We need their auxiliary work: they build apparatus, they make minor studies, but they are not scientists." To which Cy Levinthal replied: "Well, there are two kinds of biologists, those who are looking to see if there is one thing that can be understood and those who keep saying it is very complicated and that nothing can be understood. . . . You must study the simplest system you think has the properties you are interested in."
  • At the 1958 Conference on Biophysics, at Boulder, there was a dramatic confrontation between the two points of view. Leo Szilard said: "The problems of how enzymes are induced, of how proteins are synthesized, of how antibodies are formed, are closer to solution than is generally believed. If you do stupid experiments, and finish one a year, it can take 50 years. But if you stop doing experiments for a little while and think how proteins can possibly be synthesized, there are only about 5 different ways, not 50! And it will take only a few experiments to distinguish these." One of the young men added: "It is essentially the old question: How small and elegant an experiment can you perform?" These comments upset a number of those present. An electron microscopist said, "Gentlemen, this is off the track. This is philosophy of science." Szilard retorted, "I was not quarreling with third-rate scientists: I was quarreling with first-rate scientists."
  • Any criticism or challenge to consider changing our methods strikes of course at all our ego-defenses. But in this case the analytical method offers the possibility of such great increases in effectiveness that it is unfortunate that it cannot be regarded more often as a challenge to learning rather than as a challenge to combat. Many of the recent triumphs in molecular biology have in fact been achieved on just such "oversimplified model systems," very much along the analytical lines laid down in the 1958 discussion. They have not fallen to the kind of men who justify themselves by saying "No two cells are alike," regardless of how true that may ultimately be. The triumphs are in fact triumphs of a new way of thinking.
  • The emphasis on strong inference is also partly due to the nature of the fields themselves. Biology, with its vast informational detail and complexity, is a "high-information" field, where years and decades can easily be wasted on the usual type of "low-information" observations or experiments if one does not think carefully in advance about what the most important and conclusive experiments would be. And in high-energy physics, both the "information flux" of particles from the new accelerators and the million-dollar costs of operation have forced a similar analytical approach. It pays to have a top-notch group debate every experiment ahead of time; and the habit spreads throughout the field.
  • Historically, I think, there have been two main contributions to the development of a satisfactory strong-inference method. The first is that of Francis Bacon (13). He wanted a "surer method" of "finding out nature" than either the logic-chopping or all-inclusive theories of the time or the laudable but crude attempts to make inductions "by simple enumeration." He did not merely urge experiments as some suppose, he showed the fruitfulness of interconnecting theory and experiment so that the one checked the other. Of the many inductive procedures he suggested, the most important, I think, was the conditional inductive tree, which proceeded from alternative hypotheses (possible "causes," as he calls them), through crucial experiments ("Instances of the Fingerpost"), to exclusion of some alternatives and adoption of what is left ("establishing axioms"). His Instances of the Fingerpost are explicitly at the forks in the logical tree, the term being borrowed "from the fingerposts which are set up where roads part, to indicate the several directions."
  • Here was a method that could separate off the empty theories! Bacon said the inductive method could be learned by anybody, just like learning to "draw a straighter line or more perfect circle . . . with the help of a ruler or a pair of compasses." "My way of discovering sciences goes far to level men's wit and leaves but little to individual excellence, because it performs everything by the surest rules and demonstrations." Even occasional mistakes would not be fatal. "Truth will sooner come out from error than from confusion."
  • Nevertheless there is a difficulty with this method. As Bacon emphasizes, it is necessary to make "exclusions." He says, "The induction which is to be available for the discovery and demonstration of sciences and arts, must analyze nature by proper rejections and exclusions, and then, after a sufficient number of negatives, come to a conclusion on the affirmative instances." "[To man] it is granted only to proceed at first by negatives, and at last to end in affirmatives after exclusion has been exhausted." Or, as the philosopher Karl Popper says today, there is no such thing as proof in science - because some later alternative explanation may be as good or better - so that science advances only by disproofs. There is no point in making hypotheses that are not falsifiable, because such hypotheses do not say anything; "it must be possible for an empirical scientific system to be refuted by experience" (14).
  • The difficulty is that disproof is a hard doctrine. If you have a hypothesis and I have another hypothesis, evidently one of them must be eliminated. The scientist seems to have no choice but to be either soft-headed or disputatious. Perhaps this is why so many tend to resist the strong analytical approach and why some great scientists are so disputatious.
  • Fortunately, it seems to me, this difficulty can be removed by the use of a second great intellectual invention, the "method of multiple hypotheses," which is what was needed to round out the Baconian scheme. This is a method that was put forward by T.C. Chamberlin (15), a geologist at Chicago at the turn of the century, who is best known for his contribution to the Chamberlin-Moulton hypothesis of the origin of the solar system.
  • Chamberlin says our trouble is that when we make a single hypothesis, we become attached to it. "The moment one has offered an original explanation for a phenomenon which seems satisfactory, that moment affection for his intellectual child springs into existence, and as the explanation grows into a definite theory his parental affections cluster about his offspring and it grows more and more dear to him. . . . There springs up also unwittingly a pressing of the theory to make it fit the facts and a pressing of the facts to make them fit the theory..." "To avoid this grave danger, the method of multiple working hypotheses is urged. It differs from the simple working hypothesis in that it distributes the effort and divides the affections. . . . Each hypothesis suggests its own criteria, its own method of proof, its own method of developing the truth, and if a group of hypotheses encompass the subject on all sides, the total outcome of means and of methods is full and rich."
  • The conflict and exclusion of alternatives that is necessary to sharp inductive inference has been all too often a conflict between men, each with his single Ruling Theory. But whenever each man begins to have multiple working hypotheses, it becomes purely a conflict between ideas. It becomes much easier then for each of us to aim every day at conclusive disproofs - at strong inference - without either reluctance or combativeness. In fact, when there are multiple hypotheses, which are not anyone's "personal property," and when there are crucial experiments to test them, the daily life in the laboratory takes on an interest and excitement it never had, and the students can hardly wait to get to work to see how the detective story will come out. It seems to me that this is the reason for the development of those distinctive habits of mind and the "complex thought" that Chamberlin described, the reason for the sharpness, the excitement, the zeal, the teamwork - yes, even international teamwork - in molecular biology and high- energy physics today. What else could be so effective?
  • Unfortunately, I think, there are other areas of science today that are sick by comparison, because they have forgotten the necessity for alternative hypotheses and disproof. Each man has only one branch - or none - on the logical tree, and it twists at random without ever coming to the need for a crucial decision at any point. We can see from the external symptoms that there is something scientifically wrong. The Frozen Method, The Eternal Surveyor, The Never Finished, The Great Man With a Single Hypothesis, The Little Club of Dependents, The Vendetta, The All-Encompassing Theory Which Can Never Be Falsified.
  • a "theory" of this sort is not a theory at all, because it does not exclude anything. It predicts everything, and therefore does not predict anything. It becomes simply a verbal formula which the graduate student repeats and believes because the professor has said it so often. This is not science, but faith; not theory, but theology. Whether it is hand-waving or number-waving, or equation-waving, a theory is not a theory unless it can be disproved. That is, unless it can be falsified by some possible experimental outcome.
  • the work methods of a number of scientists have been testimony to the power of strong inference. Is success not due in many cases to systematic use of Bacon's "surest rules and demonstrations" as much as to rare and unattainable intellectual power? Faraday's famous diary (16), or Fermi's notebooks (3, 17), show how these men believed in the effectiveness of daily steps in applying formal inductive methods to one problem after another.
  • Surveys, taxonomy, design of equipment, systematic measurements and tables, theoretical computations - all have their proper and honored place, provided they are parts of a chain of precise induction of how nature works. Unfortunately, all too often they become ends in themselves, mere time-serving from the point of view of real scientific advance, a hypertrophied methodology that justifies itself as a lore of respectability.
  • We speak piously of taking measurements and making small studies that will "add another brick to the temple of science." Most such bricks just lie around the brickyard (20). Tables of constants have their place and value, but the study of one spectrum after another, if not frequently re-evaluated, may become a substitute for thinking, a sad waste of intelligence in a research laboratory, and a mistraining whose crippling effects may last a lifetime.
  • Beware of the man of one method or one instrument, either experimental or theoretical. He tends to become method-oriented rather than problem-oriented. The method-oriented man is shackled; the problem-oriented man is at least reaching freely toward what is most important. Strong inference redirects a man to problem-orientation, but it requires him to be willing repeatedly to put aside his last methods and teach himself new ones.
  • anyone who asks the question about scientific effectiveness will also conclude that much of the mathematizing in physics and chemistry today is irrelevant if not misleading. The great value of mathematical formulation is that when an experiment agrees with a calculation to five decimal places, a great many alternative hypotheses are pretty well excluded (though the Bohr theory and the Schrödinger theory both predict exactly the same Rydberg constant!). But when the fit is only to two decimal places, or one, it may be a trap for the unwary; it may be no better than any rule-of-thumb extrapolation, and some other kind of qualitative exclusion might be more rigorous for testing the assumptions and more important to scientific understanding than the quantitative fit.
  • Today we preach that science is not science unless it is quantitative. We substitute correlations for causal studies, and physical equations for organic reasoning. Measurements and equations are supposed to sharpen thinking, but, in my observation, they more often tend to make the thinking noncausal and fuzzy. They tend to become the object of scientific manipulation instead of auxiliary tests of crucial inferences.
  • Many - perhaps most - of the great issues of science are qualitative, not quantitative, even in physics and chemistry. Equations and measurements are useful when and only when they are related to proof; but proof or disproof comes first and is in fact strongest when it is absolutely convincing without any quantitative measurement.
  • you can catch phenomena in a logical box or in a mathematical box. The logical box is coarse but strong. The mathematical box is fine-grained but flimsy. The mathematical box is a beautiful way of wrapping up a problem, but it will not hold the phenomena unless they have been caught in a logical box to begin with.
  • Of course it is easy - and all too common - for one scientist to call the others unscientific. My point is not that my particular conclusions here are necessarily correct, but that we have long needed some absolute standard of possible scientific effectiveness by which to measure how well we are succeeding in various areas - a standard that many could agree on and one that would be undistorted by the scientific pressures and fashions of the times and the vested interests and busywork that they develop. It is not public evaluation I am interested in so much as a private measure by which to compare one's own scientific performance with what it might be. I believe that strong inference provides this kind of standard of what the maximum possible scientific effectiveness could be - as well as a recipe for reaching it.
  • The strong-inference point of view is so resolutely critical of methods of work and values in science that any attempt to compare specific cases is likely to sound both smug and destructive. Mainly one should try to teach it by example and by exhorting to self-analysis and self-improvement only in general terms.
  • one severe but useful private test - a touchstone of strong inference - that removes the necessity for third-person criticism, because it is a test that anyone can learn to carry with him for use as needed. It is our old friend the Baconian "exclusion," but I call it "The Question." Obviously it should be applied as much to one's own thinking as to others'. It consists of asking in your own mind, on hearing any scientific explanation or theory put forward, "But sir, what experiment could disprove your hypothesis?"; or, on hearing a scientific experiment described, "But sir, what hypothesis does your experiment disprove?"
  • It is not true that all science is equal; or that we cannot justly compare the effectiveness of scientists by any method other than a mutual-recommendation system. The man to watch, the man to put your money on, is not the man who wants to make "a survey" or a "more detailed study" but the man with the notebook, the man with the alternative hypotheses and the crucial experiments, the man who knows how to answer your Question of disproof and is already working on it.
  •  
    There is so much bad science and bad statistics in media reports, publications, and everyday conversation that I think it is important to understand facts and proofs and the associated pitfalls.
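The four-step cycle Platt describes — alternative hypotheses, crucial experiments, exclusion, recycling — can be sketched as a loop. This is only an illustrative sketch; the hypothesis names, predicted outcomes, and the `strong_inference` function are invented for the example, not anything from the essay:

```python
# A minimal sketch of the strong-inference cycle: keep several working
# hypotheses, and after each crucial experiment exclude the ones whose
# predictions the outcome contradicts. All names and data are invented.

def strong_inference(hypotheses, outcomes):
    """Apply observed outcomes in turn; retain only consistent hypotheses."""
    for outcome in outcomes:             # step 3: one clean result at a time
        if len(hypotheses) <= 1:
            break                        # nothing left to exclude
        # Exclusion: a hypothesis survives only if it predicted this outcome
        # (step 2's "crucial" design is what makes outcomes decisive).
        hypotheses = [h for h in hypotheses if outcome in h["predicts"]]
    return hypotheses

# Step 1: devise alternative hypotheses (toy stand-ins).
candidates = [
    {"name": "H1", "predicts": {"A", "B"}},
    {"name": "H2", "predicts": {"A", "C"}},
    {"name": "H3", "predicts": {"D"}},
]

# Two crucial experiments yield outcomes "A" then "B"; only H1 survives.
survivors = strong_inference(candidates, ["A", "B"])
print([h["name"] for h in survivors])  # prints ['H1']
```

Step 4, recycling, would then split the surviving hypothesis into subhypotheses and repeat the loop.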
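Platt's warning above about substituting correlations for causal studies can be made concrete: two quantities that merely share a time trend correlate almost perfectly with no causal link between them. A minimal sketch with fabricated data (the Pearson formula is standard; the two series are invented for illustration):

```python
# Two independent series that both grow linearly with time correlate
# perfectly, despite having no causal connection. Data are made up.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

years = range(10)
ice_cream_sales = [100 + 10 * t for t in years]  # grows with time
drownings = [20 + 2 * t for t in years]          # also grows with time

r = pearson_r(ice_cream_sales, drownings)
print(round(r, 3))  # prints 1.0 - perfectly correlated, causally unrelated
```

The correlation here reflects only the shared trend; a causal study would have to control for time (and season) before claiming any connection.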
Weiye Loh

Don't Miss this Video: "A link between climate change and Joplin tornadoes? N... - 0 views

  •  
    The video takes Bill Mckibben's recent editorial from the Washington Post, sets it to music and powerful video of the last year's weather events. If you haven't seen the editorial, give that a look first. It starts out - Caution: It is vitally important not to make connections. When you see pictures of rubble like this week's shots from Joplin, Mo., you should not wonder: Is this somehow related to the tornado outbreak three weeks ago in Tuscaloosa, Ala., or the enormous outbreak a couple of weeks before that (which, together, comprised the most active April for tornadoes in U.S. history). No, that doesn't mean a thing. It is far better to think of these as isolated, unpredictable, discrete events. It is not advisable to try to connect them in your mind with, say, the fires burning across Texas - fires that have burned more of America at this point this year than any wildfires have in previous years. Texas, and adjoining parts of Oklahoma and New Mexico, are drier than they've ever been - the drought is worse than that of the Dust Bowl. But do not wonder if they're somehow connected.