
New Media Ethics 2009 course / Group items tagged Nurture


Inosha Wickrama

ethical porn? - 50 views

I've seen that video recently. Anyway, some points I need to make: 1. Different countries have different ages of consent. Does that mean children mature faster in some countries and not in other...

pornography

Weiye Loh

Rethinking the gene » Scienceline - 0 views

  • Currently, the public views genes primarily as self-contained packets of information that come from parents and are distinct from the environment. “The popular notion of the gene is an attractive idea—it’s so magical,” said Mark Blumberg, a developmental biologist at the University of Iowa in Iowa City. But it ignores the growing scientific understanding of how genes and local environments interplay, he said.
  • With the rise of molecular biology in the 1930s and genomics (the study of entire genomes) in the 1970s, scientists have developed a much more dynamic and complex picture of this interplay. The simplistic notion of the gene has been replaced with gene-environment interactions and developmental influences—nature and nurture as completely intertwined.
  • But the public hasn’t quite kept up. There remains a “huge chasm” between the way scientists understand genetics and the way the public understands it, said David Shenk, an author who has written extensively on genetics and intelligence.
  • the public still thinks of genes as blueprints, providing precise instructions for each individual.
  • “The elegant simplicity of the idea is so powerful,” said Shenk. Unfortunately, it is also false. The blueprint metaphor is fundamentally deceptive, he said, and “leads people to believe that any difference they see can be tied back to specific genes.”
  • Instead, Shenk advocates the metaphor of a giant mixing board, in which genes are a multitude of knobs and switches that get turned on and off depending on various factors in their environment. Interaction is key, though it goes against how most people see genetics: the classic, but inaccurate, dichotomies of nature versus nurture, innate versus acquired and genes versus environment.
  • Belief in those dichotomies is hard to eliminate because people tend to understand genetics through the two separate “tracks” of genes and the environment, according to speech communication expert Celeste Condit from the University of Georgia in Athens. Condit suggests that, whenever possible, explanations of genetics—by scientists, authors, journalists, or doctors—should draw connections between the two tracks, effectively merging them into one. “We need to link up the gene and environment tracks,” she said, “so that [people] can’t think of one without thinking of the other.”
  • Part of what makes these concepts so difficult lies in the language of genetics itself. A recent study by Condit in the September issue of Clinical Genetics found that when people hear the word genetics, they primarily think of heredity, or the quality of being heritable (passed from one generation to the next). Unfortunately, the terms heredity and heritable are often confused with heritability, which has a very different meaning.
  • heritability has single-handedly muddled the discourse of genetics to such a degree that even experts can’t keep it straight, argues historian of science Evelyn Fox Keller at the Massachusetts Institute of Technology in her recent book, The Mirage of a Space Between Nature and Nurture. Keller describes how heritability (in the technical literature) refers to how much of the variation in a trait is due to genetic explanation. But the term has seeped out into the general public and is, understandably, taken to mean heritable, or ability to be inherited. These concepts are fundamentally different, but often hard to grasp.
  • For example, let’s say that in a population with people of different heights, 60 percent of the variation in height is attributable to genes (as opposed to nutrition). The heritability of height is 60 percent. This does not mean, however, that 60 percent of an individual’s height comes from her genes, and 40 percent from what she ate growing up. Heritability refers to causes of variations (between people), not to causes of traits themselves (in each particular individual). The conflation of crucially different terms like traits and variations has wreaked havoc on the public understanding of genetics.
  • The stakes are high. Condit emphasizes how important a solid understanding of genetics is for making health decisions. Whether people see diabetes or lung cancer as determined by family history or responsive to changes in behavior depends greatly on how they understand genetics. Policy decisions about education, childcare, or the workplace are all informed by a proper understanding of the dynamic interplay of genes and the environment, and this means looking beyond the Mendelian lens of heredity. According to Shenk, everyone in the business of communicating these issues “needs to bend over backwards to help people understand.”
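
The heritability example above is easy to check numerically. The sketch below (plain Python with NumPy; the variance numbers are invented) simulates a population in which 60 percent of the variation in height is genetic, recovers that figure as a ratio of variances, and notes in the comments why the same 60/40 split cannot be carried over to any single person's height.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical components of each person's deviation from mean height (cm):
    # genetic variance 36 cm^2, nutritional variance 24 cm^2, so genes account for
    # 36 / (36 + 24) = 60% of the population variance.
    genetic = rng.normal(0.0, np.sqrt(36.0), n)
    nutrition = rng.normal(0.0, np.sqrt(24.0), n)
    height = 165.0 + genetic + nutrition

    heritability = genetic.var() / height.var()
    print(f"heritability of height in this population: {heritability:.2f}")  # ~0.60

    # What the figure does NOT mean: for one woman who is 180 cm tall, there is no
    # sense in which 108 cm of her height "comes from genes" and 72 cm "from what
    # she ate growing up". Heritability describes causes of variation between
    # people, not causes of the trait in any particular individual.
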
Weiye Loh

It's Even Less in Your Genes by Richard C. Lewontin | The New York Review of Books - 0 views

  • One of the complications is that the effective environment is defined by the life activities of the organism itself.
  • Thus, as organisms evolve, their environments necessarily evolve with them. Although classic Darwinism is framed by referring to organisms adapting to environments, the actual process of evolution involves the creation of new “ecological niches” as new life forms come into existence. Part of the ecological niche of an earthworm is the tunnel excavated by the worm and part of the ecological niche of a tree is the assemblage of fungi associated with the tree’s root system that provide it with nutrients.
  • the distinction between organisms and their environments remains deeply embedded in our consciousness. Partly this is due to the inertia of educational institutions and materials
  • But the problem is deeper than simply intellectual inertia. It goes back, ultimately, to the unconsidered differentiations we make—at every moment when we distinguish among objects—between those in the foreground of our consciousness and the background places in which the objects happen to be situated. Moreover, this distinction creates a hierarchy of objects. We are conscious not only of the skin that encloses and defines the object, but of bits and pieces of that object, each of which must have its own “skin.” That is the problem of anatomization. A car has a motor and brakes and a transmission and an outer body that, at appropriate moments, become separate objects of our consciousness, objects that at least some knowledgeable person recognizes as coherent entities.
  • Evelyn Fox Keller sees “The Mirage of a Space Between Nature and Nurture” as a consequence of our false division of the world into living objects without sufficient consideration of the external milieu in which they are embedded, since organisms help create effective environments through their own life activities.
  • The central point of her analysis has been that gender itself (as opposed to sex) is socially constructed, and that construction has influenced the development of science: If there is a single point on which all feminist scholarship…has converged, it is the importance of recognizing the social construction of gender…. All of my work on gender and science proceeds from this basic recognition. My endeavor has been to call attention to the ways in which the social construction of a binary opposition between “masculine” and “feminine” has influenced the social construction of science.
  • major critical concern of Fox Keller’s present book is the widespread attempt to partition in some quantitative way the contribution made to human variation by differences in biological inheritance, that is, differences in genes, as opposed to differences in life experience. She wants to make clear a distinction between analyzing the relative strength of the causes of variation among individuals and groups, an analysis that is coherent in principle, and simply assigning the relative contributions of biological and environmental causes to the value of some character in an individual
  • It is, for example, all very well to say that genetic variation is responsible for 76 percent of the observed variation in adult height among American women while the remaining 24 percent is a consequence of differences in nutrition. The implication is that if all variation in nutrition were abolished then 24 percent of the observed height variation among individuals in the population in the next generation would disappear. To say, however, that 76 percent of Evelyn Fox Keller’s height was caused by her genes and 24 percent by her nutrition does not make sense. The nonsensical implication of trying to partition the causes of her individual height would be that if she never ate anything she would still be three quarters as tall as she is.
  • In fact, Keller is too optimistic about the assignment of causes of variation even when considering variation in a population. As she herself notes parenthetically, the assignment of relative proportions of population variation to different causes in a population depends on there being no specific interaction between the causes.
  • Keller’s rather casual treatment of the interaction between causal factors in the case of the drummers, despite her very great sophistication in analyzing the meaning of variation, is a symptom of a fault that is deeply embedded in the analytic training and thinking of both natural and social scientists. If there are several variable factors influencing some phenomenon, how are we to assign the relative importance to each in determining total variation? Let us take an extreme example. Suppose that we plant seeds of each of two different varieties of corn in two different locations with the following results measured in bushels of corn produced (see Table 1). There are differences between the varieties in their yield from location to location and there are differences between locations from variety to variety. So, both variety and location matter. But there is no average variation between locations when averaged over varieties or between varieties when averaged over locations. Just by knowing the variation in yield associated with location and variety separately does not tell us which factor is the more important source of variation; nor do the facts of location and variety exhaust the description of that variation.
  •  
    In trying to analyze the natural world, scientists are seldom aware of the degree to which their ideas are influenced both by their way of perceiving the everyday world and by the constraints that our cognitive development puts on our formulations. At every moment of perception of the world around us, we isolate objects as discrete entities with clear boundaries while we relegate the rest to a background in which the objects exist.
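
Lewontin's corn example can be made concrete with a short calculation. The yields below are hypothetical stand-ins for his Table 1, which the excerpt does not reproduce; they are chosen so that variety matters within each location and location matters for each variety, yet the averaged "main effects" of both factors are exactly zero and all of the variation is variety-by-location interaction.

    import numpy as np

    # rows = varieties, columns = locations; yields in bushels (made-up numbers)
    yields = np.array([[20.0, 10.0],
                       [10.0, 20.0]])

    grand_mean = yields.mean()
    variety_effects = yields.mean(axis=1) - grand_mean   # each variety averaged over locations
    location_effects = yields.mean(axis=0) - grand_mean  # each location averaged over varieties

    print(variety_effects)   # [0. 0.] -> no average difference between varieties
    print(location_effects)  # [0. 0.] -> no average difference between locations

    # Knowing the separate contributions of variety and location therefore tells us
    # nothing about which factor is the more important source of variation, and it
    # does not exhaust the description of that variation.
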
Weiye Loh

The Creativity Crisis - Newsweek - 0 views

  • The accepted definition of creativity is production of something original and useful, and that’s what’s reflected in the tests. There is never one right answer. To be creative requires divergent thinking (generating many unique ideas) and then convergent thinking (combining those ideas into the best result).
  • Torrance’s tasks have become the gold standard in creativity assessment, though nobody would argue that they measure creativity perfectly. What’s shocking is how incredibly well Torrance’s creativity index predicted those kids’ creative accomplishments as adults.
  • The correlation to lifetime creative accomplishment was more than three times stronger for childhood creativity than childhood IQ.
  • there is one crucial difference between IQ and CQ scores. With intelligence, there is a phenomenon called the Flynn effect—each generation, scores go up about 10 points. Enriched environments are making kids smarter. With creativity, a reverse trend has just been identified and is being reported for the first time here: American creativity scores are falling.
  • creativity scores had been steadily rising, just like IQ scores, until 1990. Since then, creativity scores have consistently inched downward.
  • It is the scores of younger children in America—from kindergarten through sixth grade—for whom the decline is “most serious.”
  • It’s too early to determine conclusively why U.S. creativity scores are declining. One likely culprit is the number of hours kids now spend in front of the TV and playing videogames rather than engaging in creative activities. Another is the lack of creativity development in our schools. In effect, it’s left to the luck of the draw who becomes creative: there’s no concerted effort to nurture the creativity of all children.
  • Around the world, though, other countries are making creativity development a national priority.
  • In China there has been widespread education reform to extinguish the drill-and-kill teaching style. Instead, Chinese schools are also adopting a problem-based learning approach.
  • When faculty of a major Chinese university asked Plucker to identify trends in American education, he described our focus on standardized curriculum, rote memorization, and nationalized testing.
  • Overwhelmed by curriculum standards, American teachers warn there’s no room in the day for a creativity class.
  • The age-old belief that the arts have a special claim to creativity is unfounded. When scholars gave creativity tasks to both engineering majors and music majors, their scores lay along an identical spectrum, with the same high averages and standard deviations.
  • The argument that we can’t teach creativity because kids already have too much to learn is a false trade-off. Creativity isn’t about freedom from concrete facts. Rather, fact-finding and deep research are vital stages in the creative process.
  • The lore of pop psychology is that creativity occurs on the right side of the brain. But we now know that if you tried to be creative using only the right side of your brain, it’d be like living with ideas perpetually at the tip of your tongue, just beyond reach.
  • Creativity requires constant shifting, blender pulses of both divergent thinking and convergent thinking, to combine new information with old and forgotten ideas. Highly creative people are very good at marshaling their brains into bilateral mode, and the more creative they are, the more they dual-activate.
  • “Creativity can be taught,” says James C. Kaufman, professor at California State University, San Bernardino. What’s common about successful programs is they alternate maximum divergent thinking with bouts of intense convergent thinking, through several stages. Real improvement doesn’t happen in a weekend workshop. But when applied to the everyday process of work or school, brain function improves.
  • highly creative adults tended to grow up in families embodying opposites. Parents encouraged uniqueness, yet provided stability. They were highly responsive to kids’ needs, yet challenged kids to develop skills. This resulted in a sort of adaptability: in times of anxiousness, clear rules could reduce chaos—yet when kids were bored, they could seek change, too. In the space between anxiety and boredom was where creativity flourished.
  • highly creative adults frequently grew up with hardship. Hardship by itself doesn’t lead to creativity, but it does force kids to become more flexible—and flexibility helps with creativity.
  • In early childhood, distinct types of free play are associated with high creativity. Preschoolers who spend more time in role-play (acting out characters) have higher measures of creativity: voicing someone else’s point of view helps develop their ability to analyze situations from different perspectives. When playing alone, highly creative first graders may act out strong negative emotions: they’ll be angry, hostile, anguished.
  • In middle childhood, kids sometimes create paracosms—fantasies of entire alternative worlds. Kids revisit their paracosms repeatedly, sometimes for months, and even create languages spoken there. This type of play peaks at age 9 or 10, and it’s a very strong sign of future creativity.
  • From fourth grade on, creativity no longer occurs in a vacuum; researching and studying become an integral part of coming up with useful solutions. But this transition isn’t easy. As school stuffs more complex information into their heads, kids get overloaded, and creativity suffers. When creative children have a supportive teacher—someone tolerant of unconventional answers, occasional disruptions, or detours of curiosity—they tend to excel. When they don’t, they tend to underperform and drop out of high school or don’t finish college at high rates.
  • They’re quitting because they’re discouraged and bored, not because they’re dark, depressed, anxious, or neurotic. It’s a myth that creative people have these traits. (Those traits actually shut down creativity; they make people less open to experience and less interested in novelty.) Rather, creative people, for the most part, exhibit active moods and positive affect. They’re not particularly happy—contentment is a kind of complacency creative people rarely have. But they’re engaged, motivated, and open to the world.
  • A similar study of 1,500 middle schoolers found that those high in creative self-efficacy had more confidence about their future and ability to succeed. They were sure that their ability to come up with alternatives would aid them, no matter what problems would arise.
  •  
    The Creativity Crisis: For the first time, research shows that American creativity is declining. What went wrong - and how we can fix it.
Weiye Loh

CancerGuide: The Median Isn't the Message - 0 views

  • Statistics recognizes different measures of an "average," or central tendency. The mean is our usual concept of an overall average - add up the items and divide them by the number of sharers
  • The median, a different measure of central tendency, is the half-way point.
  • A politician in power might say with pride, "The mean income of our citizens is $15,000 per year." The leader of the opposition might retort, "But half our citizens make less than $10,000 per year." Both are right, but neither cites a statistic with impassive objectivity. The first invokes a mean, the second a median. (Means are higher than medians in such cases because one millionaire may outweigh hundreds of poor people in setting a mean; but he can balance only one mendicant in calculating a median).
  • The larger issue that creates a common distrust or contempt for statistics is more troubling. Many people make an unfortunate and invalid separation between heart and mind, or feeling and intellect. In some contemporary traditions, abetted by attitudes stereotypically centered on Southern California, feelings are exalted as more "real" and the only proper basis for action - if it feels good, do it - while intellect gets short shrift as a hang-up of outmoded elitism. Statistics, in this absurd dichotomy, often become the symbol of the enemy. As Hilaire Belloc wrote, "Statistics are the triumph of the quantitative method, and the quantitative method is the victory of sterility and death."
  • This is a personal story of statistics, properly interpreted, as profoundly nurturant and life-giving. It declares holy war on the downgrading of intellect by telling a small story about the utility of dry, academic knowledge about science. Heart and head are focal points of one body, one personality.
  • We still carry the historical baggage of a Platonic heritage that seeks sharp essences and definite boundaries. (Thus we hope to find an unambiguous "beginning of life" or "definition of death," although nature often comes to us as irreducible continua.) This Platonic heritage, with its emphasis on clear distinctions and separated immutable entities, leads us to view statistical measures of central tendency wrongly, indeed opposite to the appropriate interpretation in our actual world of variation, shadings, and continua. In short, we view means and medians as the hard "realities," and the variation that permits their calculation as a set of transient and imperfect measurements of this hidden essence. If the median is the reality and variation around the median just a device for its calculation, then "I will probably be dead in eight months" may pass as a reasonable interpretation.
  • But all evolutionary biologists know that variation itself is nature's only irreducible essence. Variation is the hard reality, not a set of imperfect measures for a central tendency. Means and medians are the abstractions. Therefore, I looked at the mesothelioma statistics quite differently - and not only because I am an optimist who tends to see the doughnut instead of the hole, but primarily because I know that variation itself is the reality. I had to place myself amidst the variation. When I learned about the eight-month median, my first intellectual reaction was: fine, half the people will live longer; now what are my chances of being in that half. I read for a furious and nervous hour and concluded, with relief: damned good. I possessed every one of the characteristics conferring a probability of longer life: I was young; my disease had been recognized in a relatively early stage; I would receive the nation's best medical treatment; I had the world to live for; I knew how to read the data properly and not despair.
  • Another technical point then added even more solace. I immediately recognized that the distribution of variation about the eight-month median would almost surely be what statisticians call "right skewed." (In a symmetrical distribution, the profile of variation to the left of the central tendency is a mirror image of variation to the right. In skewed distributions, variation to one side of the central tendency is more stretched out - left skewed if extended to the left, right skewed if stretched out to the right.) The distribution of variation had to be right skewed, I reasoned. After all, the left of the distribution contains an irrevocable lower boundary of zero (since mesothelioma can only be identified at death or before). Thus, there isn't much room for the distribution's lower (or left) half - it must be scrunched up between zero and eight months. But the upper (or right) half can extend out for years and years, even if nobody ultimately survives. The distribution must be right skewed, and I needed to know how long the extended tail ran - for I had already concluded that my favorable profile made me a good candidate for that part of the curve.
  • The distribution was, indeed, strongly right skewed, with a long tail (however small) that extended for several years above the eight-month median. I saw no reason why I shouldn't be in that small tail, and I breathed a very long sigh of relief. My technical knowledge had helped. I had read the graph correctly. I had asked the right question and found the answers. I had obtained, in all probability, the most precious of all possible gifts in the circumstances - substantial time.
  • One final point about statistical distributions. They apply only to a prescribed set of circumstances - in this case to survival with mesothelioma under conventional modes of treatment. If circumstances change, the distribution may alter. I was placed on an experimental protocol of treatment and, if fortune holds, will be in the first cohort of a new distribution with high median and a right tail extending to death by natural causes at advanced old age.
  •  
    The Median Isn't the Message by Stephen Jay Gould
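
Two of the statistical points in the excerpts above can be sketched in a few lines: the politicians' mean-versus-median income example, and the right-skewed survival distribution whose eight-month median says little about the length of its upper tail. All numbers below are invented for illustration, and the lognormal shape is only a convenient stand-in for "right skewed".

    import numpy as np

    rng = np.random.default_rng(1)

    # (1) Mean vs. median: one very high earner pulls the mean far above the median.
    incomes = np.array([6, 7, 8, 9, 10, 11, 12, 14, 16, 1_000]) * 1_000  # dollars/year
    print(np.mean(incomes), np.median(incomes))   # ~109,300 vs 10,500

    # (2) A right-skewed "survival time" distribution bounded below by zero: the
    # lower half is scrunched up under the median, the upper tail stretches out.
    survival_months = rng.lognormal(mean=np.log(8), sigma=0.9, size=100_000)
    print(np.median(survival_months))          # ~8 months, by construction
    print(np.mean(survival_months))            # noticeably larger than the median
    print(np.percentile(survival_months, 95))  # the long right tail, in months
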
Weiye Loh

Kevin Kelly and Steven Johnson on Where Ideas Come From | Magazine - 0 views

  • Say the word “inventor” and most people think of a solitary genius toiling in a basement. But two ambitious new books on the history of innovation—by Steven Johnson and Kevin Kelly, both longtime Wired contributors—argue that great discoveries typically spring not from individual minds but from the hive mind. In Where Good Ideas Come From: The Natural History of Innovation, Johnson draws on seven centuries of scientific and technological progress, from Gutenberg to GPS, to show what sorts of environments nurture ingenuity. He finds that great creative milieus, whether MIT or Los Alamos, New York City or the World Wide Web, are like coral reefs—teeming, diverse colonies of creators who interact with and influence one another.
  • Seven centuries are an eyeblink in the scope of Kelly’s book, What Technology Wants, which looks back over some 50,000 years of history and peers nearly that far into the future. His argument is similarly sweeping: Technology, Kelly believes, can be seen as a sort of autonomous life-form, with intrinsic goals toward which it gropes over the course of its long development. Those goals, he says, are much like the tendencies of biological life, which over time diversifies, specializes, and (eventually) becomes more sentient.
  • We share a fascination with the long history of simultaneous invention: cases where several people come up with the same idea at almost exactly the same time. Calculus, the electrical battery, the telephone, the steam engine, the radio—all these groundbreaking innovations were hit upon by multiple inventors working in parallel with no knowledge of one another.
  • It’s amazing that the myth of the lone genius has persisted for so long, since simultaneous invention has always been the norm, not the exception. Anthropologists have shown that the same inventions tended to crop up in prehistory at roughly similar times, in roughly the same order, among cultures on different continents that couldn’t possibly have contacted one another.
  • Also, there’s a related myth—that innovation comes primarily from the profit motive, from the competitive pressures of a market society. If you look at history, innovation doesn’t come just from giving people incentives; it comes from creating environments where their ideas can connect.
  • The musician Brian Eno invented a wonderful word to describe this phenomenon: scenius. We normally think of innovators as independent geniuses, but Eno’s point is that innovation comes from social scenes, from passionate and connected groups of people.
  • It turns out that the lone genius entrepreneur has always been a rarity—there’s far more innovation coming out of open, nonmarket networks than we tend to assume.
  • Really, we should think of ideas as connections, in our brains and among people. Ideas aren’t self-contained things; they’re more like ecologies and networks. They travel in clusters.
  • ideas are networks
  • In part, that’s because ideas that leap too far ahead are almost never implemented—they aren’t even valuable. People can absorb only one advance, one small hop, at a time. Gregor Mendel’s ideas about genetics, for example: He formulated them in 1865, but they were ignored for 35 years because they were too advanced. Nobody could incorporate them. Then, when the collective mind was ready and his idea was only one hop away, three different scientists independently rediscovered his work within roughly a year of one another.
  • Charles Babbage is another great case study. His “analytical engine,” which he started designing in the 1830s, was an incredibly detailed vision of what would become the modern computer, with a CPU, RAM, and so on. But it couldn’t possibly have been built at the time, and his ideas had to be rediscovered a hundred years later.
  • I think there are a lot of ideas today that are ahead of their time. Human cloning, autopilot cars, patent-free law—all are close technically but too many steps ahead culturally. Innovating is about more than just having the idea yourself; you also have to bring everyone else to where your idea is. And that becomes really difficult if you’re too many steps ahead.
  • The scientist Stuart Kauffman calls this the “adjacent possible.” At any given moment in evolution—of life, of natural systems, or of cultural systems—there’s a space of possibility that surrounds any current configuration of things. Change happens when you take that configuration and arrange it in a new way. But there are limits to how much you can change in a single move.
  • Which is why the great inventions are usually those that take the smallest possible step to unleash the most change. That was the difference between Tim Berners-Lee’s successful HTML code and Ted Nelson’s abortive Xanadu project. Both tried to jump into the same general space—a networked hypertext—but Tim’s approach did it with a dumb half-step, while Ted’s earlier, more elegant design required that everyone take five steps all at once.
  • Also, the steps have to be taken in the right order. You can’t invent the Internet and then the digital computer. This is true of life as well. The building blocks of DNA had to be in place before evolution could build more complex things. One of the key ideas I’ve gotten from you, by the way—when I read your book Out of Control in grad school—is this continuity between biological and technological systems.
  • technology is something that can give meaning to our lives, particularly in a secular world.
  • He had this bleak, soul-sucking vision of technology as an autonomous force for evil. You also present technology as a sort of autonomous force—as wanting something, over the long course of its evolution—but it’s a more balanced and ultimately positive vision, which I find much more appealing than the alternative.
  • As I started thinking about the history of technology, there did seem to be a sense in which, during any given period, lots of innovations were in the air, as it were. They came simultaneously. It appeared as if they wanted to happen. I should hasten to add that it’s not a conscious agency; it’s a lower form, something like the way an organism or bacterium can be said to have certain tendencies, certain trends, certain urges. But it’s an agency nevertheless.
  • technology wants increasing diversity—which is what I think also happens in biological systems, as the adjacent possible becomes larger with each innovation. As tech critics, I think we have to keep this in mind, because when you expand the diversity of a system, that leads to an increase in great things and an increase in crap.
  • the idea that the most creative environments allow for repeated failure.
  • And for wastes of time and resources. If you knew nothing about the Internet and were trying to figure it out from the data, you would reasonably conclude that it was designed for the transmission of spam and porn. And yet at the same time, there’s more amazing stuff available to us than ever before, thanks to the Internet.
  • To create something great, you need the means to make a lot of really bad crap. Another example is spectrum. One reason we have this great explosion of innovation in wireless right now is that the US deregulated spectrum. Before that, spectrum was something too precious to be wasted on silliness. But when you deregulate—and say, OK, now waste it—then you get Wi-Fi.
  • If we didn’t have genetic mutations, we wouldn’t have us. You need error to open the door to the adjacent possible.
  • image of the coral reef as a metaphor for where innovation comes from. So what, today, are some of the most reeflike places in the technological realm?
  • Twitter—not to see what people are having for breakfast, of course, but to see what people are talking about, the links to articles and posts that they’re passing along.
  • second example of an information coral reef, and maybe the less predictable one, is the university system. As much as we sometimes roll our eyes at the ivory-tower isolation of universities, they continue to serve as remarkable engines of innovation.
  • Life seems to gravitate toward these complex states where there’s just enough disorder to create new things. There’s a rate of mutation just high enough to let interesting new innovations happen, but not so many mutations that every new generation dies off immediately.
  • technology is an extension of life. Both life and technology are faces of the same larger system.
  •  
    Kevin Kelly and Steven Johnson on Where Ideas Come From, Wired, September 27, 2010 (Wired October 2010 issue)
Weiye Loh

Epiphenom: Suicide in American colleges - the importance of existential well being - 0 views

  • Lindsay Taliaferro, a doctoral candidate at the University of Florida, surveyed over 400 of her fellow students. The response rate was high - around 90%. The good news is that, for the most part, they were not suicidal! On average, they scored 11 on a 70-point scale of suicidal thinking.
  • as expected, those who reported high levels of religious well being (e.g. that they find strength or support from God) or involvement in religious activities had fewer suicidal thoughts.
  • She also asked how hopeless or depressed the students felt, and how much social support they felt they got. When she took this into account, the effects of religion disappeared. What this suggests is that religious well-being and involvement have whatever effects they have by reducing hopelessness and depression, and by increasing social support.
  • what is surprising is that she found a third factor that was even more important than religion and social support. That factor was "Existential Well-Being", which relates to things such as feeling fulfilled and satisfied with life, and finding meaning and purpose in life.
  • Existential Well-Being remained important even after taking into account hopelessness, depression and social support. In other words, even if you feel hopeless, depressed, and alone, existential well-being (unlike religious well-being) can ease suicidal thoughts.
  • this does seem to fit in with other studies which have shown that spirituality does not reduce suicidal thoughts, and that feeling close to God is linked to a history of depression, whereas existential well being is linked to dramatically less depression.
  • Results from the present investigation indicate that many college students did not demonstrate high involvement in organized religion. Yet they reported high levels of spiritual well-being, especially existential well-being, and low levels of suicidal ideation. Furthermore, results highlighted existential well-being as an important factor associated with lower levels of suicidal ideation among college students. Overall, these findings suggest that a strategy for reducing distress and preventing suicide among college students may involve exploring mechanisms that nurture a sense of meaning in life in individuals for whom organized religion remains unimportant. Health professionals may have more success in improving young people’s sense of meaning and purpose by methods other than an increase in faith, participation in organized religion, or other indicators of religiosity.
  •  
    Suicide
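
The adjustment logic described above, an association that disappears once hopelessness, depression and social support are controlled for, can be illustrated with simulated data. The sketch below is not Taliaferro's analysis or data: it builds a toy dataset in which religious well-being acts only through two intermediate variables, then compares the regression coefficient with and without them.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 400

    religious_wb = rng.normal(size=n)
    # In this toy model, religious well-being works only through lower hopelessness
    # and greater social support; it has no direct effect on the outcome.
    hopelessness = -0.6 * religious_wb + rng.normal(size=n)
    support = 0.5 * religious_wb + rng.normal(size=n)
    suicidal_ideation = 0.8 * hopelessness - 0.6 * support + rng.normal(size=n)

    def ols_coefs(y, *predictors):
        """Ordinary least squares; returns the slope for each predictor."""
        X = np.column_stack([np.ones(len(y)), *predictors])
        return np.linalg.lstsq(X, y, rcond=None)[0][1:]

    print(ols_coefs(suicidal_ideation, religious_wb))
    # clearly negative: religion looks protective when it is the only predictor
    print(ols_coefs(suicidal_ideation, religious_wb, hopelessness, support))
    # near zero for religious_wb once the intermediate variables are in the model
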
Weiye Loh

Is There a Liberal Gene? : Discovery News - 0 views

  • Is political ideology derived from a person's social environment or is it a result of genetic predisposition?
  • It's an interaction of both, according to a recent study on our political leanings that boosts both sides of the nature versus nurture debate.
  • Scientists at the University of California San Diego and Harvard University determined that people who carry a variant of the DRD4 gene are more likely to be liberals as adults, depending on the number of friendships they had during high school. They published their study in a recent issue of The Journal of Politics.
  • The 7R variant of DRD4, a dopamine receptor gene, had previously been associated with novelty seeking. The researchers theorized novelty seeking would be related to openness, a psychological trait that has been associated with political liberalism.
  • However, social environment was critical. The more friends gene carriers have in high school, the more likely they are to be liberals as adults. The authors write, "Ten friends can move a person with two copies of 7R allele almost halfway from being a conservative to moderate or from being moderate to liberal."
  •  
    IS THERE A LIBERAL GENE?
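
The "ten friends" quote above describes a gene-by-environment interaction: in the fitted model, neither the 7R variant nor the number of friendships shifts ideology on its own; only their combination does. The toy function below is not the published model from The Journal of Politics; the coefficient and the 1-to-5 ideology scale are assumptions chosen so that two copies of the variant plus ten friends move a prediction halfway from "moderate" toward "liberal".

    def predicted_ideology(copies_of_7r, n_friends, b_interaction=0.025):
        """Ideology on an assumed 1 (conservative) to 5 (liberal) scale; 3 = moderate.
        Only the gene-by-friends interaction term moves the prediction."""
        return 3.0 + b_interaction * copies_of_7r * n_friends

    print(predicted_ideology(copies_of_7r=0, n_friends=10))  # 3.0 - friends alone do nothing
    print(predicted_ideology(copies_of_7r=2, n_friends=0))   # 3.0 - 7R alone does nothing
    print(predicted_ideology(copies_of_7r=2, n_friends=10))  # 3.5 - halfway from moderate to liberal
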
Weiye Loh

The Inequality That Matters - Tyler Cowen - The American Interest Magazine - 0 views

  • most of the worries about income inequality are bogus, but some are probably better grounded, and more serious, than even many of their heralds realize.
  • In terms of immediate political stability, there is less to the income inequality issue than meets the eye. Most analyses of income inequality neglect two major points. First, the inequality of personal well-being is sharply down over the past hundred years and perhaps over the past twenty years as well. Bill Gates is much, much richer than I am, yet it is not obvious that he is much happier if, indeed, he is happier at all. I have access to penicillin, air travel, good cheap food, the Internet and virtually all of the technical innovations that Gates does. Like the vast majority of Americans, I have access to some important new pharmaceuticals, such as statins to protect against heart disease. To be sure, Gates receives the very best care from the world’s top doctors, but our health outcomes are in the same ballpark. I don’t have a private jet or take luxury vacations, and—I think it is fair to say—my house is much smaller than his. I can’t meet with the world’s elite on demand. Still, by broad historical standards, what I share with Bill Gates is far more significant than what I don’t share with him.
  • when average people read about or see income inequality, they don’t feel the moral outrage that radiates from the more passionate egalitarian quarters of society. Instead, they think their lives are pretty good and that they either earned through hard work or lucked into a healthy share of the American dream.
  • This is why, for example, large numbers of Americans oppose the idea of an estate tax even though the current form of the tax, slated to return in 2011, is very unlikely to affect them or their estates. In narrowly self-interested terms, that view may be irrational, but most Americans are unwilling to frame national issues in terms of rich versus poor. There’s a great deal of hostility toward various government bailouts, but the idea of “undeserving” recipients is the key factor in those feelings. Resentment against Wall Street gamesters hasn’t spilled over much into resentment against the wealthy more generally. The bailout for General Motors’ labor unions wasn’t so popular either—again, obviously not because of any bias against the wealthy but because a basic sense of fairness was violated. As of November 2010, congressional Democrats are of a mixed mind as to whether the Bush tax cuts should expire for those whose annual income exceeds $250,000; that is in large part because their constituents bear no animus toward rich people, only toward undeservedly rich people.
  • envy is usually local. At least in the United States, most economic resentment is not directed toward billionaires or high-roller financiers—not even corrupt ones. It’s directed at the guy down the hall who got a bigger raise. It’s directed at the husband of your wife’s sister, because the brand of beer he stocks costs $3 a case more than yours, and so on. That’s another reason why a lot of people aren’t so bothered by income or wealth inequality at the macro level. Most of us don’t compare ourselves to billionaires. Gore Vidal put it honestly: “Whenever a friend succeeds, a little something in me dies.”
  • Occasionally the cynic in me wonders why so many relatively well-off intellectuals lead the egalitarian charge against the privileges of the wealthy. One group has the status currency of money and the other has the status currency of intellect, so might they be competing for overall social regard? The high status of the wealthy in America, or for that matter the high status of celebrities, seems to bother our intellectual class most. That class composes a very small group, however, so the upshot is that growing income inequality won’t necessarily have major political implications at the macro level.
  • All that said, income inequality does matter—for both politics and the economy.
  • The numbers are clear: Income inequality has been rising in the United States, especially at the very top. The data show a big difference between two quite separate issues, namely income growth at the very top of the distribution and greater inequality throughout the distribution. The first trend is much more pronounced than the second, although the two are often confused.
  • When it comes to the first trend, the share of pre-tax income earned by the richest 1 percent of earners has increased from about 8 percent in 1974 to more than 18 percent in 2007. Furthermore, the richest 0.01 percent (the 15,000 or so richest families) had a share of less than 1 percent in 1974 but more than 6 percent of national income in 2007. As noted, those figures are from pre-tax income, so don’t look to the George W. Bush tax cuts to explain the pattern. Furthermore, these gains have been sustained and have evolved over many years, rather than coming in one or two small bursts between 1974 and today.1
  • At the same time, wage growth for the median earner has slowed since 1973. But that slower wage growth has afflicted large numbers of Americans, and it is conceptually distinct from the higher relative share of top income earners. For instance, if you take the 1979–2005 period, the average incomes of the bottom fifth of households increased only 6 percent while the incomes of the middle quintile rose by 21 percent. That’s a widening of the spread of incomes, but it’s not so drastic compared to the explosive gains at the very top.
  • The broader change in income distribution, the one occurring beneath the very top earners, can be deconstructed in a manner that makes nearly all of it look harmless. For instance, there is usually greater inequality of income among both older people and the more highly educated, if only because there is more time and more room for fortunes to vary. Since America is becoming both older and more highly educated, our measured income inequality will increase pretty much by demographic fiat. Economist Thomas Lemieux at the University of British Columbia estimates that these demographic effects explain three-quarters of the observed rise in income inequality for men, and even more for women.2
  • Attacking the problem from a different angle, other economists are challenging whether there is much growth in inequality at all below the super-rich. For instance, real incomes are measured using a common price index, yet poorer people are more likely to shop at discount outlets like Wal-Mart, which have seen big price drops over the past twenty years.3 Once we take this behavior into account, it is unclear whether the real income gaps between the poor and middle class have been widening much at all. Robert J. Gordon, an economist from Northwestern University who is hardly known as a right-wing apologist, wrote in a recent paper that “there was no increase of inequality after 1993 in the bottom 99 percent of the population”, and that whatever overall change there was “can be entirely explained by the behavior of income in the top 1 percent.”4
  • And so we come again to the gains of the top earners, clearly the big story told by the data. It’s worth noting that over this same period of time, inequality of work hours increased too. The top earners worked a lot more and most other Americans worked somewhat less. That’s another reason why high earners don’t occasion more resentment: Many people understand how hard they have to work to get there. It also seems that most of the income gains of the top earners were related to performance pay—bonuses, in other words—and not wildly out-of-whack yearly salaries.5
  • It is also the case that any society with a lot of “threshold earners” is likely to experience growing income inequality. A threshold earner is someone who seeks to earn a certain amount of money and no more. If wages go up, that person will respond by seeking less work or by working less hard or less often. That person simply wants to “get by” in terms of absolute earning power in order to experience other gains in the form of leisure—whether spending time with friends and family, walking in the woods and so on. Luck aside, that person’s income will never rise much above the threshold.
  • The funny thing is this: For years, many cultural critics in and of the United States have been telling us that Americans should behave more like threshold earners. We should be less harried, more interested in nurturing friendships, and more interested in the non-commercial sphere of life. That may well be good advice. Many studies suggest that above a certain level more money brings only marginal increments of happiness. What isn’t so widely advertised is that those same critics have basically been telling us, without realizing it, that we should be acting in such a manner as to increase measured income inequality. Not only is high inequality an inevitable concomitant of human diversity, but growing income inequality may be, too, if lots of us take the kind of advice that will make us happier.
  • Why is the top 1 percent doing so well?
  • Steven N. Kaplan and Joshua Rauh have recently provided a detailed estimation of particular American incomes.6 Their data do not comprise the entire U.S. population, but from partial financial records they find a very strong role for the financial sector in driving the trend toward income concentration at the top. For instance, for 2004, nonfinancial executives of publicly traded companies accounted for less than 6 percent of the top 0.01 percent income bracket. In that same year, the top 25 hedge fund managers combined appear to have earned more than all of the CEOs from the entire S&P 500. The number of Wall Street investors earning more than $100 million a year was nine times higher than the public company executives earning that amount. The authors also relate that they shared their estimates with a former U.S. Secretary of the Treasury, one who also has a Wall Street background. He thought their estimates of earnings in the financial sector were, if anything, understated.
  • Many of the other high earners are also connected to finance. After Wall Street, Kaplan and Rauh identify the legal sector as a contributor to the growing spread in earnings at the top. Yet many high-earning lawyers are doing financial deals, so a lot of the income generated through legal activity is rooted in finance. Other lawyers are defending corporations against lawsuits, filing lawsuits or helping corporations deal with complex regulations. The returns to these activities are an artifact of the growing complexity of the law and government growth rather than a tale of markets per se. Finance aside, there isn’t much of a story of market failure here, even if we don’t find the results aesthetically appealing.
  • When it comes to professional athletes and celebrities, there isn’t much of a mystery as to what has happened. Tiger Woods earns much more, even adjusting for inflation, than Arnold Palmer ever did. J.K. Rowling, the first billionaire author, earns much more than did Charles Dickens. These high incomes come, on balance, from the greater reach of modern communications and marketing. Kids all over the world read about Harry Potter. There is more purchasing power to spend on children’s books and, indeed, on culture and celebrities more generally. For high-earning celebrities, hardly anyone finds these earnings so morally objectionable as to suggest that they be politically actionable. Cultural critics can complain that good schoolteachers earn too little, and they may be right, but that does not make celebrities into political targets. They’re too popular. It’s also pretty clear that most of them work hard to earn their money, by persuading fans to buy or otherwise support their product. Most of these individuals do not come from elite or extremely privileged backgrounds, either. They worked their way to the top, and even if Rowling is not an author for the ages, her books tapped into the spirit of their time in a special way. We may or may not wish to tax the wealthy, including wealthy celebrities, at higher rates, but there is no need to “cure” the structural causes of higher celebrity incomes.
  • to be sure, the high incomes in finance should give us all pause.
  • The first factor driving high returns is sometimes called by practitioners “going short on volatility.” Sometimes it is called “negative skewness.” In plain English, this means that some investors opt for a strategy of betting against big, unexpected moves in market prices. Most of the time investors will do well by this strategy, since big, unexpected moves are outliers by definition. Traders will earn above-average returns in good times. In bad times they won’t suffer fully when catastrophic returns come in, as sooner or later is bound to happen, because the downside of these bets is partly socialized onto the Treasury, the Federal Reserve and, of course, the taxpayers and the unemployed.
  • if you bet against unlikely events, most of the time you will look smart and have the money to validate the appearance. Periodically, however, you will look very bad. Does that kind of pattern sound familiar? It happens in finance, too. Betting against a big decline in home prices is analogous to betting against the Wizards. Every now and then such a bet will blow up in your face, though in most years that trading activity will generate above-average profits and big bonuses for the traders and CEOs.
  • To this mix we can add the fact that many money managers are investing other people’s money. If you plan to stay with an investment bank for ten years or less, most of the people playing this investing strategy will make out very well most of the time. Everyone’s time horizon is a bit limited and you will bring in some nice years of extra returns and reap nice bonuses. And let’s say the whole thing does blow up in your face? What’s the worst that can happen? Your bosses fire you, but you will still have millions in the bank and that MBA from Harvard or Wharton. For the people actually investing the money, there’s barely any downside risk other than having to quit the party early. Furthermore, if everyone else made more or less the same mistake (very surprising major events, such as a busted housing market, affect virtually everybody), you’re hardly disgraced. You might even get rehired at another investment bank, or maybe a hedge fund, within months or even weeks.
  • Moreover, smart shareholders will acquiesce to or even encourage these gambles. They gain on the upside, while the downside, past the point of bankruptcy, is borne by the firm’s creditors. And will the bondholders object? Well, they might have a difficult time monitoring the internal trading operations of financial institutions. Of course, the firm’s trading book cannot be open to competitors, and that means it cannot be open to bondholders (or even most shareholders) either. So what, exactly, will they have in hand to object to?
  • Perhaps more important, government bailouts minimize the damage to creditors on the downside. Neither the Treasury nor the Fed allowed creditors to take any losses from the collapse of the major banks during the financial crisis. The U.S. government guaranteed these loans, either explicitly or implicitly. Guaranteeing the debt also encourages equity holders to take more risk. While current bailouts have not in general maintained equity values, and while share prices have often fallen to near zero following the bust of a major bank, the bailouts still give the bank a lifeline. Instead of the bank being destroyed, sometimes those equity prices do climb back out of the hole. This is true of the major surviving banks in the United States, and even AIG is paying back its bailout. For better or worse, we’re handing out free options on recovery, and that encourages banks to take more risk in the first place.
  • there is an unholy dynamic of short-term trading and investing, backed up by bailouts and risk reduction from the government and the Federal Reserve. This is not good. “Going short on volatility” is a dangerous strategy from a social point of view. For one thing, in so-called normal times, the finance sector attracts a big chunk of the smartest, most hard-working and most talented individuals. That represents a huge human capital opportunity cost to society and the economy at large. But more immediate and more important, it means that banks take far too many risks and go way out on a limb, often in correlated fashion. When their bets turn sour, as they did in 2007–09, everyone else pays the price.
  • And it’s not just the taxpayer cost of the bailout that stings. The financial disruption ends up throwing a lot of people out of work down the economic food chain, often for long periods. Furthermore, the Federal Reserve System has recapitalized major U.S. banks by paying interest on bank reserves and by keeping an unusually high interest rate spread, which allows banks to borrow short from Treasury at near-zero rates and invest in other higher-yielding assets and earn back lots of money rather quickly. In essence, we’re allowing banks to earn their way back by arbitraging interest rate spreads against the U.S. government. This is rarely called a bailout and it doesn’t count as a normal budget item, but it is a bailout nonetheless. This type of implicit bailout brings high social costs by slowing down economic recovery (the interest rate spreads require tight monetary policy) and by redistributing income from the Treasury to the major banks.
  • the “going short on volatility” strategy increases income inequality. In normal years the financial sector is flush with cash and high earnings. In implosion years a lot of the losses are borne by other sectors of society. In other words, financial crisis begets income inequality. Despite being conceptually distinct phenomena, the political economy of income inequality is, in part, the political economy of finance. Simon Johnson tabulates the numbers nicely: From 1973 to 1985, the financial sector never earned more than 16 percent of domestic corporate profits. In 1986, that figure reached 19 percent. In the 1990s, it oscillated between 21 percent and 30 percent, higher than it had ever been in the postwar period. This decade, it reached 41 percent. Pay rose just as dramatically. From 1948 to 1982, average compensation in the financial sector ranged between 99 percent and 108 percent of the average for all domestic private industries. From 1983, it shot upward, reaching 181 percent in 2007.7
  • There’s a second reason why the financial sector abets income inequality: the “moving first” issue. Let’s say that some news hits the market and that traders interpret this news at different speeds. One trader figures out what the news means in a second, while the other traders require five seconds. Still other traders require an entire day or maybe even a month to figure things out. The early traders earn the extra money. They buy the proper assets early, at the lower prices, and reap most of the gains when the other, later traders pile on. Similarly, if you buy into a successful tech company in the early stages, you are “moving first” in a very effective manner, and you will capture most of the gains if that company hits it big.
  • The moving-first phenomenon sums to a “winner-take-all” market. Only some relatively small number of traders, sometimes just one trader, can be first. Those who are first will make far more than those who are fourth or fifth. This difference will persist, even if those who are fourth come pretty close to competing with those who are first. In this context, first is first and it doesn’t matter much whether those who come in fourth pile on a month, a minute or a fraction of a second later. Those who bought (or sold, as the case may be) first have captured and locked in most of the available gains. Since gains are concentrated among the early winners, and the closeness of the runner-ups doesn’t so much matter for income distribution, asset-market trading thus encourages the ongoing concentration of wealth. Many investors make lots of mistakes and lose their money, but each year brings a new bunch of projects that can turn the early investors and traders into very wealthy individuals.
  • These two features of the problem—“going short on volatility” and “getting there first”—are related. Let’s say that Goldman Sachs regularly secures a lot of the best and quickest trades, whether because of its quality analysis, inside connections or high-frequency trading apparatus (it has all three). It builds up a treasure chest of profits and continues to hire very sharp traders and to receive valuable information. Those profits allow it to make “short on volatility” bets faster than anyone else, because if it messes up, it still has a large enough buffer to pad losses. This increases the odds that Goldman will repeatedly pull in spectacular profits.
  • Still, every now and then Goldman will go bust, or would go bust if not for government bailouts. But the odds are in any given year that it won’t because of the advantages it and other big banks have. It’s as if the major banks have tapped a hole in the social till and they are drinking from it with a straw. In any given year, this practice may seem tolerable—didn’t the bank earn the money fair and square by a series of fairly normal looking trades? Yet over time this situation will corrode productivity, because what the banks do bears almost no resemblance to a process of getting capital into the hands of those who can make most efficient use of it. And it leads to periodic financial explosions. That, in short, is the real problem of income inequality we face today. It’s what causes the inequality at the very top of the earning pyramid that has dangerous implications for the economy as a whole.
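  • A toy simulation of the payoff profile described above, not drawn from the essay itself: a bet that earns a small, steady gain in most years and takes a large loss in the rare blowup year. The probabilities and amounts below are invented purely for illustration.

```python
import random

def short_vol_year(p_blowup=0.05, steady_gain=1.0, blowup_loss=-30.0):
    """One year of a stylized 'short volatility' bet: a small gain with high
    probability, a large loss with low probability (all numbers are invented)."""
    return blowup_loss if random.random() < p_blowup else steady_gain

random.seed(7)
results = [short_vol_year() for _ in range(30)]
print("Winning years:", sum(r > 0 for r in results), "out of", len(results))
print("Cumulative result:", round(sum(results), 1))
```

  Run it a few times with different seeds: most windows show a long streak of winning years, and the cumulative result hinges entirely on whether a blowup happens to land inside the window.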
  • What about controlling bank risk-taking directly with tight government oversight? That is not practical. There are more ways for banks to take risks than even knowledgeable regulators can possibly control; it just isn’t that easy to oversee a balance sheet with hundreds of billions of dollars on it, especially when short-term positions are wound down before quarterly inspections. It’s also not clear how well regulators can identify risky assets. Some of the worst excesses of the financial crisis were grounded in mortgage-backed assets—a very traditional function of banks—not exotic derivatives trading strategies. Virtually any asset position can be used to bet long odds, one way or another. It is naive to think that underpaid, undertrained regulators can keep up with financial traders, especially when the latter stand to earn billions by circumventing the intent of regulations while remaining within the letter of the law.
  • For the time being, we need to accept the possibility that the financial sector has learned how to game the American (and UK-based) system of state capitalism. It’s no longer obvious that the system is stable at a macro level, and extreme income inequality at the top has been one result of that imbalance. Income inequality is a symptom, however, rather than a cause of the real problem. The root cause of income inequality, viewed in the most general terms, is extreme human ingenuity, albeit of a perverse kind. That is why it is so hard to control.
  • Another root cause of growing inequality is that the modern world, by so limiting our downside risk, makes extreme risk-taking all too comfortable and easy. More risk-taking will mean more inequality, sooner or later, because winners always emerge from risk-taking. Yet bankers who take bad risks (provided those risks are legal) simply do not end up with bad outcomes in any absolute sense. They still have millions in the bank, lots of human capital and plenty of social status. We’re not going to bring back torture, trial by ordeal or debtors’ prisons, nor should we. Yet the threat of impoverishment and disgrace no longer looms the way it once did, so we no longer can constrain excess financial risk-taking. It’s too soft and cushy a world.
  • Why don’t we simply eliminate the safety net for clueless or unlucky risk-takers so that losses equal gains overall? That’s a good idea in principle, but it is hard to put into practice. Once a financial crisis arrives, politicians will seek to limit the damage, and that means they will bail out major financial institutions. Had we not passed TARP and related policies, the United States probably would have faced unemployment rates of 25 percent or higher, as in the Great Depression. The political consequences would not have been pretty. Bank bailouts may sound quite interventionist, and indeed they are, but in relative terms they probably were the most libertarian policy we had on tap. It meant big one-time expenses, but, for the most part, it kept government out of the real economy (the General Motors bailout aside).
  • We probably don’t have any solution to the hazards created by our financial sector, not because plutocrats are preventing our political system from adopting appropriate remedies, but because we don’t know what those remedies are. Yet neither is another crisis immediately upon us. The underlying dynamic favors excess risk-taking, but banks at the current moment fear the scrutiny of regulators and the public and so are playing it fairly safe. They are sitting on money rather than lending it out. The biggest risk today is how few parties will take risks, and, in part, the caution of banks is driving our current protracted economic slowdown. According to this view, the long run will bring another financial crisis once moods pick up and external scrutiny weakens, but that day of reckoning is still some ways off.
  • Is the overall picture a shame? Yes. Is it distorting resource distribution and productivity in the meantime? Yes. Will it again bring our economy to its knees? Probably. Maybe that’s simply the price of modern society. Income inequality will likely continue to rise and we will search in vain for the appropriate political remedies for our underlying problems.
Weiye Loh

Odds Are, It's Wrong - Science News - 0 views

  • science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings.
  • a mutant form of math has deflected science’s heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
  • science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.
  • ...24 more annotations...
  • Experts in the math of probability and statistics are well aware of these problems and have for decades expressed concern about them in major journals. Over the years, hundreds of published papers have warned that science’s love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn’t believe what you read in the scientific literature.
  • “There are more false claims made in the medical literature than anybody appreciates,” he says. “There’s no question about that.” Nobody contends that all of science is wrong, or that it hasn’t compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. “A lot of scientists don’t understand statistics,” says Goodman. “And they don’t understand statistics because the statistics don’t make sense.”
  • In 2007, for instance, researchers combing the medical literature found numerous studies linking a total of 85 genetic variants in 70 different genes to acute coronary syndrome, a cluster of heart problems. When the researchers compared genetic tests of 811 patients that had the syndrome with a group of 650 (matched for sex and age) that didn’t, only one of the suspect gene variants turned up substantially more often in those with the syndrome — a number to be expected by chance. “Our null results provide no support for the hypothesis that any of the 85 genetic variants tested is a susceptibility factor” for the syndrome, the researchers reported in the Journal of the American Medical Association. How could so many studies be wrong? Because their conclusions relied on “statistical significance,” a concept at the heart of the mathematical analysis of modern scientific experiments.
  • Statistical significance is a phrase that every science graduate student learns, but few comprehend. While its origins stretch back at least to the 19th century, the modern notion was pioneered by the mathematician Ronald A. Fisher in the 1920s. His original interest was agriculture. He sought a test of whether variation in crop yields was due to some specific intervention (say, fertilizer) or merely reflected random factors beyond experimental control. Fisher first assumed that fertilizer caused no difference — the “no effect” or “null” hypothesis. He then calculated a number called the P value, the probability that an observed yield in a fertilized field would occur if fertilizer had no real effect. If P is less than .05 — meaning the chance of a fluke is less than 5 percent — the result should be declared “statistically significant,” Fisher arbitrarily declared, and the no effect hypothesis should be rejected, supposedly confirming that fertilizer works. Fisher’s P value eventually became the ultimate arbiter of credibility for science results of all sorts.
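  • A minimal sketch of that testing procedure, with made-up crop yields; the standard two-sample t test stands in here for Fisher's original analysis-of-variance machinery.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=50, scale=5, size=20)     # unfertilized plots, made-up yields
fertilized = rng.normal(loc=54, scale=5, size=20)  # fertilized plots, made-up yields

# P is the probability of seeing a difference at least this large
# if the null ("no effect") hypothesis were true.
t_stat, p_value = stats.ttest_ind(fertilized, control)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
print("Declared 'statistically significant' under the .05 convention:", p_value < 0.05)
```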
  • But in fact, there’s no logical basis for using a P value from a single study to draw any conclusion. If the chance of a fluke is less than 5 percent, two possible conclusions remain: There is a real effect, or the result is an improbable fluke. Fisher’s method offers no way to know which is which. On the other hand, if a study finds no statistically significant effect, that doesn’t prove anything, either. Perhaps the effect doesn’t exist, or maybe the statistical test wasn’t powerful enough to detect a small but real effect.
  • Soon after Fisher established his system of statistical significance, it was attacked by other mathematicians, notably Egon Pearson and Jerzy Neyman. Rather than testing a null hypothesis, they argued, it made more sense to test competing hypotheses against one another. That approach also produces a P value, which is used to gauge the likelihood of a “false positive” — concluding an effect is real when it actually isn’t. What eventually emerged was a hybrid mix of the mutually inconsistent Fisher and Neyman-Pearson approaches, which has rendered interpretations of standard statistics muddled at best and simply erroneous at worst. As a result, most scientists are confused about the meaning of a P value or how to interpret it. “It’s almost never, ever, ever stated correctly, what it means,” says Goodman.
  • experimental data yielding a P value of .05 means that there is only a 5 percent chance of obtaining the observed (or more extreme) result if no real effect exists (that is, if the no-difference hypothesis is correct). But many explanations mangle the subtleties in that definition. A recent popular book on issues involving science, for example, states a commonly held misperception about the meaning of statistical significance at the .05 level: “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.”
  • That interpretation commits an egregious logical error (technical term: “transposed conditional”): confusing the odds of getting a result (if a hypothesis is true) with the odds favoring the hypothesis if you observe that result. A well-fed dog may seldom bark, but observing the rare bark does not imply that the dog is hungry. A dog may bark 5 percent of the time even if it is well-fed all of the time. (See Box 2)
    • Weiye Loh
       
      Does the problem, then, lie not in statistics but in the interpretation of statistics? Is the fallacy of appeal to probability at work in such interpretation?
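  • One way to see how the interpretation goes wrong is a simulation built on an invented assumption, namely that only 10 percent of the hypotheses being tested describe real effects. The 5 percent error rate applies to each test of a true null, yet among the results that clear P < .05 the share of flukes is far higher.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies, n = 5000, 30
true_rate, effect = 0.10, 0.5   # assumed share of real hypotheses and their size

false_pos = true_pos = 0
for _ in range(n_studies):
    real = rng.random() < true_rate
    a = rng.normal(0, 1, n)
    b = rng.normal(effect if real else 0.0, 1, n)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        true_pos += int(real)
        false_pos += int(not real)

significant = true_pos + false_pos
print("By construction, P(significant | no real effect) is about 5%,")
print(f"but P(no real effect | significant) comes out near {false_pos / significant:.0%}.")
```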
  • Another common error equates statistical significance to “significance” in the ordinary use of the word. Because of the way statistical formulas work, a study with a very large sample can detect “statistical significance” for a small effect that is meaningless in practical terms. A new drug may be statistically better than an old drug, but for every thousand people you treat you might get just one or two additional cures — not clinically significant. Similarly, when studies claim that a chemical causes a “significantly increased risk of cancer,” they often mean that it is just statistically significant, possibly posing only a tiny absolute increase in risk.
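  • A sketch of that trap with invented cure rates: an improvement of roughly five cures per thousand patients sails past the .05 threshold once the trial is large enough, even though the practical gain is tiny.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 200_000                          # a very large trial (assumed size)
old = rng.binomial(1, 0.100, n)      # assumed 10.0% cure rate for the old drug
new = rng.binomial(1, 0.105, n)      # assumed 10.5% for the new one: ~5 extra cures per 1,000

table = [[int(new.sum()), int(n - new.sum())],
         [int(old.sum()), int(n - old.sum())]]
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"P = {p:.2e}  (easily 'significant' at this sample size)")
print(f"Absolute improvement: {new.mean() - old.mean():.4f} (about 5 patients per 1,000)")
```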
  • Statisticians perpetually caution against mistaking statistical significance for practical importance, but scientific papers commit that error often. Ziliak studied journals from various fields — psychology, medicine and economics among others — and reported frequent disregard for the distinction.
  • “I found that eight or nine of every 10 articles published in the leading journals make the fatal substitution” of equating statistical significance to importance, he said in an interview. Ziliak’s data are documented in the 2008 book The Cult of Statistical Significance, coauthored with Deirdre McCloskey of the University of Illinois at Chicago.
  • Multiplicity of mistakes: Even when “significance” is properly defined and P values are carefully calculated, statistical inference is plagued by many other problems. Chief among them is the “multiplicity” issue — the testing of many hypotheses simultaneously. When several drugs are tested at once, or a single drug is tested on several groups, chances of getting a statistically significant but false result rise rapidly.
  • Recognizing these problems, some researchers now calculate a “false discovery rate” to warn of flukes disguised as real effects. And genetics researchers have begun using “genome-wide association studies” that attempt to ameliorate the multiplicity issue (SN: 6/21/08, p. 20).
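  • The multiplicity problem, and the false-discovery-rate remedy, are easy to reproduce in a toy setup where 100 hypotheses are tested at once and none of them is real (all numbers below are invented).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
m, n = 100, 30                      # 100 hypotheses, 30 subjects per group, all genuinely null
p_values = np.array([
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue
    for _ in range(m)
])
print("False positives at P < .05:", int((p_values < 0.05).sum()), "of", m)

# Benjamini-Hochberg false-discovery-rate control: find the largest k with
# p_(k) <= (k/m) * alpha, then reject only the k smallest P values.
alpha = 0.05
sorted_p = np.sort(p_values)
passing = np.nonzero(sorted_p <= alpha * np.arange(1, m + 1) / m)[0]
print("Survive the FDR correction:", int(passing[-1] + 1) if passing.size else 0)
```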
  • Many researchers now also commonly report results with confidence intervals, similar to the margins of error reported in opinion polls. Such intervals, usually given as a range that should include the actual value with 95 percent confidence, do convey a better sense of how precise a finding is. But the 95 percent confidence calculation is based on the same math as the .05 P value and so still shares some of its problems.
  • Statistical problems also afflict the “gold standard” for medical research, the randomized, controlled clinical trials that test drugs for their ability to cure or their power to harm. Such trials assign patients at random to receive either the substance being tested or a placebo, typically a sugar pill; random selection supposedly guarantees that patients’ personal characteristics won’t bias the choice of who gets the actual treatment. But in practice, selection biases may still occur, Vance Berger and Sherri Weinstein noted in 2004 in Controlled Clinical Trials. “Some of the benefits ascribed to randomization, for example that it eliminates all selection bias, can better be described as fantasy than reality,” they wrote.
  • Randomization also should ensure that unknown differences among individuals are mixed in roughly the same proportions in the groups being tested. But statistics do not guarantee an equal distribution any more than they prohibit 10 heads in a row when flipping a penny. With thousands of clinical trials in progress, some will not be well randomized. And DNA differs at more than a million spots in the human genetic catalog, so even in a single trial differences may not be evenly mixed. In a sufficiently large trial, unrandomized factors may balance out, if some have positive effects and some are negative. (See Box 3) Still, trial results are reported as averages that may obscure individual differences, masking beneficial or harmful effects and possibly leading to approval of drugs that are deadly for some and denial of effective treatment to others.
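  • That coin-flipping point is simple to check numerically. In the invented scenario below, a 40-patient trial is randomized over and over, and the two arms frequently end up with noticeably different shares of an unmeasured risk factor assumed to affect 30 percent of patients.

```python
import numpy as np

rng = np.random.default_rng(4)
n_patients, n_trials = 40, 10_000     # assumed small trial, repeated many times
gaps = []
for _ in range(n_trials):
    risk = rng.random(n_patients) < 0.30                  # hidden risk factor, assumed 30% prevalence
    arm = rng.permutation(n_patients) < n_patients // 2   # random 50/50 assignment
    gaps.append(abs(risk[arm].mean() - risk[~arm].mean()))

gaps = np.array(gaps)
print(f"Randomized trials where the arms differ by more than 15 percentage points "
      f"on the hidden factor: {(gaps > 0.15).mean():.0%}")
```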
  • Another concern is the common strategy of combining results from many trials into a single “meta-analysis,” a study of studies. In a single trial with relatively few participants, statistical tests may not detect small but real and possibly important effects. In principle, combining smaller studies to create a larger sample would allow the tests to detect such small effects. But statistical techniques for doing so are valid only if certain criteria are met. For one thing, all the studies conducted on the drug must be included — published and unpublished. And all the studies should have been performed in a similar way, using the same protocols, definitions, types of patients and doses. When combining studies with differences, it is necessary first to show that those differences would not affect the analysis, Goodman notes, but that seldom happens. “That’s not a formal part of most meta-analyses,” he says.
  • Meta-analyses have produced many controversial conclusions. Common claims that antidepressants work no better than placebos, for example, are based on meta-analyses that do not conform to the criteria that would confer validity. Similar problems afflicted a 2007 meta-analysis, published in the New England Journal of Medicine, that attributed increased heart attack risk to the diabetes drug Avandia. Raw data from the combined trials showed that only 55 people in 10,000 had heart attacks when using Avandia, compared with 59 people per 10,000 in comparison groups. But after a series of statistical manipulations, Avandia appeared to confer an increased risk.
  • combining small studies in a meta-analysis is not a good substitute for a single trial sufficiently large to test a given question. “Meta-analyses can reduce the role of chance in the interpretation but may introduce bias and confounding,” Hennekens and DeMets write in the Dec. 2 Journal of the American Medical Association. “Such results should be considered more as hypothesis formulating than as hypothesis testing.”
  • Some studies show dramatic effects that don’t require sophisticated statistics to interpret. If the P value is 0.0001 — a hundredth of a percent chance of a fluke — that is strong evidence, Goodman points out. Besides, most well-accepted science is based not on any single study, but on studies that have been confirmed by repetition. Any one result may be likely to be wrong, but confidence rises quickly if that result is independently replicated. “Replication is vital,” says statistician Juliet Shaffer, a lecturer emeritus at the University of California, Berkeley. And in medicine, she says, the need for replication is widely recognized. “But in the social sciences and behavioral sciences, replication is not common,” she noted in San Diego in February at the annual meeting of the American Association for the Advancement of Science. “This is a sad situation.”
  • Most critics of standard statistics advocate the Bayesian approach to statistical reasoning, a methodology that derives from a theorem credited to Bayes, an 18th century English clergyman. His approach uses similar math, but requires the added twist of a “prior probability” — in essence, an informed guess about the expected probability of something in advance of the study. Often this prior probability is more than a mere guess — it could be based, for instance, on previous studies.
  • it basically just reflects the need to include previous knowledge when drawing conclusions from new observations. To infer the odds that a barking dog is hungry, for instance, it is not enough to know how often the dog barks when well-fed. You also need to know how often it eats — in order to calculate the prior probability of being hungry. Bayesian math combines a prior probability with observed data to produce an estimate of the likelihood of the hunger hypothesis. “A scientific hypothesis cannot be properly assessed solely by reference to the observational data,” but only by viewing the data in light of prior belief in the hypothesis, wrote George Diamond and Sanjay Kaul of UCLA’s School of Medicine in 2004 in the Journal of the American College of Cardiology. “Bayes’ theorem is ... a logically consistent, mathematically valid, and intuitive way to draw inferences about the hypothesis.” (See Box 4)
  • In many real-life contexts, Bayesian methods do produce the best answers to important questions. In medical diagnoses, for instance, the likelihood that a test for a disease is correct depends on the prevalence of the disease in the population, a factor that Bayesian math would take into account.
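  • A minimal worked example of that diagnostic calculation, using an invented test that is 95 percent sensitive and 95 percent specific; the prior (the prevalence) does most of the work, just as the dog's feeding schedule does in the earlier analogy.

```python
def posterior(prior, sensitivity, specificity):
    """Bayes' theorem: P(disease | positive test result)."""
    p_positive = prior * sensitivity + (1 - prior) * (1 - specificity)
    return prior * sensitivity / p_positive

# Invented test characteristics: 95% sensitive and 95% specific.
for prevalence in (0.001, 0.01, 0.1):
    print(f"prevalence {prevalence:>5}: P(disease | positive) = "
          f"{posterior(prevalence, 0.95, 0.95):.2f}")
```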
  • But Bayesian methods introduce a confusion into the actual meaning of the mathematical concept of “probability” in the real world. Standard or “frequentist” statistics treat probabilities as objective realities; Bayesians treat probabilities as “degrees of belief” based in part on a personal assessment or subjective decision about what to include in the calculation. That’s a tough placebo to swallow for scientists wedded to the “objective” ideal of standard statistics. “Subjective prior beliefs are anathema to the frequentist, who relies instead on a series of ad hoc algorithms that maintain the facade of scientific objectivity,” Diamond and Kaul wrote. Conflict between frequentists and Bayesians has been ongoing for two centuries. So science’s marriage to mathematics seems to entail some irreconcilable differences. Whether the future holds a fruitful reconciliation or an ugly separation may depend on forging a shared understanding of probability. “What does probability mean in real life?” the statistician David Salsburg asked in his 2001 book The Lady Tasting Tea. “This problem is still unsolved, and ... if it remains unsolved, the whole of the statistical approach to science may come crashing down from the weight of its own inconsistencies.”
  •  
    Odds Are, It's Wrong: Science fails to face the shortcomings of statistics
Weiye Loh

Is it a boy or a girl? You decide - Prospect Magazine « Prospect Magazine - 0 views

  • The only way to guarantee either a daughter or son is to undergo pre-implantation genetic diagnosis: a genetic analysis of an embryo before it is placed in the womb. This is illegal in Britain except for couples at risk of having a child with a life-threatening gender-linked disorder.
  • It’s also illegal for clinics to offer sex selection methods such as MicroSort, that sift the slightly larger X chromosome-bearing (female) sperm from their weedier Y chromosome-bearing (male) counterparts, and then use the preferred sperm in an IVF cycle. With a success rate hovering around 80-90 per cent, it’s better than Mother Nature’s odds of conception, but not immaculate.
  • Years ago I agreed with this ban on socially motivated sex selection. But I can’t defend that stance today. My opposition was based on two worries: the gender balance being skewed—look at China—and the perils of letting society think it’s acceptable to prize one sex more than the other. Unlike many politicians, however, I think it is only right and proper to perform an ideological U-turn when presented with convincing opposing evidence.
  • ...4 more annotations...
  • A 2003 survey published in the journal Human Reproduction showed that few British adults would be concerned enough about their baby’s gender to use the technology, and most adults wanted the same number of sons as daughters
  • Bioethics specialist Edgar Dahl of the University of Giessen found that 68 per cent of Britons craved an equal number of boys and girls; 6 per cent wanted more boys; 4 per cent more girls; 3 per cent only boys; and 2 per cent only girls. Fascinatingly, even if a baby’s sex could be decided by simply taking a blue pill or a pink pill, 90 per cent of British respondents said they wouldn’t take it.
  • What about the danger of stigmatising the unwanted sex if gender selection was allowed? According to experts on so-called “gender disappointment,” the unwanted sex would actually be male.
  • I may think it is old-fashioned to want a son so that he can inherit the family business, or a daughter to have someone to go shopping with. But how different is that from the other preferences and expectations we have for our children, such as hoping they will be gifted at mathematics, music or sport? We all nurture secret expectations for our children: I hope that mine will be clever, beautiful, witty and wise. Perhaps it is not the end of the world if we allow some parents to add “female” or “male” to the list.
  •  
    Is it a boy or a girl? You decide. Anjana Ahuja, 28th April 2010, Issue 170. Choosing the sex of an unborn child is illegal, but would it harm society if it wasn't?
Weiye Loh

The Decline Effect and the Scientific Method : The New Yorker - 0 views

  • On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties.
  • the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.
  • Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard against the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • ...30 more annotations...
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
  • In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”
  • At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”
  • the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!”
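  • A quick simulation of that selection effect, under the invented assumption of a modest real effect of 0.2 standard deviations studied by a thousand small labs: the results that looked most striking the first time shrink back toward the truth on replication.

```python
import numpy as np

rng = np.random.default_rng(5)
true_effect, n, n_labs = 0.2, 20, 1000   # assumed modest real effect, small samples, many labs
sd = 1 / np.sqrt(n)                      # sampling error of each lab's estimate

first_run = true_effect + rng.normal(0, sd, n_labs)
replication = true_effect + rng.normal(0, sd, n_labs)

exciting = first_run > np.quantile(first_run, 0.95)   # the top 5% of initial findings
print(f"Exciting first-run estimates, mean: {first_run[exciting].mean():.2f}")
print(f"Their replications, mean:           {replication[exciting].mean():.2f}")
print(f"True effect:                        {true_effect:.2f}")
```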
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.
  • the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
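  • A sketch of how that filter distorts a literature, assuming a small real effect of 0.1 standard deviations, small trials, and journals that accept only positive, significant results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
true_effect, n = 0.1, 25            # assumed small real effect, small studies
published = []
for _ in range(2000):
    control = rng.normal(0, 1, n)
    treated = rng.normal(true_effect, 1, n)
    t, p = stats.ttest_ind(treated, control)
    if p < 0.05 and t > 0:          # journals assumed to accept only positive, significant results
        published.append(treated.mean() - control.mean())

print(f"True effect: {true_effect}")
print(f"Mean effect in the 'published' record: {np.mean(published):.2f}")
print(f"Share of studies that get published: {len(published) / 2000:.0%}")
```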
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts
  • an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results.
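  • That skew can be reproduced in a toy model with no real effect at all: once only positive, significant results are reported, the surviving small studies show much larger "effects" than the large ones, which is exactly the asymmetry a funnel graph exposes. All parameters below are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_effect = 0.0                               # assume no real effect at all
reported = []                                   # (sample size, estimated effect) pairs
for n in rng.integers(10, 200, 3000):
    estimate = rng.normal(true_effect, 1 / np.sqrt(n))
    p = 2 * stats.norm.sf(abs(estimate) * np.sqrt(n))
    if p < 0.05 and estimate > 0:               # selective reporting filter
        reported.append((n, estimate))

sizes, effects = np.array(reported).T
small = sizes < 50
print(f"Mean reported effect, small studies (n < 50): {effects[small].mean():.2f}")
print(f"Mean reported effect, larger studies:         {effects[~small].mean():.2f}")
```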
  • Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
  • scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.)
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
  • Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging
  • John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.
  • The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
  • This suggests that the decline effect is actually a decline of illusion.
  • While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
Weiye Loh

Rationally Speaking: On ethics, part III: Deontology - 0 views

  • Plato showed convincingly in his Euthyphro dialogue that even if gods existed they would not help at all settling the question of morality.
  • Broadly speaking, deontological approaches fall into the same category as consequentialism — they are concerned with what we ought to do, as opposed to what sort of persons we ought to be (the latter is, most famously, the concern of virtue ethics). That said, deontology is the chief rival of consequentialism, and the two have distinct advantages and disadvantages that seem so irreducible
  • Here is one way to understand the difference between consequentialism and deontology: for the former the consequences of an action are moral if they increase the Good (which, as we have seen, can be specified in different ways, including increasing happiness and/or decreasing pain). For the latter, the fundamental criterion is conformity to moral duties. You could say that for the deontologist the Right (sometimes) trumps the Good. Of course, as a result consequentialists have to go through the trouble of defining and justifying the Good, while deontologists have to tackle the task of defining and justifying the Right.
  • ...10 more annotations...
  • two major “modes” of deontology: agent-centered and victim-centered. Agent-centered deontology is concerned with permissions and obligations to act toward other agents, the typical example being parents’ duty to protect and nurture their children. Notice the immediate departure from consequentialism, here, since the latter is an agent-neutral type of ethics (we have seen that it has trouble justifying the idea of special treatment of relatives or friends). Where do such agent-relative obligations come from? From the fact that we make explicit or implicit promises to some agents but not others. By bringing my child into the world, for instance, I make a special promise to that particular individual, a promise that I do not make to anyone else’s children. While this certainly doesn’t mean that I don’t have duties toward other children (like inflicting no intentional harm), it does mean that I have additional duties toward my own children as a result of the simple fact that they are mine.
  • Agent-centered deontology gets into trouble because of its close philosophical association to some doctrines that originated within Catholic theology, like the idea of double effect. (I should immediately clarify that the trouble is not due to the fact that these doctrines are rooted in a religious framework, it’s their intrinsic moral logic that is at issue here.) For instance, for agent-centered deontologists we are morally forbidden from killing innocent others (reasonably enough), but this prohibition extends even to cases when so doing would actually save even more innocents.
  • Those familiar with trolleology will recognize one of the classic forms of the trolley dilemma here: is it right to throw an innocent person in front of the out of control trolley in order to save five others? For consequentialists the answer is a no-brainer: of course yes, you are saving a net of four lives! But for the deontologist you are now using another person (the innocent you are throwing to stop the trolley) as a means to an end, thus violating one of the forms of Kant’s imperative: “Act in such a way that you treat humanity, whether in your own person or in the person of any other, always at the same time as an end and never merely as a means to an end.”
  • The other form, in case you are wondering, is: “Act only according to that maxim whereby you can at the same time will that it should become a universal law without contradiction.”
  • Victim-centered deontologies are rights- rather than duty-based, which of course does raise the question of why we think of them as deontological to begin with.
  • The fundamental idea about victim-centered deontology is the right that people have not to be used by others without their consent. This is where we find Robert Nozick-style libertarianism, which I have already criticized on this blog. One of the major implications of this version of deontology is that there is no strong moral duty to help others.
  • contractarian deontological theories. These deal with social contracts of the type, for instance, discussed by John Rawls in his theory of justice. However, I will devote a separate post to contractarianism, in part because it is so important in ethics, and in part because one can argue that contractarianism is really a meta-ethical theory, and therefore does not strictly fall under deontology per se.
  • deontological theories have the advantage over consequentialism in that they account for special concerns for one’s relatives and friends, as we have seen above. Consequentialism, by comparison, comes across as alienating and unreasonably demanding. Another advantage of deontology over consequentialism is that it accounts for the intuition that even if an act is not morally demanded it may still be praiseworthy. For a consequentialist, on the contrary, if something is not morally demanded it is then morally forbidden. (Another way to put this is that consequentialism is a more minimalist approach to ethics than deontology.) Moreover, deontology also deals much better than consequentialism with the idea of rights.
  • deontological theories run into the problem that they seem to give us permission, and sometimes even require, to make things actually morally worse in the world. Indeed, a strict deontologist could actually cause human catastrophes by adhering to Kant’s imperative and still think he acted morally (Kant at one point remarked that it is “better the whole people should perish” than that injustice be done — one wonders injustice to whom, since nobody would be left standing). Deontologists also have trouble dealing with the seemingly contradictory ideas that our duties are categorical (i.e., they do not admit of exceptions), and yet that some duties are more important than others. (Again, Kant famously stated that “a conflict of duties is inconceivable” while forgetting to provide any argument in defense of such a bold statement.)
  • One famous attempt at this reconciliation was proposed by Thomas Nagel (he of “what is it like to be a bat?” fame). Nagel suggested that perhaps we should be consequentialists when it comes to agent-neutral reasoning, and deontologists when we engage in agent-relative reasoning. He neglected to specify, however, any non-mysterious way to decide what to do in those situations in which the same moral dilemma can be seen from both perspectives.
Weiye Loh

Genetic Sequencing Will Have to Wait: Links Between Genes and Behavior Still Largely Un... - 0 views

  • A recent article in The New York Times reported that over 100 studies show a relationship between genes and criminality but that the environment plays a key role in the effects of this relationship: “Kevin Beaver, an associate professor at Florida State University’s College of Criminology and Criminal Justice, said genetics may account for, say, half of a person’s aggressive behavior, but that 50 percent comprises hundreds or thousands of genes that express themselves differently depending on the environment. He has tried to measure which circumstances — having delinquent friends, living in a disadvantaged neighborhood — influence whether a predisposition to violence surfaces. After studying twins and siblings, he came up with an astonishing result: In boys not exposed to the risk factors, genetics played no role in any of their violent behavior. The positive environment had prevented the genetic switches — to use Mr. Pinker’s word — that affect aggression from being turned on. In boys with eight or more risk factors, however, genes explained 80 percent of their violence. Their switches had been flipped.”
  • “This idea that if something is genetic it’s deterministic is a misconception that we have to get over because saying that genes are involved in depression does not necessarily mean that someone who has certain genetic variants is doomed to become depressed, it just means that under certain circumstances, he or she may have to do certain things to help alleviate it, but it’s not unchangeable. You can change your brain, you can change your brain in many different ways and genetics is just one of many of these ways.”
  • In fact, environment plays the same crucial role for criminality as it does for obesity and depression. In an interview I did for a story in The Michigan Daily on depression research, Dr. Margit Burmeister, a professor of human genetics and a researcher in the Molecular and Biological Neuroscience Institute at the University of Michigan, explained the dangers of the public oversimplifying the link between genetics and depression: