
New Media Ethics 2009 course / Group items tagged Psychology


Weiye Loh

Political - or politicized? - psychology » Scienceline - 0 views

  • The idea that your personal characteristics could be linked to your political ideology has intrigued political psychologists for decades. Numerous studies suggest that liberals and conservatives differ not only in their views toward government and society, but also in their behavior, their personality, and even how they travel, decorate, clean and spend their leisure time. In today’s heated political climate, understanding people on the “other side” — whether that side is left or right — takes on new urgency. But as researchers study the personal side of politics, could they be influenced by political biases of their own?
  • Consider the following 2006 study by the late California psychologists Jeanne and Jack Block, which compared the personalities of nursery school children to their political leanings as 23-year olds. Preschoolers who went on to identify as liberal were described by the authors as self-reliant, energetic, somewhat dominating and resilient. The children who later identified as conservative were described as easily offended, indecisive, fearful, rigid, inhibited and vulnerable. The negative descriptions of conservatives in this study strike Jacob Vigil, a psychologist at the University of New Mexico, as morally loaded. Studies like this one, he said, use language that suggests the researchers are “motivated to present liberals with more ideal descriptions as compared to conservatives.”
  • Most of the researchers in this field are, in fact, liberal. In 2007 UCLA’s Higher Education Research Institute conducted a survey of faculty at four-year colleges and universities in the United States. About 68 percent of the faculty in history, political science and social science departments characterized themselves as liberal, 22 percent characterized themselves as moderate, and only 10 percent as conservative. Some social psychologists, like Jonathan Haidt of the University of Virginia, have charged that this liberal majority distorts the research in political psychology.
  • It’s a charge that John Jost, a social psychologist at New York University, flatly denies. Findings in political psychology bear upon deeply held personal beliefs and attitudes, he said, so they are bound to spark controversy. Research showing that conservatives score higher on measures of “intolerance of ambiguity” or the “need for cognitive closure” might bother some people, said Jost, but that does not make it biased.
  • “The job of the behavioral scientist is not to try to find something to say that couldn’t possibly be offensive,” said Jost. “Our job is to say what we think is true, and why.”
  • Jost and his colleagues in 2003 compiled a meta-analysis of 88 studies from 12 different countries conducted over a 40-year period. They found strong evidence that conservatives tend to have higher needs to reduce uncertainty and threat. Conservatives also share psychological factors like fear, aggression, dogmatism, and the need for order, structure and closure. Political conservatism, they explained, could serve as a defense against anxieties and threats that arise out of everyday uncertainty, by justifying the status quo and preserving conditions that are comfortable and familiar.
  • The study triggered quite a public reaction, particularly within the conservative blogosphere. But the criticisms, according to Jost, were mistakenly focused on the researchers themselves; the findings were not disputed by the scientific community and have since been replicated. For example, a 2009 study followed college students over the span of their undergraduate experience and found that higher perceptions of threat did indeed predict political conservatism. Another 2009 study found that when confronted with a threat, liberals actually become more psychologically and politically conservative. Some studies even suggest that physiological traits like sensitivity to sudden noises or threatening images are associated with conservative political attitudes.
  • “The debate should always be about the data and its proper interpretation,” said Jost, “and never about the characteristics or motives of the researchers.” Philip Tetlock, a psychologist at the University of California, Berkeley, agrees. However, Tetlock thinks that identifying the proper interpretation can be tricky, since personality measures can be described in many ways. “One observer’s ‘dogmatism’ can be another’s ‘principled,’ and one observer’s ‘open-mindedness’ can be another’s ‘flaccid and vacillating,’” Tetlock explained.
  • Richard Redding, a professor of law and psychology at Chapman University in Orange, California, points to a more general, indirect bias in political psychology. “It’s not the case that researchers are intentionally skewing the data,” which rarely happens, Redding said. Rather, the problem may lie in what sorts of questions are or are not asked.
  • For example, a conservative might be more inclined to undertake research on affirmative action in a way that would identify any negative outcomes, whereas a liberal probably wouldn’t, said Redding. Likewise, there may be aspects of personality that liberals simply haven’t considered. Redding is currently conducting a large-scale study on self-righteousness, which he suspects may be associated more highly with liberals than conservatives.
  • “The way you frame a problem is to some extent dictated by what you think the problem is,” said David Sears, a political psychologist at the University of California, Los Angeles. People’s strong feelings about issues like prejudice, sexism, authoritarianism, aggression, and nationalism — the bread and butter of political psychology — may influence how they design a study or present a problem.
  • The indirect bias that Sears and Redding identify is a far cry from the liberal groupthink others warn against. But given that psychology departments are predominantly left leaning, it’s important to seek out alternative viewpoints and explanations, said Jesse Graham, a social psychologist at the University of Southern California. A self-avowed liberal, Graham thinks it would be absurd to say he couldn’t do fair science because of his political preferences. “But,” he said, “it is something that I try to keep in mind.”
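
    The Jost meta-analysis described in the annotations above pools effect sizes from 88 separate studies. As a rough illustration of how that kind of pooling works, here is a minimal fixed-effect meta-analysis sketch in Python; the effect sizes and standard errors are invented for illustration and are not Jost et al.'s data.

    ```python
    import math

    # Hypothetical (effect_size, standard_error) pairs standing in for individual
    # studies; the numbers are illustrative only.
    studies = [(0.30, 0.12), (0.18, 0.09), (0.42, 0.20), (0.25, 0.07), (0.10, 0.15)]

    # Fixed-effect pooling: weight each study by the inverse of its variance,
    # so more precise studies count for more.
    weights = [1.0 / se ** 2 for _, se in studies]
    pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

    print(f"pooled effect: {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")
    ```

    Published meta-analyses such as Jost et al.'s typically add random-effects models and moderator tests, but the core idea, weighting each study by the precision of its estimate, is the same.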
Weiye Loh

Morality, with limits | Russell Blackford | Comment is free | guardian.co.uk - 0 views

  • What can Darwin teach us about morality? At least to some extent, we are a species with an evolved psychology. Like other animals, we have inherited behavioural tendencies from our ancestors, since these were adaptive for them in the sense that they tended to lead to reproductive success in past environments.
  • But what follows from this?
  • we are not evolution's slaves. All other things being equal, we should act in accordance with the desires that we actually have
  • Generally speaking, it is rational for us to act in ways that accord with our reflectively-endorsed desires or values, rather than in ways that maximise our reproductive chances or in whatever ways we tend to respond without thinking.
  • Admittedly, our evolved nature may affect this, in the sense that any workable system of moral norms must be practical for the needs of beings like us, who are, it seems, naturally inclined to be neither angelically selfless nor utterly uncaring about others.
  • our evolved psychology may impose limits on what real-world moral systems can realistically demand of human beings, perhaps defeating some of the more extreme ambitions of both conservatives and liberals. It may not be realistic to expect each other to be either as self-denying as moral conservatives seem to want or as altruistic as some liberals seem to want.
  • realistic moral systems will allow considerable scope for individuals to act in accordance with whatever they actually value.
  • A rational and realistic approach to morality, based on our actual, reflectively-endorsed desires and values, and how they are best realised in current circumstances, might deflate some expectations. It might also diverge from familiar moral teachings, handed down through religious and cultural traditions. Much that is found in traditional Christian morality
  • But realising all this need not be shocking. If it leads to some deflation of extreme political expectations and to some reason-based correction of traditional morality, we should welcome it.
Inosha Wickrama

ethical porn? - 50 views

I've seen that video recently. Anyway, some points I need to make. 1. Different countries have different ages of consent. Does that mean children mature faster in some countries and not in other...

pornography

Weiye Loh

The Decline Effect and the Scientific Method : The New Yorker - 0 views

  • On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties.
  • the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.
  • Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
  • In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”
  • At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”
  • the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!”
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.
  • the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts
  • an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results.
  • Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
  • scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.)
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
  • Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging
  • John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.
  • The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
  • This suggests that the decline effect is actually a decline of illusion.
  • While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
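
    A small simulation helps make concrete the mechanisms the annotations above describe: the p < 0.05 significance filter that drives publication bias, the regression to the mean behind the decline effect, and the funnel-plot asymmetry Palmer documents. This is a sketch under assumed numbers (a true effect of 0.15 standard deviations, studies of 20 to 160 subjects per group), not an analysis from the article.

    ```python
    # Minimal simulation: many small two-group studies of a modest true effect,
    # with only "significant" results (p < 0.05) treated as published.
    import math
    import random
    import statistics

    random.seed(1)

    TRUE_EFFECT = 0.15      # assumed true effect, in standard-deviation units
    N_STUDIES = 2000        # number of simulated independent studies

    def run_study(n_per_group):
        """Return the observed mean difference and whether it reaches p < 0.05."""
        treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(n_per_group)]
        control = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
        diff = statistics.mean(treated) - statistics.mean(control)
        se = math.sqrt(statistics.variance(treated) / n_per_group
                       + statistics.variance(control) / n_per_group)
        return diff, abs(diff / se) > 1.96   # two-sided test, normal approximation

    all_effects, published = [], []
    for _ in range(N_STUDIES):
        n = random.choice([20, 40, 80, 160])   # early studies tend to be small
        effect, significant = run_study(n)
        all_effects.append(effect)
        if significant:
            published.append((n, effect))      # the journals' view of the literature

    print(f"true effect:                 {TRUE_EFFECT:.2f}")
    print(f"mean effect, all studies:    {statistics.mean(all_effects):.2f}")
    print(f"mean effect, published only: {statistics.mean(e for _, e in published):.2f}")

    # Funnel-style breakdown: the smaller the published study, the more inflated
    # its effect must be to clear the significance bar.
    for size in (20, 40, 80, 160):
        effects = [e for n, e in published if n == size]
        if effects:
            print(f"  published mean at n={size:>3}: {statistics.mean(effects):.2f}")
    ```

    In this setup the unfiltered average lands near the true effect, while the "published" average is noticeably inflated, and the smallest published studies are the most inflated of all, which is the asymmetry a funnel plot makes visible and the reason later, larger replications look like a decline.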
Weiye Loh

Is Pure Altruism Possible? - NYTimes.com - 0 views

  • It’s undeniable that people sometimes act in a way that benefits others, but it may seem that they always get something in return — at the very least, the satisfaction of having their desire to help fulfilled.
  • Contemporary discussions of altruism quickly turn to evolutionary explanations. Reciprocal altruism and kin selection are the two main theories. According to reciprocal altruism, evolution favors organisms that sacrifice their good for others in order to gain a favor in return. Kin selection — the famous “selfish gene” theory popularized by Richard Dawkins — says that an individual who behaves altruistically towards others who share its genes will tend to reproduce those genes. Organisms may be altruistic; genes are selfish. The feeling that loving your children more than yourself is hard-wired lends plausibility to the theory of kin selection.
  • The defect of reciprocal altruism is clear. If a person acts to benefit another in the expectation that the favor will be returned, the natural response is: “That’s not altruism!”  Pure altruism, we think, requires a person to sacrifice for another without consideration of personal gain. Doing good for another person because something’s in it for the do-er is the very opposite of what we have in mind. Kin selection does better by allowing that organisms may genuinely sacrifice their interests for another, but it fails to explain why they sometimes do so for those with whom they share no genes
  • When we ask whether human beings are altruistic, we want to know about their motives or intentions. Biological altruism explains how unselfish behavior might have evolved but, as Frans de Waal suggested in his column in The Stone on Sunday, it implies nothing about the motives or intentions of the agent: after all, birds and bats and bees can act altruistically. This fact helps to explain why, despite these evolutionary theories, the view that people never intentionally act to benefit others except to obtain some good for themselves still possesses a powerful lure over our thinking.
  • The lure of this view — egoism — has two sources, one psychological, the other logical. Consider first the psychological. One reason people deny that altruism exists is that, looking inward, they doubt the purity of their own motives. We know that even when we appear to act unselfishly, other reasons for our behavior often rear their heads: the prospect of a future favor, the boost to reputation, or simply the good feeling that comes from appearing to act unselfishly. As Kant and Freud observed, people’s true motives may be hidden, even (or perhaps especially) from themselves. Even if we think we’re acting solely to further another person’s good, that might not be the real reason. (There might be no single “real reason” — actions can have multiple motives.)
  • So the psychological lure of egoism as a theory of human action is partly explained by a certain humility or skepticism people have about their own or others’ motives
  • There’s also a less flattering reason: denying the possibility of pure altruism provides a convenient excuse for selfish behavior.
  • The logical lure of egoism is different: the view seems impossible to disprove. No matter how altruistic a person appears to be, it’s possible to conceive of her motive in egoistic terms.
  • The impossibility of disproving egoism may sound like a virtue of the theory, but, as philosophers of science know, it’s really a fatal drawback. A theory that purports to tell us something about the world, as egoism does, should be falsifiable. Not false, of course, but capable of being tested and thus proved false. If every state of affairs is compatible with egoism, then egoism doesn’t tell us anything distinctive about how things are.
  • There is ambiguity in the concepts of desire and the satisfaction of desire. If people possess altruistic motives, then they sometimes act to benefit others without the prospect of gain to themselves. In other words, they desire the good of others for its own sake, not simply as a means to their own satisfaction.
  • Still, when our desires are satisfied we normally experience satisfaction; we feel good when we do good. But that doesn’t mean we do good only in order to get that “warm glow” — that our true incentives are self-interested (as economists tend to claim). Indeed, as de Waal argues, if we didn’t desire the good of others for its own sake, then attaining it wouldn’t produce the warm glow.
  • Common sense tells us that some people are more altruistic than others. Egoism’s claim that these differences are illusory — that deep down, everybody acts only to further their own interests — contradicts our observations and deep-seated human practices of moral evaluation.
  • At the same time, we may notice that generous people don’t necessarily suffer more or flourish less than those who are more self-interested.
  • The point is rather that the kind of altruism we ought to encourage, and probably the only kind with staying power, is satisfying to those who practice it. Studies of rescuers show that they don’t believe their behavior is extraordinary; they feel they must do what they do, because it’s just part of who they are. The same holds for more common, less newsworthy acts — working in soup kitchens, taking pets to people in nursing homes, helping strangers find their way, being neighborly. People who act in these ways believe that they ought to help others, but they also want to help, because doing so affirms who they are and want to be and the kind of world they want to exist. As Prof. Neera Badhwar has argued, their identity is tied up with their values, thus tying self-interest and altruism together. The correlation between doing good and feeling good is not inevitable— inevitability lands us again with that empty, unfalsifiable egoism — but it is more than incidental.
  • Altruists should not be confused with people who automatically sacrifice their own interests for others.
Weiye Loh

The Guardian - 0 views

  • We can't expect people to be either as self-denying as conservatives or as altruistic as liberals seem to want. The question: What can Darwin teach us about morality?
  • to some extent, we are a species with an evolved psychology. Like other animals, we have inherited behavioural tendencies from our ancestors, since these were adaptive for them in the sense that they tended to lead to reproductive success in past environments.
  • It does not follow that we should now do whatever maximises our ability to reproduce and pass down our genes. For example, evolution may have honed us to desire and enjoy sex, through a process in which creatures that did so reproduced more often than their evolutionary competitors. But evolution has not equipped us with an abstract desire to pass down our genes.
  • All other things being equal, we should act in accordance with the desires that we actually have, in this case the desire for sex. We may also desire to have children, but perhaps only one or two: in that case, we should act in such a way as to have as much sex as possible while also producing children in this small number.
  • Generally speaking, it is rational for us to act in ways that accord with our reflectively-endorsed desires or values, rather than in ways that maximise our reproductive chances or in whatever ways we tend to respond without thinking. If we value the benefits of social living, this may require that we support and conform to socially-developed norms of conduct that constrain individuals from acting in ruthless pursuit of self-interest.
  • Admittedly, our evolved nature may affect this, in the sense that any workable system of moral norms must be practical for the needs of beings like us, who are, it seems, naturally inclined to be neither angelically selfless nor utterly uncaring about others. Thus, our evolved psychology may impose limits on what real-world moral systems can realistically demand of human beings, perhaps defeating some of the more extreme ambitions of both conservatives and liberals. It may not be realistic to expect each other to be either as self-denying as moral conservatives seem to want or as altruistic as some liberals seem to want.
  • realistic moral systems will allow considerable scope for individuals to act in accordance with whatever they actually value. However, they will also impose constraints, since truly ruthless competition among individuals would lead to widespread insecurity, suffering, and disorder. Allowing it would be inconsistent with many values that most of us adhere to, on reflection, such as the values of loving and trusting relationships, social survival, and the amelioration of suffering in the world. If, however, we are social animals that already have an evolved sympathetic responsiveness to each other, the yoke of a realistic moral system may be relatively light for most of us most of the time.
    Morality, with limits | Russell Blackford | guardian.co.uk, Comment is free, 18 March 2010, 09:00 GMT
Weiye Loh

Rationally Speaking: Response to Jonathan Haidt's response, on the academy's liberal bias - 0 views

  • Dear Prof. Haidt, You understandably got upset by my harsh criticism of your recent claims about the mechanisms behind the alleged anti-conservative bias that apparently so permeates the modern academy. I find it amusing that you simply assumed I had not looked at your talk and was therefore speaking without reason. Yet, I have indeed looked at it (it is currently published at Edge, a non-peer reviewed webzine), and found that it simply doesn’t add much to the substance (such as it is) of Tierney’s summary.
  • Yes, you do acknowledge that there may be multiple reasons for the imbalance between the number of conservative and liberal leaning academics, but then you go on to characterize the academy, at least in your field, as a tribe having a serious identity issue, with no data whatsoever to back up your preferred subset of causal explanations for the purported problem.
  • your talk is simply an extended op-ed piece, which starts out with a summary of your findings about the different moral outlooks of conservatives and liberals (which I have criticized elsewhere on this blog), and then proceeds to build a flimsy case based on a couple of anecdotes and some badly flawed data.
  • For instance, slide 23 shows a Google search for “liberal social psychologist,” highlighting the fact that one gets a whopping 2,740 results (which, actually, by Google standards is puny; a search under my own name yields 145,000, and I ain’t no Lady Gaga). You then compared this search to one for “conservative social psychologist” and get only three entries.
  • First of all, if Google searches are the main tool of social psychology these days, I fear for the entire field. Second, I actually re-did your searches — at the prompting of one of my readers — and came up with quite different results. As the photo here shows, if you actually bother to scroll through the initial Google search for “liberal social psychologist” you will find that there are in fact only 24 results, to be compared to 10 (not 3) if you search for “conservative social psychologist.” Oops. From this scant data I would simply conclude that political orientation isn’t a big deal in social psychology.
  • Your talk continues with some pretty vigorous hand-waving: “We rely on our peers to find flaws in our arguments, but when there is essentially nobody out there to challenge liberal assumptions and interpretations of experimental findings, the peer review process breaks down, at least for work that is related to those sacred values.” Right, except that I would like to see a systematic survey of exactly how the lack of conservative peer review has affected the quality of academic publications. Oh, wait, it hasn’t, at least according to what you yourself say in the next sentence: “The great majority of work in social psychology is excellent, and is unaffected by these problems.” I wonder how you know this, and why — if true — you then think that there is a problem. Philosophers call this an inherent contradiction; it’s a common example of a bad argument.
  • Finally, let me get to your outrage at the fact that I have allegedly accused you of academic misconduct and lying. I have done no such thing, and you really ought (in the ethical sense) to be careful when throwing those words around. I have simply raised the logical possibility that you (and Tierney) have an agenda, a possibility based on reading several of the things both you and Tierney have written of late. As a psychologist, I’m sure you are aware that biases can be unconscious, and therefore need not imply that the person in question is lying or engaging in any form of purposeful misconduct. Or were you implying in your own talk that your colleagues’ bias was conscious? Because if so, you have just accused an entire profession of misconduct.
Weiye Loh

Haidt Requests Apology from Pigliucci « YourMorals.Org Moral Psychology Blog - 0 views

  • Here is my response to Pigliucci, which I posted as a comment on his blog. (Well, I submitted it as a comment on Feb 13 at 4pm EST, but he has not approved it yet, so it doesn’t show yet over there.)
  • Massimo Pigliucci, the chair of the philosophy department at CUNY-Lehman, wrote a critique of me on his blog, Rationally Speaking, in which he accused me of professional misconduct.
  • Dear Prof. Pigliucci: Let me be certain that I have understood you. You did not watch my talk, even though a link to it was embedded in the Tierney article. Instead, you picked out one piece of my argument (that the near-total absence of conservatives in social psychology is evidence of discrimination) and you made the standard response, the one that most bloggers have made: underrepresentation of any group is not, by itself, evidence of discrimination. That’s a good point; I made it myself quite explicitly in my talk: Of course there are many reasons why conservatives would be underrepresented in social psychology, and most of them have nothing to do with discrimination or hostile climate. Research on personality consistently shows that liberals are higher on openness to experience. They’re more interested in novel ideas, and in trying to use science to improve society. So of course our field is and always will be mostly liberal. I don’t think we should ever strive for exact proportional representation.
  • I made it clear that I’m not concerned about simple underrepresentation. I did not even make the moral argument that we need ideological diversity to right an injustice. Rather, I focused on what happens when a scientific community shares sacred values. A tribal moral community arises, one that actively suppresses ideas that are sacrilegious, and that discourages non-believers from entering. I argued that my field has become a tribal moral community, and the absence of conservatives (not just their underrepresentation) has serious consequences for the quality of our science. We rely on our peers to find flaws in our arguments, but when there is essentially nobody out there to challenge liberal assumptions and interpretations of experimental findings, the peer review process breaks down, at least for work that is related to those sacred values.
  • The fact that you criticized me without making an effort to understand me is not surprising.
  • Rather, what sets you apart from all other bloggers who are members of the academy is what you did next. You accused me of professional misconduct—lying, essentially—and you speculated as to my true motive: I suspect that Haidt is either an incompetent psychologist (not likely) or is disingenuously saying the sort of things controversial enough to get him in the New York Times (more likely).
  • As far as I can tell your evidence for these accusations is that my argument was so bad that I couldn’t have believed it myself. Here is how you justified your accusations: A serious social scientist doesn’t go around crying out discrimination just on the basis of unequal numbers. If that were the case, the NBA would be sued for discriminating against short people, dance companies against people without spatial coordination, and newspapers against dyslexics
  • Accusations of professional misconduct are sensibly made only if one has a reasonable and detailed understanding of the facts of the case, and can bring forth evidence of misconduct. Pigliucci has made no effort to acquire such an understanding, nor has he presented any evidence to support his accusation. He simply took one claim from the Tierney article and then ran wild with speculation about Haidt’s motives. It was pretty silly of him, and down right irresponsible of Pigliucci to publish that garbage without even knowing what Haidt said.
  • I challenge you to watch the video of my talk (click here) and then either 1) Retract your blog post and apologize publicly for calling me a liar or 2) State on your blog that you stand by your original post. If you do stand by your post, even after hearing my argument, then the world can decide for itself which of us is right, and which of us best models the ideals of science, philosophy, and the Enlightenment which you claim for yourself in the header of your blog, “Rationally Speaking.” Jonathan Haidt
Weiye Loh

Skepticblog » Further Thoughts on the Ethics of Skepticism - 0 views

  • My recent post “The War Over ‘Nice’” (describing the blogosphere’s reaction to Phil Plait’s “Don’t Be a Dick” speech) has topped out at more than 200 comments.
  • Many readers appear to object (some strenuously) to the very ideas of discussing best practices, seeking evidence of efficacy for skeptical outreach, matching strategies to goals, or encouraging some methods over others. Some seem to express anger that a discussion of best practices would be attempted at all. 
  • No Right or Wrong Way? The milder forms of these objections run along these lines: “Everyone should do their own thing.” “Skepticism needs all kinds of approaches.” “There’s no right or wrong way to do skepticism.” “Why are we wasting time on these abstract meta-conversations?”
  • More critical, in my opinion, is the implication that skeptical research and communication happens in an ethical vacuum. That just isn’t true. Indeed, it is dangerous for a field which promotes and attacks medical treatments, accuses people of crimes, opines about law enforcement practices, offers consumer advice, and undertakes educational projects to pretend that it is free from ethical implications — or obligations.
  • there is no monolithic “one true way to do skepticism.” No, the skeptical world does not break down to nice skeptics who get everything right, and mean skeptics who get everything wrong. (I’m reminded of a quote: “If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being.”) No one has all the answers. Certainly I don’t, and neither does Phil Plait. Nor has anyone actually proposed a uniform, lockstep approach to skepticism. (No one has any ability to enforce such a thing, in any event.)
  • However, none of that implies that all approaches to skepticism are equally valid, useful, or good. As in other fields, various skeptical practices do more or less good, cause greater or lesser harm, or generate various combinations of both at the same time. For that reason, skeptics should strive to find ways to talk seriously about the practices and the ethics of our field. Skepticism has blossomed into something that touches a lot of lives — and yet it is an emerging field, only starting to come into its potential. We need to be able to talk about that potential, and about the pitfalls too.
  • All of the fields from which skepticism borrows (such as medicine, education, psychology, journalism, history, and even arts like stage magic and graphic design) have their own standards of professional ethics. In some cases those ethics are well-explored professional fields in their own right (consider medical ethics, a field with its own academic journals and doctoral programs). In other cases those ethical guidelines are contested, informal, vague, or honored more in the breach. But in every case, there are serious conversations about the ethical implications of professional practice, because those practices impact people’s lives. Why would skepticism be any different?
  • Skeptrack speaker Barbara Drescher (a cognitive psychologist who teaches research methodology) described the complexity of research ethics in her own field. Imagine, she said, that a psychologist were to ask research subjects a question like, “Do your parents like the color red?” Asking this may seem trivial and harmless, but it is nonetheless an ethical trade-off with associated risks (however small) that psychological researchers are ethically obliged to confront. What harm might that question cause if a research subject suffers from erythrophobia, or has a sick parent — or saw their parents stabbed to death?
  • When skeptics undertake scientific, historical, or journalistic research, we should (I argue) consider ourselves bound by some sort of research ethics. For now, we’ll ignore the deeper, detailed question of what exactly that looks like in practical terms (when can skeptics go undercover or lie to get information? how much research does due diligence require? and so on). I’d ask only that we agree on the principle that skeptical research is not an ethical free-for-all.
  • when skeptics communicate with the public, we take on further ethical responsibilities — as do doctors, journalists, and teachers. We all accept that doctors are obliged to follow some sort of ethical code, not only of due diligence and standard of care, but also in their confidentiality, manner, and the factual information they disclose to patients. A sentence that communicates a diagnosis, prescription, or piece of medical advice (“you have cancer” or “undertake this treatment”) is not a contextless statement, but a weighty, risky, ethically serious undertaking that affects people’s lives. It matters what doctors say, and it matters how they say it.
  • Grassroots Ethics It happens that skepticism is my professional field. It’s natural that I should feel bound by the central concerns of that field. How can we gain reliable knowledge about weird things? How can we communicate that knowledge effectively? And, how can we pursue that practice ethically?
  • At the same time, most active skeptics are not professionals. To what extent should grassroots skeptics feel obligated to consider the ethics of skeptical activism? Consider my own status as a medical amateur. I almost need super-caps-lock to explain how much I am not a doctor. My medical training began and ended with a couple First Aid courses (and those way back in the day). But during those short courses, the instructors drummed into us the ethical considerations of our minimal training. When are we obligated to perform first aid? When are we ethically barred from giving aid? What if the injured party is unconscious or delirious? What if we accidentally kill or injure someone in our effort to give aid? Should we risk exposure to blood-borne illnesses? And so on. In a medical context, ethics are determined less by professional status, and more by the harm we can cause or prevent by our actions.
  • police officers are barred from perjury, and journalists from libel — and so are the lay public. We expect schoolteachers not to discuss age-inappropriate topics with our young children, or to persuade our children to adopt their religion; when we babysit for a neighbor, we consider ourselves bound by similar rules. I would argue that grassroots skeptics take on an ethical burden as soon as they speak out on medical matters, legal matters, or other matters of fact, whether from platforms as large as network television, or as small as a dinner party. The size of that burden must depend somewhat on the scale of the risks: the number of people reached, the certainty expressed, the topics tackled.
  • tu-quoque argument.
  • How much time are skeptics going to waste, arguing in a circular firing squad about each other’s free speech? Like it or not, there will always be confrontational people. You aren’t going to get a group of people as varied as skeptics are, and make them all agree to “be nice”. It’s a pipe dream, and a waste of time.
Weiye Loh

Sam Harris: Toward a Science of Morality - 0 views

  • What about depression? Is it impossible to define or study this state of mind empirically? I'm not sure how deep Carroll's skepticism runs, but much of psychology now appears to hang in the balance. Of course, Carroll might want to say that the problem of access to the data of first-person experience is what makes psychology often seem to teeter at the margin of science. He might have a point -- but, if so, it would be a methodological point, not a point about the limits of scientific truth. Remember, the science of determining exactly which books were in the Library of Alexandria is stillborn and going absolutely nowhere, methodologically speaking. But this doesn't mean we can't be absolutely right or absolutely wrong about the relevant facts.
    • Weiye Loh
       
      What kind of science are we discussing if there's no methodology? Popperian? Certainly not Kuhnian. 
  • While I'm happy to admit that people are morally confused, I see no evidence whatsoever that they all ultimately want the same thing. The position doesn't even seem coherent. Is it a priori necessary that people ultimately have the same idea about human well-being, or is it a contingent truth about actual human beings?
  • I might find that brain state X242358B is my absolute favorite, and Carroll might prefer X979793L, but the fear that we will radically diverge in our judgments about what constitutes well-being seems pretty far-fetched. The possibility that my hell will be someone else's heaven, and vice versa, seems hardly worth considering. And yet, whatever divergence did occur must also depend on facts about the brains in question.
  •  
    Toward a Science of Morality, by Sam Harris. Posted: May 7, 2010, 12:47 AM
Weiye Loh

On Forgiveness - NYTimes.com - 0 views

  • What is forgiveness? When is it appropriate? Why is it considered to be commendable?  Some claim that forgiveness is merely about ridding oneself of vengeful anger; do that, and you have forgiven.  But if you were able to banish anger from your soul simply by taking a pill, would the result really be forgiveness?
  • The timing of forgiveness is also disputed. Some say that it should wait for the offender to take responsibility and suffer due punishment, others hold that the victim must first overcome anger altogether, and still others that forgiveness should be unilaterally bestowed at the earliest possible moment.  But what if you have every good reason to be angry and even to take your sweet revenge as well?  Is forgiveness then really to be commended? Some object that it lets the offender off the hook, confesses to one’s own weakness and vulnerability, and papers over the legitimate demands of vengeful anger.  And yet, legions praise forgiveness and think of it as an indispensable virtue
  • Many people assume that the notion of forgiveness is Christian in origin, at least in the West, and that the contemporary understanding of interpersonal forgiveness has always been the core Christian teaching on the subject.  These contestable assumptions are explored by David Konstan in “Before Forgiveness: The Origins of a Moral Idea.”  Religious origins of the notion would not invalidate a secular philosophical approach to the topic, any more than a secular origin of some idea precludes a religious appropriation of it.  While religious and secular perspectives on forgiveness are not necessarily consistent with each other, however, they agree in their attempt to address the painful fact of the pervasiveness of moral wrong in human life. They also agree on this: few of us are altogether innocent of the need for forgiveness.
  • ...2 more annotations...
  • It’s not simply a matter of lifting the burden of toxic resentment or of immobilizing guilt, however beneficial that may be ethically and psychologically.  It is not a merely therapeutic matter, as though this were just about you.  Rather, when the requisite conditions are met, forgiveness is what a good person would seek because it expresses fundamental moral ideals.  These include ideals of spiritual growth and renewal; truth-telling; mutual respectful address; responsibility and respect; reconciliation and peace.
  • Are any wrongdoers unforgivable?  People who have committed heinous acts such as torture or child molestation are often cited as examples.  The question is not primarily about the psychological ability of the victim to forswear anger, but whether a wrongdoer can rightly be judged not-to-be-forgiven no matter what offender and victim say or do.  I do not see that a persuasive argument for that thesis can be made; there is no such thing as the unconditionally unforgivable.  For else we would be faced with the bizarre situation of declaring illegitimate the forgiveness reached by victim and perpetrator after each has taken every step one could possibly wish for.  The implication may distress you: Osama bin Laden, for example, is not unconditionally unforgivable for his role in the attacks of 9/11.  That being said, given the extent of the injury done by grave wrongs, their author may be rightly unforgiven for an appropriate period even if he or she has taken all reasonable steps.  There is no mathematically precise formula for determining when it is appropriate to forgive.
Weiye Loh

Roger Pielke Jr.'s Blog: Ideological Diversity in Academia - 0 views

  • Jonathan Haidt's talk (above) at the annual meeting of the Society for Personality and Social Psychology was written up last week in a column by John Tierney in the NY Times.  This was soon followed by a dismissal of the work by Paul Krugman.  The entire sequence is interesting, but for me the best part, and the one that gets to the nub of the issue, is Haidt's response to Krugman: My research, like so much research in social psychology, demonstrates that we humans are experts at using reasoning to find evidence for whatever conclusions we want to reach. We are terrible at searching for contradictory evidence. Science works because our peers are so darn good at finding that contradictory evidence for us. Social science — at least my corner of it — is broken because there is nobody to look for contradictory evidence regarding sacralized issues, particularly those related to race, gender, and class. I urged my colleagues to increase our ideological diversity not for any moral reason, but because it will make us better scientists. You do not have that problem in economics where the majority is liberal but there is a substantial and vocal minority of libertarians and conservatives. Your field is healthy, mine is not. Do you think I was wrong to call for my professional organization to seek out a modicum of ideological diversity?
  • On a related note, the IMF review of why the institution failed to warn of the global financial crisis identified a lack of intellectual diversity as being among the factors responsible (PDF): Several cognitive biases seem to have played an important role. Groupthink refers to the tendency among homogeneous, cohesive groups to consider issues only within a certain paradigm and not challenge its basic premises (Janis, 1982). The prevailing view among IMF staff—a cohesive group of macroeconomists—was that market discipline and self-regulation would be sufficient to stave off serious problems in financial institutions. They also believed that crises were unlikely to happen in advanced economies, where “sophisticated” financial markets could thrive safely with minimal regulation of a large and growing portion of the financial system. Everyone in academia has seen similar dynamics at work.
Weiye Loh

TPM: The Philosophers' Magazine | Is morality relative? Depends on your personality - 0 views

  • no real evidence is ever offered for the original assumption that ordinary moral thought and talk has this objective character. Instead, philosophers tend simply to assert that people’s ordinary practice is objectivist and then begin arguing from there.
  • If we really want to go after these issues in a rigorous way, it seems that we should adopt a different approach. The first step is to engage in systematic empirical research to figure out how the ordinary practice actually works. Then, once we have the relevant data in hand, we can begin looking more deeply into the philosophical implications – secure in the knowledge that we are not just engaging in a philosophical fiction but rather looking into the philosophical implications of people’s actual practices.
  • in the past few years, experimental philosophers have been gathering a wealth of new data on these issues, and we now have at least the first glimmerings of a real empirical research program here
  • ...8 more annotations...
  • when researchers took up these questions experimentally, they did not end up confirming the traditional view. They did not find that people overwhelmingly favoured objectivism. Instead, the results consistently point to a more complex picture. There seems to be a striking degree of conflict even in the intuitions of ordinary folks, with some people under some circumstances offering objectivist answers, while other people under other circumstances offer more relativist views. And that is not all. The experimental results seem to be giving us an ever deeper understanding of why it is that people are drawn in these different directions, what it is that makes some people move toward objectivism and others toward more relativist views.
  • consider a study by Adam Feltz and Edward Cokely. They were interested in the relationship between belief in moral relativism and the personality trait openness to experience. Accordingly, they conducted a study in which they measured both openness to experience and belief in moral relativism. To get at people’s degree of openness to experience, they used a standard measure designed by researchers in personality psychology. To get at people’s agreement with moral relativism, they told participants about two characters – John and Fred – who held opposite opinions about whether some given act was morally bad. Participants were then asked whether one of these two characters had to be wrong (the objectivist answer) or whether it could be that neither of them was wrong (the relativist answer). What they found was a quite surprising result. It just wasn’t the case that participants overwhelmingly favoured the objectivist answer. Instead, people’s answers were correlated with their personality traits. The higher a participant was in openness to experience, the more likely that participant was to give a relativist answer.
  • Geoffrey Goodwin and John Darley pursued a similar approach, this time looking at the relationship between people’s belief in moral relativism and their tendency to approach questions by considering a whole variety of possibilities. They proceeded by giving participants mathematical puzzles that could only be solved by looking at multiple different possibilities. Thus, participants who considered all these possibilities would tend to get these problems right, whereas those who failed to consider all the possibilities would tend to get the problems wrong. Now comes the surprising result: those participants who got these problems right were significantly more inclined to offer relativist answers than were those participants who got the problems wrong.
  • Shaun Nichols and Tricia Folds-Bennett looked at how people’s moral conceptions develop as they grow older. Research in developmental psychology has shown that as children grow up, they develop different understandings of the physical world, of numbers, of other people’s minds. So what about morality? Do people have a different understanding of morality when they are twenty years old than they do when they are only four years old? What the results revealed was a systematic developmental difference. Young children show a strong preference for objectivism, but as they grow older, they become more inclined to adopt relativist views. In other words, there appears to be a developmental shift toward increasing relativism as children mature. (In an exciting new twist on this approach, James Beebe and David Sackris have shown that this pattern eventually reverses, with middle-aged people showing less inclination toward relativism than college students do.)
  • People are more inclined to be relativists when they score highly in openness to experience, when they have an especially good ability to consider multiple possibilities, when they have matured past childhood (but not when they get to be middle-aged). Looking at these various effects, my collaborators and I thought that it might be possible to offer a single unifying account that explained them all. Specifically, our thought was that people might be drawn to relativism to the extent that they open their minds to alternative perspectives. There could be all sorts of different factors that lead people to open their minds in this way (personality traits, cognitive dispositions, age), but regardless of the instigating factor, researchers seemed always to be finding the same basic effect. The more people have a capacity to truly engage with other perspectives, the more they seem to turn toward moral relativism.
  • To really put this hypothesis to the test, Hagop Sarkissian, Jennifer Wright, John Park, David Tien and I teamed up to run a series of new studies. Our aim was to actually manipulate the degree to which people considered alternative perspectives. That is, we wanted to randomly assign people to different conditions in which they would end up thinking in different ways, so that we could then examine the impact of these different conditions on their intuitions about moral relativism.
  • The results of the study showed a systematic difference between conditions. In particular, as we moved toward more distant cultures, we found a steady shift toward more relativist answers – with people in the first condition tending to agree with the statement that at least one of them had to be wrong, people in the second being pretty evenly split between the two answers, and people in the third tending to reject the statement quite decisively.
  • If we learn that people’s ordinary practice is not an objectivist one – that it actually varies depending on the degree to which people take other perspectives into account – how can we then use this information to address the deeper philosophical issues about the true nature of morality? The answer here is in one way very complex and in another very simple. It is complex in that one can answer such questions only by making use of very sophisticated and subtle philosophical methods. Yet, at the same time, it is simple in that such methods have already been developed and are being continually refined and elaborated within the literature in analytic philosophy. The trick now is just to take these methods and apply them to working out the implications of an ordinary practice that actually exists.
Weiye Loh

Religion: Faith in science : Nature News - 0 views

  • The Templeton Foundation claims to be a friend of science. So why does it make so many researchers uneasy?
  • With a current endowment estimated at US$2.1 billion, the organization continues to pursue Templeton's goal of building bridges between science and religion. Each year, it doles out some $70 million in grants, more than $40 million of which goes to research in fields such as cosmology, evolutionary biology and psychology.
  • however, many scientists find it troubling — and some see it as a threat. Jerry Coyne, an evolutionary biologist at the University of Chicago, Illinois, calls the foundation "sneakier than the creationists". Through its grants to researchers, Coyne alleges, the foundation is trying to insinuate religious values into science. "It claims to be on the side of science, but wants to make faith a virtue," he says.
  • ...25 more annotations...
  • But other researchers, both with and without Templeton grants, say that they find the foundation remarkably open and non-dogmatic. "The Templeton Foundation has never in my experience pressured, suggested or hinted at any kind of ideological slant," says Michael Shermer, editor of Skeptic, a magazine that debunks pseudoscience, who was hired by the foundation to edit an essay series entitled 'Does science make belief in God obsolete?'
  • The debate highlights some of the challenges facing the Templeton Foundation after the death of its founder in July 2008, at the age of 95.
  • With the help of a $528-million bequest from Templeton, the foundation has been radically reframing its research programme. As part of that effort, it is reducing its emphasis on religion to make its programmes more palatable to the broader scientific community. Like many of his generation, Templeton was a great believer in progress, learning, initiative and the power of human imagination — not to mention the free-enterprise system that allowed him, a middle-class boy from Winchester, Tennessee, to earn billions of dollars on Wall Street. The foundation accordingly allocates 40% of its annual grants to programmes with names such as 'character development', 'freedom and free enterprise' and 'exceptional cognitive talent and genius'.
  • Unlike most of his peers, however, Templeton thought that the principles of progress should also apply to religion. He described himself as "an enthusiastic Christian" — but was also open to learning from Hinduism, Islam and other religious traditions. Why, he wondered, couldn't religious ideas be open to the type of constructive competition that had produced so many advances in science and the free market?
  • That question sparked Templeton's mission to make religion "just as progressive as medicine or astronomy".
  • Early Templeton prizes had nothing to do with science: the first went to the Catholic missionary Mother Teresa of Calcutta in 1973.
  • By the 1980s, however, Templeton had begun to realize that fields such as neuroscience, psychology and physics could advance understanding of topics that are usually considered spiritual matters — among them forgiveness, morality and even the nature of reality. So he started to appoint scientists to the prize panel, and in 1985 the award went to a research scientist for the first time: Alister Hardy, a marine biologist who also investigated religious experience. Since then, scientists have won with increasing frequency.
  • "There's a distinct feeling in the research community that Templeton just gives the award to the most senior scientist they can find who's willing to say something nice about religion," says Harold Kroto, a chemist at Florida State University in Tallahassee, who was co-recipient of the 1996 Nobel Prize in Chemistry and describes himself as a devout atheist.
  • Yet Templeton saw scientists as allies. They had what he called "the humble approach" to knowledge, as opposed to the dogmatic approach. "Almost every scientist will agree that they know so little and they need to learn," he once said.
  • Templeton wasn't interested in funding mainstream research, says Barnaby Marsh, the foundation's executive vice-president. Templeton wanted to explore areas — such as kindness and hatred — that were not well known and did not attract major funding agencies. Marsh says Templeton wondered, "Why is it that some conflicts go on for centuries, yet some groups are able to move on?"
  • Templeton's interests gave the resulting list of grants a certain New Age quality (See Table 1). For example, in 1999 the foundation gave $4.6 million for forgiveness research at the Virginia Commonwealth University in Richmond, and in 2001 it donated $8.2 million to create an Institute for Research on Unlimited Love (that is, altruism and compassion) at Case Western Reserve University in Cleveland, Ohio. "A lot of money wasted on nonsensical ideas," says Kroto. Worse, says Coyne, these projects are profoundly corrupting to science, because the money tempts researchers into wasting time and effort on topics that aren't worth it. If someone is willing to sell out for a million dollars, he says, "Templeton is there to oblige him".
  • At the same time, says Marsh, the 'dean of value investing', as Templeton was known on Wall Street, had no intention of wasting his money on junk science or unanswerables such as whether God exists. So before pursuing a scientific topic he would ask his staff to get an assessment from appropriate scholars — a practice that soon evolved into a peer-review process drawing on experts from across the scientific community.
  • Because Templeton didn't like bureaucracy, adds Marsh, the foundation outsourced much of its peer review and grant giving. In 1996, for example, it gave $5.3 million to the American Association for the Advancement of Science (AAAS) in Washington DC, to fund efforts that work with evangelical groups to find common ground on issues such as the environment, and to get more science into seminary curricula. In 2006, Templeton gave $8.8 million towards the creation of the Foundational Questions Institute (FQXi), which funds research on the origins of the Universe and other fundamental issues in physics, under the leadership of Anthony Aguirre, an astrophysicist at the University of California, Santa Cruz, and Max Tegmark, a cosmologist at the Massachusetts Institute of Technology in Cambridge.
  • But external peer review hasn't always kept the foundation out of trouble. In the 1990s, for example, Templeton-funded organizations gave book-writing grants to Guillermo Gonzalez, an astrophysicist now at Grove City College in Pennsylvania, and William Dembski, a philosopher now at the Southwestern Baptist Theological Seminary in Fort Worth, Texas. After obtaining the grants, both later joined the Discovery Institute — a think-tank based in Seattle, Washington, that promotes intelligent design. Other Templeton grants supported a number of college courses in which intelligent design was discussed. Then, in 1999, the foundation funded a conference at Concordia University in Mequon, Wisconsin, in which intelligent-design proponents confronted critics. Those awards became a major embarrassment in late 2005, during a highly publicized court fight over the teaching of intelligent design in schools in Dover, Pennsylvania. A number of media accounts of the intelligent design movement described the Templeton Foundation as a major supporter — a charge that Charles Harper, then senior vice-president, was at pains to deny.
  • Some foundation officials were initially intrigued by intelligent design, Harper told The New York Times. But disillusionment set in — and Templeton funding stopped — when it became clear that the theory was part of a political movement from the Christian right wing, not science. Today, the foundation website explicitly warns intelligent-design researchers not to bother submitting proposals: they will not be considered.
  • Avowedly antireligious scientists such as Coyne and Kroto see the intelligent-design imbroglio as a symptom of their fundamental complaint that religion and science should not mix at all. "Religion is based on dogma and belief, whereas science is based on doubt and questioning," says Coyne, echoing an argument made by many others. "In religion, faith is a virtue. In science, faith is a vice." The purpose of the Templeton Foundation is to break down that wall, he says — to reconcile the irreconcilable and give religion scholarly legitimacy.
  • Foundation officials insist that this is backwards: questioning is their reason for being. Religious dogma is what they are fighting. That does seem to be the experience of many scientists who have taken Templeton money. During the launch of FQXi, says Aguirre, "Max and I were very suspicious at first. So we said, 'We'll try this out, and the minute something smells, we'll cut and run.' It never happened. The grants we've given have not been connected with religion in any way, and they seem perfectly happy about that."
  • John Cacioppo, a psychologist at the University of Chicago, also had concerns when he started a Templeton-funded project in 2007. He had just published a paper with survey data showing that religious affiliation had a negative correlation with health among African-Americans — the opposite of what he assumed the foundation wanted to hear. He was bracing for a protest when someone told him to look at the foundation's website. They had displayed his finding on the front page. "That made me relax a bit," says Cacioppo.
  • Yet, even scientists who give the foundation high marks for openness often find it hard to shake their unease. Sean Carroll, a physicist at the California Institute of Technology in Pasadena, is willing to participate in Templeton-funded events — but worries about the foundation's emphasis on research into 'spiritual' matters. "The act of doing science means that you accept a purely material explanation of the Universe, that no spiritual dimension is required," he says.
  • It hasn't helped that Jack Templeton is much more politically and religiously conservative than his father was. The foundation shows no obvious rightwards trend in its grant-giving and other activities since John Templeton's death — and it is barred from supporting political activities by its legal status as a not-for-profit corporation. Still, many scientists find it hard to trust an organization whose president has used his personal fortune to support right-leaning candidates and causes such as the 2008 ballot initiative that outlawed gay marriage in California.
  • Scientists' discomfort with the foundation is probably inevitable in the current political climate, says Scott Atran, an anthropologist at the University of Michigan in Ann Arbor. The past 30 years have seen the growing power of the Christian religious right in the United States, the rise of radical Islam around the world, and religiously motivated terrorist attacks such as those in the United States on 11 September 2001. Given all that, says Atran, many scientists find it almost impossible to think of religion as anything but fundamentalism at war with reason.
  • the foundation has embraced the theme of 'science and the big questions' — an open-ended list that includes topics such as 'Does the Universe have a purpose?'
  • Towards the end of Templeton's life, says Marsh, he became increasingly concerned that this reaction was getting in the way of the foundation's mission: that the word 'religion' was alienating too many good scientists.
  • The peer-review and grant-making system has also been revamped: whereas in the past the foundation ran an informal mix of projects generated by Templeton and outside grant seekers, the system is now organized around an annual list of explicit funding priorities.
  • The foundation is still a work in progress, says Jack Templeton — and it always will be. "My father believed," he says, "we were all called to be part of an ongoing creative process. He was always trying to make people think differently." "And he always said, 'If you're still doing today what you tried to do two years ago, then you're not making progress.'" 
Weiye Loh

Happiness: Do we have a choice? » Scienceline - 0 views

  • “Objective choices make a difference to happiness over and above genetics and personality,” said Bruce Headey, a psychologist at Melbourne University in Australia. Headey and his colleagues analyzed annual self-reports of life satisfaction from over 20,000 Germans who have been interviewed every year since 1984. He compared five-year averages of people’s reported life satisfaction, and plotted their relative happiness on a percentile scale from 1 to 100. Headey found that as time went on, more and more people recorded substantial changes in their life satisfaction. By 2008, more than a third had moved up or down on the happiness scale by at least 25 percent, compared to where they had started in 1984.
  • Headey’s findings, published in the October 19th issue of Proceedings of the National Academy of Sciences, run contrary to what is known as the happiness set-point theory — the idea that even if you win the lottery or become a paraplegic, you’ll revert back to the same fixed level of happiness within a year or two. This psychological theory was widely accepted in the 1990s because it explained why happiness levels seemed to remain stable over the long term: They were mainly determined early in life by genetic factors including personality traits.
  • But even this dynamic choice-driven picture does not fully capture the nuance of what it means to be happy, said Jerome Kagan, a Harvard University developmental psychologist. He warns against conflating two distinct dimensions of happiness: everyday emotional experience (an assessment of how you feel at the moment) and life evaluation (a judgment of how satisfied you are with your life). It’s the difference between “how often did you smile yesterday?” and “how does your life compare to the best possible life you can imagine?”
  • ...4 more annotations...
  • Kagan suggests that we may have more choice over the latter, because life evaluation is not a function of how we currently feel — it is a comparison of our life to what we decide the good life should be.
  • Kagan has found that young children differ biologically in the ease with which they can feel happy, or tense, or distressed, or sad — what he calls temperament. People establish temperament early in life and have little capacity to change it. But they can change their life evaluation, which Kagan describes as an ethical concept synonymous with “how good of a life have I led?” The answer will depend on individual choices and the purpose they create for themselves. A painter who is constantly stressed and moody (unhappy in the moment) may still feel validation in creating good artwork and may be very satisfied with his life (happy as a judgment).
  • when it comes to happiness, our choices may matter — but it depends on what the choices are about, and how we define what we want to change.
  • Graham thinks that people may evaluate their happiness based on whichever dimension — happiness at the moment, or life evaluation — they have a choice over.
  •  
    Instead of existing as a stable equilibrium, Headey suggests that happiness is much more dynamic, and that individual choices - about one's partner, working hours, social participation and lifestyle - make substantial and permanent changes to reported happiness levels. For example, doing more or fewer paid hours of work than you want, or exercising regularly, can have just as much impact on life satisfaction as having an extroverted personality.
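The percentile-shift analysis described above is simple enough to sketch. The snippet below is a minimal illustration on synthetic data, not the German panel Headey analyzed; the five-year averaging, the 1-100 percentile scale, and the 25-point threshold come from the excerpt, while everything else (sample size, score distribution, variable names) is assumed for the example.

```python
import numpy as np
import pandas as pd

# Minimal sketch of a percentile-shift analysis on synthetic panel data
# (illustration only; this is NOT the survey data Headey used).
rng = np.random.default_rng(42)
n_people = 2000
years = list(range(1984, 2009))
satisfaction = pd.DataFrame(
    rng.normal(7, 1.5, size=(n_people, len(years))).clip(0, 10),  # 0-10 life-satisfaction scores
    columns=years,
)

# Five-year averages at the start and end of the panel.
start = satisfaction[list(range(1984, 1989))].mean(axis=1)
end = satisfaction[list(range(2004, 2009))].mean(axis=1)

# Relative happiness: percentile rank (1-100) within each period.
start_pct = start.rank(pct=True) * 100
end_pct = end.rank(pct=True) * 100

# Share of people whose relative position shifted by 25 points or more.
moved = (end_pct - start_pct).abs() >= 25
print(f"{moved.mean():.0%} of people moved at least 25 percentile points")
```

On purely random synthetic data the share of "movers" is of course meaningless; the point is only to show the shape of the computation, not to reproduce the study's result.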
Weiye Loh

Epiphenom: If God loves you, why take medicine? - 0 views

  • Sarah Finocchario-Kessler, at the University of Kansas, used data from one such drug trial to see what the effect of religious beliefs (and other psychological factors) was on medication taking.
  • One recent study looked at whether people with HIV took their medicine as they were supposed to. Most trials of new drugs monitor this, and it can be done quite easily, simply by using special bottles that record each time they're opened (a minimal adherence calculation along these lines is sketched after this list).
  • people who used a passive religious deferral coping style (e.g., "I don’t try much of anything; simply expect God to take control") were less likely to take their medicine as often as they were supposed to. On the other hand, collaborative religious coping (e.g., "I work together with God as partners") or self-directing religious coping (e.g., "I make decisions about what to do without God’s help") had no effect on whether people took their medicines.
  • ...4 more annotations...
  • The biggest effect was with those people who scored high on the "God as locus of health control" measure - that means people who agreed with statements like "Whether or not my HIV disease improves is up to God." Although this had no effect on medication taking at 3 months, the halfway point of the study, by the end of the study (at 6 months) people who scored high on this measure were 42% less likely to be taking their medication regularly.
  • This study is interesting because these aren't folks who have any crazy ideas that medicine is useless. Remember, they signed up to take part in a drug study, presumably because they thought they might benefit. What's more, they stayed in the study right to the end, and did take their medicine most of the time. It's just that they were more likely than others to 'forget' it.
  • Now, this is a complicated picture in other ways. People who are at death's door (unlike the mostly healthy people in this study) seem to be more likely to ask for 'heroic' interventions to try to keep them alive if they have strong beliefs in God's will.
  • Maybe confronting your own imminent death triggers some reconsiderations about the mysterious workings of the almighty!
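For readers curious how bottles that record each opening turn into an adherence figure, here is the minimal sketch referred to above. The event format, the once-daily regimen, and the 80% cut-off are all assumptions made for illustration; the excerpt does not describe the study's actual data or thresholds.

```python
from datetime import date, timedelta

# Hypothetical electronic pill-cap log: the set of days the bottle was opened.
# (Illustrative only; the study's real data format is not described above.)
start = date(2007, 1, 1)
open_days = {start + timedelta(days=d) for d in range(0, 180, 2)}  # opened every other day

# Once-daily regimen assumed: one expected dose per day over the monitoring period.
prescribed_days = {start + timedelta(days=d) for d in range(180)}

adherence = len(open_days & prescribed_days) / len(prescribed_days)
print(f"Adherence: {adherence:.0%}")  # 50% in this made-up example

# An 80% cut-off is a common convention (assumed here) for "taking medication regularly".
print("regular" if adherence >= 0.80 else "not regular")
```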
Weiye Loh

Meta-analysis - PsychWiki - A Collaborative Psychology Wiki - 0 views

  • A meta-analysis is only informative if it adequately summarizes the existing literature, so a thorough literature search is critical to retrieve every relevant study, using strategies such as database searches, the ancestry and descendancy approaches, hand searching, and the invisible college (i.e., the network of researchers who know about unpublished studies, conference proceedings, etc.). For more information, see Johnson and Eagly (2000), in the Handbook of Research Methods in Social and Personality Psychology, which details five general ways to retrieve relevant articles.
    • Weiye Loh
       
      How is one able to know that one has exhausted the "invisible college?" Perhaps we need an official record or a database of unpublished studies, conference proceedings, etc. 
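The question above about exhausting the "invisible college" is, in part, a question about how sensitive the pooled summary is to missing studies. As a rough illustration (a generic fixed-effect, inverse-variance pooling with entirely made-up effect sizes, not anything drawn from Johnson and Eagly), the sketch below shows how the summary estimate shifts when hypothetical unpublished, near-null studies are left out:

```python
import numpy as np

def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance) meta-analytic summary."""
    w = 1.0 / np.asarray(variances)
    d = np.asarray(effects)
    estimate = np.sum(w * d) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return estimate, se

# Hypothetical published studies (larger effects) and unpublished ones (near-null),
# mimicking a publication-bias scenario; none of these numbers come from real studies.
published_d, published_var = [0.45, 0.38, 0.52, 0.41], [0.04, 0.05, 0.06, 0.03]
unpublished_d, unpublished_var = [0.05, -0.02, 0.10], [0.07, 0.08, 0.06]

full, full_se = pooled_effect(published_d + unpublished_d, published_var + unpublished_var)
pub_only, pub_se = pooled_effect(published_d, published_var)

print(f"All retrieved studies : d = {full:.2f} (SE {full_se:.2f})")
print(f"Published only        : d = {pub_only:.2f} (SE {pub_se:.2f})")
```

The gap between the two estimates is exactly the kind of bias a thorough retrieval strategy is meant to guard against.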
Weiye Loh

Talking Philosophy | Ethicists, Courtesy & Morals - 0 views

  • research raises questions about the extent to which studying ethics improves moral behavior. To the extent that practical effect is among one’s aims in studying (or as an administrator, in requiring) philosophy, I think there is reason for concern. I’m inclined to think that either philosophy should be justified differently, or we should work harder to try to figure out whether there is a *way* of studying philosophy that is more effective in changing moral behavior than the ordinary (21st century, Anglophone) way of studying philosophy is.”
  • I think it’s fairly common that professionals in any field are skeptical about it. Professional politicians are much more skeptical or even cynical about politics than your average informed citizen. Most of the doctors whom I’ve talked to off the record are fairly skeptical about the merits of medical care. Those who specialize in giving investment “advice” will generally admit that they have no idea about the future of markets with the inevitable comment: “if I really knew how the market will react, I’d be on my yacht, not advising you”.
  •  
    For all their pondering on matters moral, ethicists are no better mannered than other philosophers, and they behave no better morally than other philosophers or other academics either. Or such, at least, are the conclusions suggested by the research of philosophers Eric Schwitzgebel (of the University of California, Riverside) and Joshua Rust (of Stetson University, Florida). In 'Ethicists' Courtesy at Philosophy Conferences', recently published in Philosophical Psychology, Schwitzgebel and Rust report on a study that suggests that audiences in ethics sessions do not behave any better than those attending seminars on other areas of philosophy. Not when it comes to talking audibly whilst a speaker is addressing the room, and not when it comes to 'allowing the door to slam shut while entering or exiting mid-session'. And though, appropriately enough, "audiences in environmental ethics sessions … appear to leave behind less trash", generally speaking the ethicists are just as likely to leave a mess as the epistemologists and metaphysicians.
Weiye Loh

Being Bilingual: Beneficial Workout for the Brain - Research - The Chronicle of Higher ... - 0 views

  • "Bilingual babies pay attention to visual information whether it is specific to their language or not," said Janet F. Werker, director of the Infant Studies Centre at the University of British Columbia.
  •  
    In the latest research, described Friday at the American Association for the Advancement of Science, the onset of the symptoms of Alzheimer's disease was delayed by more than four years in elderly bilingual adults, even though they had identical brain damage compared with a group of adults in the study who spoke only one language. "It's not that being bilingual prevents Alzheimer's," said Ellen Bialystok, a professor of psychology at York University, in Toronto. "It's just that you are better able to cope."
Weiye Loh

"Cancer by the Numbers" by John Allen Paulos | Project Syndicate - 0 views

  • The USPSTF recently issued an even sharper warning about the prostate-specific antigen test for prostate cancer, after concluding that the test’s harms outweigh its benefits. Chest X-rays for lung cancer and Pap tests for cervical cancer have received similar, albeit less definitive, criticism. The next step in the reevaluation of cancer screening was taken last year, when researchers at the Dartmouth Institute for Health Policy announced that the costs of screening for breast cancer were often minimized, and that the benefits were much exaggerated. Indeed, even a mammogram (almost 40 million are given annually in the US) that detects a cancer does not necessarily save a life. The Dartmouth researchers found that, of the estimated 138,000 breast cancers detected annually in the US, the test did not help 120,000-134,000 of the afflicted women. The cancers either were growing so slowly that they did not pose a problem, or they would have been treated successfully if discovered clinically later (or they were so aggressive that little could be done).
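The Dartmouth figures quoted above imply a fairly small helped fraction; a quick back-of-the-envelope calculation (using only the numbers in the excerpt, with the interpretation kept deliberately rough) makes that concrete:

```python
# Rough arithmetic on the Dartmouth figures quoted above.
# The 138,000 and 120,000-134,000 values come from the excerpt; the rest is simple subtraction.
detected_per_year = 138_000                          # breast cancers detected by screening annually (US)
not_helped_low, not_helped_high = 120_000, 134_000   # women for whom detection did not help

helped_high = detected_per_year - not_helped_low     # most optimistic reading
helped_low = detected_per_year - not_helped_high     # most pessimistic reading

print(f"Women plausibly helped: {helped_low:,}-{helped_high:,} per year")
print(f"Roughly {helped_low/detected_per_year:.0%}-{helped_high/detected_per_year:.0%} "
      f"of all screen-detected cases")
```

That works out to about 4,000-18,000 women per year, or roughly 3-13 percent of screen-detected cases.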