
New Media Ethics 2009 course / Group items tagged: Bias


Weiye Loh

Skepticblog » Cognitive Biases and Handedness - 0 views

  • A recent study concerns the biases associated with being left- or right-handed. Our handedness affects our judgments regarding the quality and “goodness” of things in our environment. There is a clear language bias favoring the dominant right-handers: “right” is correct, while left-handed compliments are undesirable, for example. It turns out this is not mere cultural bias, but reflects an underlying cognitive bias. For example: In experiments by psychologist Daniel Casasanto, when people were asked which of two products to buy, which of two job applicants to hire, or which of two alien creatures looks more intelligent, right-handers tended to choose the product, person, or creature they saw on their right, but most left-handers chose the one on their left.
  • when we are put into a situation where we have to make a judgment based mostly on gut feeling or intuition, biases tend to come out. (It is probably difficult for most people to come up with an evidence-based system for assessing which alien looks more intelligent.) It is possible that common evolved sensibilities would dominate in such situations – most people, for example, might pick the alien with the larger eyes. But that is not what the researchers found – simple handedness was the determining factor.
  • This is a subconscious bias. If a subject were asked why they chose the alien on the right, they would probably not say, “because I am right-handed and have an inherent bias toward things in the right side of my visual field.” Rather, they would justify their judgment post-hoc – pointing out features that had nothing to do with their actual decision-making, but giving the illusion of a rational choice.
  • Casasanto found, in the new study, that these biases are also easily manipulated. First he studied stroke patients who were paralyzed on one side of the body or the other. If a right-hander was weak on the left side (as a control), this had no effect on their choice. But if their right side was weak, then their preference shifted to their intact left side. This, however, could be due to the brain damage itself, rather than to the fact that they are now obligate left-handers. So he did a follow-up experiment in which subjects were made to perform a task with a ski glove on one hand. If right-handers wore the glove on their left hand, again this had no effect on their choices. But if they wore it on their right hand while performing tasks for as little as 12 minutes, then their cognitive bias shifted to that of a left-hander.
  • Casasanto observes: ‘People generally think their judgments are rational, and their concepts are stable. But if wearing a glove for a few minutes can reverse people’s usual judgments about what’s good and bad, perhaps the mind is more malleable than we thought.’
  •  
    believers generally operate under the paradigm that seeing is believing, while skeptics operate under the paradigm that, often, believing is seeing.
Weiye Loh

The Decline Effect and the Scientific Method : The New Yorker - 0 views

  • On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties.
  • the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.
  • Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard against the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
  • In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”
  • At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”
  • the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!”
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.
  • the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
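  • As a rough illustration of the mechanism described above, here is a minimal, hypothetical Python sketch (the effect size, sample sizes, and the publish-only-if-significant rule are all invented, not taken from the article). Publishing only the “successful” studies inflates the apparent effect, setting up a later decline when replications report everything.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.2    # small real effect, in standard-deviation units
N_PER_GROUP = 30     # a typical underpowered study
N_STUDIES = 5000     # many labs running the same experiment

all_effects, published_effects = [], []
for _ in range(N_STUDIES):
    treated = rng.normal(TRUE_EFFECT, 1.0, N_PER_GROUP)
    control = rng.normal(0.0, 1.0, N_PER_GROUP)
    _, p = stats.ttest_ind(treated, control)
    effect = treated.mean() - control.mean()
    all_effects.append(effect)
    if p < 0.05 and effect > 0:      # only "successful" positive studies get written up
        published_effects.append(effect)

print(f"true effect size:              {TRUE_EFFECT:.2f}")
print(f"mean effect, all studies:      {np.mean(all_effects):.2f}")
print(f"mean effect, published only:   {np.mean(published_effects):.2f}")  # noticeably inflated
print(f"fraction of studies published: {len(published_effects) / N_STUDIES:.0%}")
```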
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results.
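  • As a rough, hypothetical illustration of what such a funnel graph looks like (simulated data, not Palmer’s), the Python/matplotlib sketch below plots each study’s estimated effect against its sample size; when small studies are reported only if they look positive, the narrow end of the funnel comes out skewed rather than randomly scattered.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
TRUE_EFFECT = 0.0                          # assume no real effect at all

sizes = rng.integers(10, 400, 300)         # 300 studies with varying sample sizes
estimates = rng.normal(TRUE_EFFECT, 1.0 / np.sqrt(sizes))  # sampling error shrinks with n

# Selective reporting: small studies surface only when they look positive
reported = (sizes > 100) | (estimates > 0.1)

plt.scatter(estimates[reported], sizes[reported], s=10, label="reported studies")
plt.axvline(TRUE_EFFECT, color="grey", linestyle="--", label="true effect")
plt.xlabel("estimated effect size")
plt.ylabel("sample size")
plt.title("Hypothetical funnel plot: the small-sample end is skewed, not random")
plt.legend()
plt.show()
```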
  • Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
  • scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.)
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
  • Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.
  • The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
  • This suggests that the decline effect is actually a decline of illusion.
  • While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
Weiye Loh

Political - or politicized? - psychology » Scienceline - 0 views

  • The idea that your personal characteristics could be linked to your political ideology has intrigued political psychologists for decades. Numerous studies suggest that liberals and conservatives differ not only in their views toward government and society, but also in their behavior, their personality, and even how they travel, decorate, clean and spend their leisure time. In today’s heated political climate, understanding people on the “other side” — whether that side is left or right — takes on new urgency. But as researchers study the personal side of politics, could they be influenced by political biases of their own?
  • Consider the following 2006 study by the late California psychologists Jeanne and Jack Block, which compared the personalities of nursery school children to their political leanings as 23-year olds. Preschoolers who went on to identify as liberal were described by the authors as self-reliant, energetic, somewhat dominating and resilient. The children who later identified as conservative were described as easily offended, indecisive, fearful, rigid, inhibited and vulnerable. The negative descriptions of conservatives in this study strike Jacob Vigil, a psychologist at the University of New Mexico, as morally loaded. Studies like this one, he said, use language that suggests the researchers are “motivated to present liberals with more ideal descriptions as compared to conservatives.”
  • Most of the researchers in this field are, in fact, liberal. In 2007 UCLA’s Higher Education Research Institute conducted a survey of faculty at four-year colleges and universities in the United States. About 68 percent of the faculty in history, political science and social science departments characterized themselves as liberal, 22 percent characterized themselves as moderate, and only 10 percent as conservative. Some social psychologists, like Jonathan Haidt of the University of Virginia, have charged that this liberal majority distorts the research in political psychology.
  • It’s a charge that John Jost, a social psychologist at New York University, flatly denies. Findings in political psychology bear upon deeply held personal beliefs and attitudes, he said, so they are bound to spark controversy. Research showing that conservatives score higher on measures of “intolerance of ambiguity” or the “need for cognitive closure” might bother some people, said Jost, but that does not make it biased.
  • “The job of the behavioral scientist is not to try to find something to say that couldn’t possibly be offensive,” said Jost. “Our job is to say what we think is true, and why.
  • Jost and his colleagues in 2003 compiled a meta-analysis of 88 studies from 12 different countries conducted over a 40-year period. They found strong evidence that conservatives tend to have higher needs to reduce uncertainty and threat. Conservatives also share psychological factors like fear, aggression, dogmatism, and the need for order, structure and closure. Political conservatism, they explained, could serve as a defense against anxieties and threats that arise out of everyday uncertainty, by justifying the status quo and preserving conditions that are comfortable and familiar.
  • The study triggered quite a public reaction, particularly within the conservative blogosphere. But the criticisms, according to Jost, were mistakenly focused on the researchers themselves; the findings were not disputed by the scientific community and have since been replicated. For example, a 2009 study followed college students over the span of their undergraduate experience and found that higher perceptions of threat did indeed predict political conservatism. Another 2009 study found that when confronted with a threat, liberals actually become more psychologically and politically conservative. Some studies even suggest that physiological traits like sensitivity to sudden noises or threatening images are associated with conservative political attitudes.
  • “The debate should always be about the data and its proper interpretation,” said Jost, “and never about the characteristics or motives of the researchers.” Philip Tetlock, a psychologist at the University of California, Berkeley, agrees. However, Tetlock thinks that identifying the proper interpretation can be tricky, since personality measures can be described in many ways. “One observer’s ‘dogmatism’ can be another’s ‘principled,’ and one observer’s ‘open-mindedness’ can be another’s ‘flaccid and vacillating,’” Tetlock explained.
  • Richard Redding, a professor of law and psychology at Chapman University in Orange, California, points to a more general, indirect bias in political psychology. “It’s not the case that researchers are intentionally skewing the data,” which rarely happens, Redding said. Rather, the problem may lie in what sorts of questions are or are not asked.
  • For example, a conservative might be more inclined to undertake research on affirmative action in a way that would identify any negative outcomes, whereas a liberal probably wouldn’t, said Redding. Likewise, there may be aspects of personality that liberals simply haven’t considered. Redding is currently conducting a large-scale study on self-righteousness, which he suspects may be associated more highly with liberals than conservatives.
  • “The way you frame a problem is to some extent dictated by what you think the problem is,” said David Sears, a political psychologist at the University of California, Los Angeles. People’s strong feelings about issues like prejudice, sexism, authoritarianism, aggression, and nationalism — the bread and butter of political psychology — may influence how they design a study or present a problem.
  • The indirect bias that Sears and Redding identify is a far cry from the liberal groupthink others warn against. But given that psychology departments are predominantly left leaning, it’s important to seek out alternative viewpoints and explanations, said Jesse Graham, a social psychologist at the University of Southern California. A self-avowed liberal, Graham thinks it would be absurd to say he couldn’t do fair science because of his political preferences. “But,” he said, “it is something that I try to keep in mind.”
Weiye Loh

Why is the media so pro-American? - 0 views

  •  
    "And why is it that a terrorist attack in Boston that kills three people can dominate the news for days while a series of bombings in Iraq, which have thus far killed 279 people, gets relegated to the 'World' section on most Australian news websites? I suspect there are two things going on. One is a cultural bias; Americans are more 'like us' so the media and the audience is more interested in what happens to Americans. The second is differences in the supply of news; Bangladesh and Iraq get less attention because there are fewer journalists working there to cover the story."
Weiye Loh

Why Evolution May Favor Irrationality - Newsweek - 0 views

  • The reason we succumb to confirmation bias, why we are blind to counterexamples, and why we fall short of Cartesian logic in so many other ways is that these lapses have a purpose: they help us “devise and evaluate arguments that are intended to persuade other people,” says psychologist Hugo Mercier of the University of Pennsylvania. Failures of logic, he and cognitive scientist Dan Sperber of the Institut Jean Nicod in Paris propose, are in fact effective ploys to win arguments.
  • That puts poor reasoning in a completely different light. Arguing, after all, is less about seeking truth than about overcoming opposing views.
  • while confirmation bias, for instance, may mislead us about what’s true and real, by letting examples that support our view monopolize our memory and perception, it maximizes the artillery we wield when trying to convince someone that, say, he really is “late all the time.” Confirmation bias “has a straightforward explanation,” argues Mercier. “It contributes to effective argumentation.”
  • finding counterexamples can, in general, weaken our confidence in our own arguments. Forms of reasoning that are good for solving logic puzzles but bad for winning arguments lost out, over the course of evolution, to those that help us be persuasive but cause us to struggle with abstract syllogisms. Interestingly, syllogisms are easier to evaluate in the form “No flying things are penguins; all penguins are birds; so some birds are not fliers.” That’s because we are more likely to argue about animals than A, B, and C.
  • The sort of faulty thinking called motivated reasoning also impedes our search for truth but advances arguments. For instance, we tend to look harder for flaws in a study when we don’t agree with its conclusions and are more critical of evidence that undermines our point of view.
  •  
    The Limits of Reason: Why evolution may favor irrationality.
Weiye Loh

Skepticblog » The Decline Effect - 0 views

  • The first group are those with an overly simplistic or naive sense of how science functions. This is a view of science similar to those films created in the 1950s and meant to be watched by students, with the jaunty music playing in the background. This view generally respects science, but has a significant underappreciation for the flaws and complexity of science as a human endeavor. Those with this view are easily scandalized by revelations of the messiness of science.
  • The second cluster is what I would call scientific skepticism – which combines a respect for science and empiricism as a method (really “the” method) for understanding the natural world, with a deep appreciation for all the myriad ways in which the endeavor of science can go wrong. Scientific skeptics, in fact, seek to formally understand the process of science as a human endeavor with all its flaws. It is therefore often skeptics pointing out phenomena such as publication bias, the placebo effect, the need for rigorous controls and blinding, and the many vagaries of statistical analysis. But at the end of the day, as complex and messy as the process of science is, a reliable picture of reality is slowly ground out.
  • The third group, often frustrating to scientific skeptics, are the science-deniers (for lack of a better term). They may take a postmodernist approach to science – science is just one narrative with no special relationship to the truth. Whatever you call it, what the science-deniers in essence do is describe all of the features of science that the skeptics do (sometimes annoyingly pretending that they are pointing these features out to skeptics) but then come to a different conclusion at the end – that science (essentially) does not work.
  • this third group – the science deniers – started out in the naive group, and then were so scandalized by the realization that science is a messy human endeavor that they leapt right to the nihilistic conclusion that science must therefore be bunk.
  • The article by Lehrer falls generally into this third category. He is discussing what has been called “the decline effect” – the fact that effect sizes in scientific studies tend to decrease over time, sometimes to nothing.
  • This term was first applied to the parapsychological literature, and was in fact proposed as a real phenomenon of ESP – that ESP effects literally decline over time. Skeptics have criticized this view as magical thinking and hopelessly naive – Occam’s razor favors the conclusion that it is the flawed measurement of ESP, not ESP itself, that is declining over time.
  • Lehrer, however, applies this idea to all of science, not just parapsychology. He writes: And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
  • Lehrer is ultimately referring to aspects of science that skeptics have been pointing out for years (as a way of discerning science from pseudoscience), but Lehrer takes it to the nihilistic conclusion that it is difficult to prove anything, and that ultimately “we still have to choose what to believe.” Bollocks!
  • Lehrer is describing the cutting edge or the fringe of science, and then acting as if it applies all the way down to the core. I think the problem is that there is so much scientific knowledge that we take for granted – so much so that we forget it is knowledge that derived from the scientific method, and at one point was not known.
  • It is telling that Lehrer uses as his primary examples of the decline effect studies from medicine, psychology, and ecology – areas where the signal to noise ratio is lowest in the sciences, because of the highly variable and complex human element. We don’t see as much of a decline effect in physics, for example, where phenomena are more objective and concrete.
  • If the truth itself does not “wear off”, as the headline of Lehrer’s article provocatively states, then what is responsible for this decline effect?
  • it is no surprise that effect sizes in preliminary studies tend to be positive. This can be explained on the basis of experimenter bias – scientists want to find positive results, and initial experiments are often flawed or less than rigorous. It takes time to figure out how to rigorously study a question, and so early studies will tend not to control for all the necessary variables. There is further publication bias, in which positive studies tend to be published more than negative studies.
  • Further, some preliminary research may be based upon chance observations – a false pattern based upon a quirky cluster of events. If these initial observations are used in the preliminary studies, then the statistical fluke will be carried forward. Later studies are then likely to exhibit a regression to the mean, or a return to more statistically likely results (which is exactly why you shouldn’t use initial data when replicating a result, but should use entirely fresh data – a mistake for which astrologers are infamous).
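  • A minimal, hypothetical sketch of that last point (all numbers invented): a striking cluster is selected out of pure noise; carrying those same observations into the “replication” preserves the fluke, while genuinely fresh data regress to the mean.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pure noise: 200 "subjects" with no real effect (true mean = 0)
initial = rng.normal(0.0, 1.0, 200)

# A chance observation: the 20 highest scorers look like an exciting cluster
top = np.sort(initial)[-20:]
print(f"selected fluke cluster mean:        {top.mean():.2f}")      # looks impressive

# Flawed replication: keep the original cluster and merely add some new data
tainted = np.concatenate([top, rng.normal(0.0, 1.0, 20)])
print(f"'replication' reusing initial data: {tainted.mean():.2f}")  # fluke carried forward

# Proper replication: entirely fresh subjects
fresh = rng.normal(0.0, 1.0, 40)
print(f"replication with fresh data only:   {fresh.mean():.2f}")    # regresses toward zero
```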
  • skeptics are frequently cautioning against new or preliminary scientific research. Don’t get excited by every new study touted in the lay press, or even by a university’s press release. Most new findings turn out to be wrong. In science, replication is king. Consensus and reliable conclusions are built upon multiple independent lines of evidence, replicated over time, all converging on one conclusion.
  • Lehrer does make some good points in his article, but they are points that skeptics are fond of making. In order to have a mature and functional appreciation for the process and findings of science, it is necessary to understand how science works in the real world, as practiced by flawed scientists and scientific institutions. This is the skeptical message.
  • But at the same time reliable findings in science are possible, and happen frequently – when results can be replicated and when they fit into the expanding intricate weave of the picture of the natural world being generated by scientific investigation.
Weiye Loh

Odds Are, It's Wrong - Science News - 0 views

  • science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings.
  • a mutant form of math has deflected science’s heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
  • science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.
  • Experts in the math of probability and statistics are well aware of these problems and have for decades expressed concern about them in major journals. Over the years, hundreds of published papers have warned that science’s love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn’t believe what you read in the scientific literature.
  • “There are more false claims made in the medical literature than anybody appreciates,” he says. “There’s no question about that.” Nobody contends that all of science is wrong, or that it hasn’t compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. “A lot of scientists don’t understand statistics,” says Goodman. “And they don’t understand statistics because the statistics don’t make sense.”
  • In 2007, for instance, researchers combing the medical literature found numerous studies linking a total of 85 genetic variants in 70 different genes to acute coronary syndrome, a cluster of heart problems. When the researchers compared genetic tests of 811 patients who had the syndrome with a group of 650 (matched for sex and age) who didn’t, only one of the suspect gene variants turned up substantially more often in those with the syndrome — a number to be expected by chance. “Our null results provide no support for the hypothesis that any of the 85 genetic variants tested is a susceptibility factor” for the syndrome, the researchers reported in the Journal of the American Medical Association. How could so many studies be wrong? Because their conclusions relied on “statistical significance,” a concept at the heart of the mathematical analysis of modern scientific experiments.
  • Statistical significance is a phrase that every science graduate student learns, but few comprehend. While its origins stretch back at least to the 19th century, the modern notion was pioneered by the mathematician Ronald A. Fisher in the 1920s. His original interest was agriculture. He sought a test of whether variation in crop yields was due to some specific intervention (say, fertilizer) or merely reflected random factors beyond experimental control. Fisher first assumed that fertilizer caused no difference — the “no effect” or “null” hypothesis. He then calculated a number called the P value, the probability that an observed yield in a fertilized field would occur if fertilizer had no real effect. If P is less than .05 — meaning the chance of a fluke is less than 5 percent — the result should be declared “statistically significant,” Fisher arbitrarily declared, and the no effect hypothesis should be rejected, supposedly confirming that fertilizer works. Fisher’s P value eventually became the ultimate arbiter of credibility for science results of all sorts.
  • But in fact, there’s no logical basis for using a P value from a single study to draw any conclusion. If the chance of a fluke is less than 5 percent, two possible conclusions remain: There is a real effect, or the result is an improbable fluke. Fisher’s method offers no way to know which is which. On the other hand, if a study finds no statistically significant effect, that doesn’t prove anything, either. Perhaps the effect doesn’t exist, or maybe the statistical test wasn’t powerful enough to detect a small but real effect.
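  • As a concrete (and entirely hypothetical) version of Fisher’s fertilizer test, the Python sketch below estimates a P value by simulation: assume the fertilizer does nothing (the null hypothesis), shuffle the plot labels many times, and count how often a difference in mean yield at least as large as the observed one turns up by chance. The yields are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented yields (tonnes per hectare) from 8 fertilized and 8 unfertilized plots
fertilized   = np.array([5.9, 6.1, 5.7, 6.4, 6.0, 6.2, 5.8, 6.3])
unfertilized = np.array([5.6, 5.8, 5.5, 6.0, 5.7, 5.9, 5.4, 5.8])
observed_diff = fertilized.mean() - unfertilized.mean()

# Permutation test: under the null hypothesis the group labels are arbitrary
pooled = np.concatenate([fertilized, unfertilized])
N_PERMUTATIONS = 100_000
count = 0
for _ in range(N_PERMUTATIONS):
    rng.shuffle(pooled)
    diff = pooled[:8].mean() - pooled[8:].mean()
    if diff >= observed_diff:
        count += 1

p_value = count / N_PERMUTATIONS
print(f"observed difference: {observed_diff:.2f} t/ha")
print(f"one-sided P value:   {p_value:.3f}")
# P < .05 means "unlikely if fertilizer did nothing", NOT "95% certain fertilizer works".
```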
  • Soon after Fisher established his system of statistical significance, it was attacked by other mathematicians, notably Egon Pearson and Jerzy Neyman. Rather than testing a null hypothesis, they argued, it made more sense to test competing hypotheses against one another. That approach also produces a P value, which is used to gauge the likelihood of a “false positive” — concluding an effect is real when it actually isn’t. What eventually emerged was a hybrid mix of the mutually inconsistent Fisher and Neyman-Pearson approaches, which has rendered interpretations of standard statistics muddled at best and simply erroneous at worst. As a result, most scientists are confused about the meaning of a P value or how to interpret it. “It’s almost never, ever, ever stated correctly, what it means,” says Goodman.
  • experimental data yielding a P value of .05 means that there is only a 5 percent chance of obtaining the observed (or more extreme) result if no real effect exists (that is, if the no-difference hypothesis is correct). But many explanations mangle the subtleties in that definition. A recent popular book on issues involving science, for example, states a commonly held misperception about the meaning of statistical significance at the .05 level: “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.”
  • That interpretation commits an egregious logical error (technical term: “transposed conditional”): confusing the odds of getting a result (if a hypothesis is true) with the odds favoring the hypothesis if you observe that result. A well-fed dog may seldom bark, but observing the rare bark does not imply that the dog is hungry. A dog may bark 5 percent of the time even if it is well-fed all of the time. (See Box 2)
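  • The barking-dog analogy can be made numerically explicit. In the hypothetical sketch below (numbers invented for illustration), the dog barks only 5 percent of the time when well fed, yet a bark still says little about hunger, because it also matters how often the dog is hungry in the first place: P(bark | well fed) is not the same thing as P(hungry | bark).

```python
# Hypothetical numbers, chosen only to illustrate the "transposed conditional" error.
p_hungry           = 0.02   # the dog is almost always well fed
p_bark_if_hungry   = 0.80
p_bark_if_well_fed = 0.05   # the rare bark from a well-fed dog (the "5 percent")

# Total probability of hearing a bark
p_bark = p_bark_if_hungry * p_hungry + p_bark_if_well_fed * (1 - p_hungry)

# Bayes' theorem: probability the dog is hungry GIVEN that it barked
p_hungry_given_bark = p_bark_if_hungry * p_hungry / p_bark

print(f"P(bark | well fed) = {p_bark_if_well_fed:.0%}   <- analogous to a P value")
print(f"P(hungry | bark)   = {p_hungry_given_bark:.0%}  <- what people mistake it for")
```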
    • Weiye Loh
       
       Does the problem, then, lie not in statistics but in the interpretation of statistics? Is the fallacy of appeal to probability at work in such interpretations?
  • Another common error equates statistical significance to “significance” in the ordinary use of the word. Because of the way statistical formulas work, a study with a very large sample can detect “statistical significance” for a small effect that is meaningless in practical terms. A new drug may be statistically better than an old drug, but for every thousand people you treat you might get just one or two additional cures — not clinically significant. Similarly, when studies claim that a chemical causes a “significantly increased risk of cancer,” they often mean that it is just statistically significant, possibly posing only a tiny absolute increase in risk.
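  • A hypothetical worked example of that distinction (figures invented): with a million patients per arm, a drug that produces roughly two extra cures per thousand treated still sails well past the P < .05 bar, even though the clinical benefit is marginal.

```python
import math

# Hypothetical trial: 1,000,000 patients per arm, tiny absolute benefit
n = 1_000_000
cures_old = 100_000      # 10.0% cure rate on the old drug
cures_new = 102_000      # 10.2% cure rate on the new drug: 2 extra cures per 1,000

p_old, p_new = cures_old / n, cures_new / n
p_pooled = (cures_old + cures_new) / (2 * n)

# Standard two-proportion z-test
se = math.sqrt(2 * p_pooled * (1 - p_pooled) / n)
z = (p_new - p_old) / se
p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided P value from the normal distribution

print(f"absolute benefit: {p_new - p_old:.3f}  (about 2 extra cures per 1,000 treated)")
print(f"z = {z:.1f}, P = {p_value:.1e}  -> highly 'significant', clinically marginal")
```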
  • Statisticians perpetually caution against mistaking statistical significance for practical importance, but scientific papers commit that error often. Ziliak studied journals from various fields — psychology, medicine and economics among others — and reported frequent disregard for the distinction.
  • “I found that eight or nine of every 10 articles published in the leading journals make the fatal substitution” of equating statistical significance to importance, he said in an interview. Ziliak’s data are documented in the 2008 book The Cult of Statistical Significance, coauthored with Deirdre McCloskey of the University of Illinois at Chicago.
  • Multiplicity of mistakes: Even when “significance” is properly defined and P values are carefully calculated, statistical inference is plagued by many other problems. Chief among them is the “multiplicity” issue — the testing of many hypotheses simultaneously. When several drugs are tested at once, or a single drug is tested on several groups, chances of getting a statistically significant but false result rise rapidly.
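  • A hypothetical simulation of the multiplicity problem (all parameters invented): even when none of twenty candidate drugs does anything at all, testing each at the usual .05 threshold makes at least one spurious “discovery” quite likely.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

N_PROGRAMMES = 2000    # simulated research programmes
N_DRUGS = 20           # hypotheses tested per programme
N_PATIENTS = 50        # per arm; every drug is in fact useless

false_alarm = 0
for _ in range(N_PROGRAMMES):
    found_something = False
    for _ in range(N_DRUGS):
        drug = rng.normal(0.0, 1.0, N_PATIENTS)      # no real effect anywhere
        placebo = rng.normal(0.0, 1.0, N_PATIENTS)
        _, p = stats.ttest_ind(drug, placebo)
        if p < 0.05:
            found_something = True
    false_alarm += found_something

print(f"chance of at least one spurious 'discovery': {false_alarm / N_PROGRAMMES:.0%}")
print(f"theoretical value: {1 - 0.95 ** N_DRUGS:.0%}")   # roughly 64% for 20 independent tests
```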
  • Recognizing these problems, some researchers now calculate a “false discovery rate” to warn of flukes disguised as real effects. And genetics researchers have begun using “genome-wide association studies” that attempt to ameliorate the multiplicity issue (SN: 6/21/08, p. 20).
  • Many researchers now also commonly report results with confidence intervals, similar to the margins of error reported in opinion polls. Such intervals, usually given as a range that should include the actual value with 95 percent confidence, do convey a better sense of how precise a finding is. But the 95 percent confidence calculation is based on the same math as the .05 P value and so still shares some of its problems.
  • Statistical problems also afflict the “gold standard” for medical research, the randomized, controlled clinical trials that test drugs for their ability to cure or their power to harm. Such trials assign patients at random to receive either the substance being tested or a placebo, typically a sugar pill; random selection supposedly guarantees that patients’ personal characteristics won’t bias the choice of who gets the actual treatment. But in practice, selection biases may still occur, Vance Berger and Sherri Weinstein noted in 2004 in Controlled Clinical Trials. “Some of the benefits ascribed to randomization, for example that it eliminates all selection bias, can better be described as fantasy than reality,” they wrote.
  • Randomization also should ensure that unknown differences among individuals are mixed in roughly the same proportions in the groups being tested. But statistics do not guarantee an equal distribution any more than they prohibit 10 heads in a row when flipping a penny. With thousands of clinical trials in progress, some will not be well randomized. And DNA differs at more than a million spots in the human genetic catalog, so even in a single trial differences may not be evenly mixed. In a sufficiently large trial, unrandomized factors may balance out, if some have positive effects and some are negative. (See Box 3) Still, trial results are reported as averages that may obscure individual differences, masking beneficial or harmful effects and possibly leading to approval of drugs that are deadly for some and denial of effective treatment to others.
  • Another concern is the common strategy of combining results from many trials into a single “meta-analysis,” a study of studies. In a single trial with relatively few participants, statistical tests may not detect small but real and possibly important effects. In principle, combining smaller studies to create a larger sample would allow the tests to detect such small effects. But statistical techniques for doing so are valid only if certain criteria are met. For one thing, all the studies conducted on the drug must be included — published and unpublished. And all the studies should have been performed in a similar way, using the same protocols, definitions, types of patients and doses. When combining studies with differences, it is necessary first to show that those differences would not affect the analysis, Goodman notes, but that seldom happens. “That’s not a formal part of most meta-analyses,” he says.
  • Meta-analyses have produced many controversial conclusions. Common claims that antidepressants work no better than placebos, for example, are based on meta-analyses that do not conform to the criteria that would confer validity. Similar problems afflicted a 2007 meta-analysis, published in the New England Journal of Medicine, that attributed increased heart attack risk to the diabetes drug Avandia. Raw data from the combined trials showed that only 55 people in 10,000 had heart attacks when using Avandia, compared with 59 people per 10,000 in comparison groups. But after a series of statistical manipulations, Avandia appeared to confer an increased risk.
  • combining small studies in a meta-analysis is not a good substitute for a single trial sufficiently large to test a given question. “Meta-analyses can reduce the role of chance in the interpretation but may introduce bias and confounding,” Hennekens and DeMets write in the Dec. 2 Journal of the American Medical Association. “Such results should be considered more as hypothesis formulating than as hypothesis testing.”
  • Some studies show dramatic effects that don’t require sophisticated statistics to interpret. If the P value is 0.0001 — a hundredth of a percent chance of a fluke — that is strong evidence, Goodman points out. Besides, most well-accepted science is based not on any single study, but on studies that have been confirmed by repetition. Any one result may be likely to be wrong, but confidence rises quickly if that result is independently replicated. “Replication is vital,” says statistician Juliet Shaffer, a lecturer emeritus at the University of California, Berkeley. And in medicine, she says, the need for replication is widely recognized. “But in the social sciences and behavioral sciences, replication is not common,” she noted in San Diego in February at the annual meeting of the American Association for the Advancement of Science. “This is a sad situation.”
  • Most critics of standard statistics advocate the Bayesian approach to statistical reasoning, a methodology that derives from a theorem credited to Bayes, an 18th century English clergyman. His approach uses similar math, but requires the added twist of a “prior probability” — in essence, an informed guess about the expected probability of something in advance of the study. Often this prior probability is more than a mere guess — it could be based, for instance, on previous studies.
  • it basically just reflects the need to include previous knowledge when drawing conclusions from new observations. To infer the odds that a barking dog is hungry, for instance, it is not enough to know how often the dog barks when well-fed. You also need to know how often it eats — in order to calculate the prior probability of being hungry. Bayesian math combines a prior probability with observed data to produce an estimate of the likelihood of the hunger hypothesis. “A scientific hypothesis cannot be properly assessed solely by reference to the observational data,” but only by viewing the data in light of prior belief in the hypothesis, wrote George Diamond and Sanjay Kaul of UCLA’s School of Medicine in 2004 in the Journal of the American College of Cardiology. “Bayes’ theorem is ... a logically consistent, mathematically valid, and intuitive way to draw inferences about the hypothesis.” (See Box 4)
  • In many real-life contexts, Bayesian methods do produce the best answers to important questions. In medical diagnoses, for instance, the likelihood that a test for a disease is correct depends on the prevalence of the disease in the population, a factor that Bayesian math would take into account.
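  • A hypothetical numerical version of that diagnostic example (sensitivity, specificity, and prevalence all invented): Bayes’ theorem combines the prior probability of disease, i.e. its prevalence, with the test result, and a positive result from an accurate test can still leave the disease unlikely when the condition is rare.

```python
# All figures are hypothetical, for illustration only.
prevalence  = 0.001   # prior probability: 1 in 1,000 people has the disease
sensitivity = 0.99    # P(test positive | disease)
specificity = 0.95    # P(test negative | no disease)

# Total probability of a positive test (true positives plus false positives)
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Posterior probability of disease given a positive test (Bayes' theorem)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {p_disease_given_positive:.1%}")
# About 1.9%: the prior (prevalence) dominates, despite a 99%-sensitive test.
```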
  • But Bayesian methods introduce a confusion into the actual meaning of the mathematical concept of “probability” in the real world. Standard or “frequentist” statistics treat probabilities as objective realities; Bayesians treat probabilities as “degrees of belief” based in part on a personal assessment or subjective decision about what to include in the calculation. That’s a tough placebo to swallow for scientists wedded to the “objective” ideal of standard statistics. “Subjective prior beliefs are anathema to the frequentist, who relies instead on a series of ad hoc algorithms that maintain the facade of scientific objectivity,” Diamond and Kaul wrote. Conflict between frequentists and Bayesians has been ongoing for two centuries. So science’s marriage to mathematics seems to entail some irreconcilable differences. Whether the future holds a fruitful reconciliation or an ugly separation may depend on forging a shared understanding of probability. “What does probability mean in real life?” the statistician David Salsburg asked in his 2001 book The Lady Tasting Tea. “This problem is still unsolved, and ... if it remains unsolved, the whole of the statistical approach to science may come crashing down from the weight of its own inconsistencies.”
  •  
    Odds Are, It's Wrong: Science fails to face the shortcomings of statistics
Weiye Loh
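Added for illustration: a worked numerical sketch of the Bayesian point in the annotations above. Every number in it is invented for this sketch (none of it comes from the article); it simply shows how a prior probability reshapes the conclusion drawn from the same evidence, and how independent replication shifts belief.

    # Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E). The prior P(H) is the
    # "previous knowledge" the annotation above describes. Numbers are made up.

    def posterior(prior, p_evidence_if_true, p_evidence_if_false):
        p_evidence = (p_evidence_if_true * prior
                      + p_evidence_if_false * (1 - prior))
        return p_evidence_if_true * prior / p_evidence

    # Barking dog: suppose it is hungry 10% of the time, barks 80% of the time
    # when hungry and 30% of the time when not. Barking alone is weak evidence.
    print(posterior(0.10, 0.80, 0.30))   # ~0.23

    # Diagnostic test: 1% prevalence, 95% sensitivity, 5% false-positive rate.
    print(posterior(0.01, 0.95, 0.05))   # ~0.16, far from conclusive

    # Independent replication: feed the first posterior back in as the new prior.
    print(posterior(posterior(0.01, 0.95, 0.05), 0.95, 0.05))   # ~0.78

The last line is the replication point made a few annotations earlier: one positive result moves belief only modestly when the prior is low, but an independent repeat of the same result moves it a long way.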

Rationally Speaking: The problem of replicability in science - 0 views

  • The problem of replicability in science (image from xkcd), by Massimo Pigliucci
  • In recent months much has been written about the apparent fact that a surprising, indeed disturbing, number of scientific findings cannot be replicated, or when replicated the effect size turns out to be much smaller than previously thought.
  • Arguably, the recent streak of articles on this topic began with one penned by David Freedman in The Atlantic, and provocatively entitled “Lies, Damned Lies, and Medical Science.” In it, the major character was John Ioannidis, the author of some influential meta-studies about the low degree of replicability and high number of technical flaws in a significant portion of published papers in the biomedical literature.
  • ...18 more annotations...
  • As Freedman put it in The Atlantic: “80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.” Ioannidis himself was quoted uttering some sobering words for the medical community (and the public at large): “Science is a noble endeavor, but it’s also a low-yield endeavor. I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”
  • Julia and I actually addressed this topic during a Rationally Speaking podcast, featuring as guest our friend Steve Novella, of Skeptics’ Guide to the Universe and Science-Based Medicine fame. But while Steve did quibble with the tone of the Atlantic article, he agreed that Ioannidis’ results are well known and accepted by the medical research community. Steve did point out that it should not be surprising that results get better and better as one moves toward more stringent protocols like large randomized trials, but it seems to me that one should be surprised (actually, appalled) by the fact that even there the percentage of flawed studies is high — not to mention the fact that most studies are in fact neither large nor properly randomized.
  • The second big recent blow to public perception of the reliability of scientific results is an article published in The New Yorker by Jonah Lehrer, entitled “The truth wears off.” Lehrer also mentions Ioannidis, but the bulk of his essay is about findings in psychiatry, psychology and evolutionary biology (and even in research on the paranormal!).
  • In these disciplines there are now several documented cases of results that were initially spectacularly positive — for instance the effects of second generation antipsychotic drugs, or the hypothesized relationship between a male’s body symmetry and the quality of his genes — that turned out to be increasingly difficult to replicate over time, with the original effect sizes being cut down dramatically, or even disappearing altogether.
  • As Lehrer concludes at the end of his article: “Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling.”
  • None of this should actually be particularly surprising to any practicing scientist. If you have spent a significant time of your life in labs and reading the technical literature, you will appreciate the difficulties posed by empirical research, not to mention a number of issues such as the fact that few scientists ever actually bother to replicate someone else’s results, for the simple reason that there is no Nobel (or even funded grant, or tenured position) waiting for the guy who arrived second.
  • In the midst of this I was directed by a tweet from my colleague Neil deGrasse Tyson (who has also appeared on the RS podcast, though in a different context) to a recent ABC News article penned by John Allen Paulos, which was meant to explain the decline effect in science.
  • Paulos’ article is indeed concise and on the mark (though several of the explanations he proposes were already brought up in both the Atlantic and New Yorker essays), but it doesn’t really make things much better.
  • Paulos suggests that one explanation for the decline effect is the well known statistical phenomenon of the regression toward the mean. This phenomenon is responsible, among other things, for a fair number of superstitions: you’ve probably heard of some athletes’ and other celebrities’ fear of being featured on the cover of a magazine after a particularly impressive series of accomplishments, because this brings “bad luck,” meaning that the following year one will not be able to repeat the performance at the same level. This is actually true, not because of magical reasons, but simply as a result of the regression to the mean: extraordinary performances are the result of a large number of factors that have to line up just right for the spectacular result to be achieved. The statistical chances of such an alignment to repeat itself are low, so inevitably next year’s performance will likely be below par. Paulos correctly argues that this also explains some of the decline effect of scientific results: the first discovery might have been the result of a number of factors that are unlikely to repeat themselves in exactly the same way, thus reducing the effect size when the study is replicated.
  • Another major determinant of the unreliability of scientific results mentioned by Paulos is the well-known problem of publication bias: crudely put, science journals (particularly the high-profile ones, like Nature and Science) are interested only in positive, spectacular, “sexy” results. Which creates a powerful filter against negative, or marginally significant results. What you see in science journals, in other words, isn’t a statistically representative sample of scientific results, but a highly biased one, in favor of positive outcomes. No wonder that when people try to repeat the feat they often come up empty handed. (A short simulation after this item shows how this filter, combined with regression to the mean, produces the decline effect.)
  • A third cause for the problem, not mentioned by Paulos but addressed in the New Yorker article, is the selective reporting of results by scientists themselves. This is essentially the same phenomenon as the publication bias, except that this time it is scientists themselves, not editors and reviewers, who don’t bother to submit for publication results that are either negative or not strongly conclusive. Again, the outcome is that what we see in the literature isn’t all the science that we ought to see. And it’s no good to argue that it is the “best” science, because the quality of scientific research is measured by the appropriateness of the experimental protocols (including the use of large samples) and of the data analyses — not by whether the results happen to confirm the scientist’s favorite theory.
  • The conclusion of all this is not, of course, that we should throw the baby (science) out with the bath water (bad or unreliable results). But scientists should also be under no illusion that these are rare anomalies that do not affect scientific research at large. Too much emphasis is being put on the “publish or perish” culture of modern academia, with the result that graduate students are explicitly instructed to go for the SPUs — Smallest Publishable Units — when they have to decide how much of their work to submit to a journal. That way they maximize the number of their publications, which maximizes the chances of landing a postdoc position, and then a tenure track one, and then of getting grants funded, and finally of getting tenure. The result is that, according to statistics published by Nature, it turns out that about ⅓ of published studies are never cited (not to mention replicated!).
  • “Scientists these days tend to keep up the polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist’s field and methods of study are as good as every other scientist’s, and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants. ... We speak piously of taking measurements and making small studies that will ‘add another brick to the temple of science.’ Most such bricks lie around the brickyard.”
    • Weiye Loh
       
      Written by John Platt in a "Science" article published in 1964
  • Most damning of all, however, is the potential effect that all of this may have on science’s already dubious reputation with the general public (think evolution-creation, vaccine-autism, or climate change)
  • “If we don’t tell the public about these problems, then we’re no better than non-scientists who falsely claim they can heal. If the drugs don’t work and we’re not sure how to treat something, why should we claim differently? Some fear that there may be less funding because we stop claiming we can prove we have miraculous treatments. But if we can’t really provide those miracles, how long will we be able to fool the public anyway? The scientific enterprise is probably the most fantastic achievement in human history, but that doesn’t mean we have a right to overstate what we’re accomplishing.”
  • Joseph T. Lapp said... But is any of this new for science? Perhaps science has operated this way all along, full of fits and starts, mostly duds. How do we know that this isn't the optimal way for science to operate? My issues are with the understanding of science that high school graduates have, and with the reporting of science.
    • Weiye Loh
       
      It's the media at fault again.
  • What seems to have emerged in recent decades is a change in the institutional setting that got science advancing spectacularly since the establishment of the Royal Society. Flaws in the system such as corporate funded research, pal-review instead of peer-review, publication bias, science entangled with policy advocacy, and suchlike, may be distorting the environment, making it less suitable for the production of good science, especially in some fields.
  • Remedies should exist, but they should evolve rather than being imposed on a reluctant sociological-economic science establishment driven by powerful motives such as professional advance or funding. After all, who or what would have the authority to impose those rules, other than the scientific establishment itself?
Weiye Loh
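Added for illustration: a toy simulation of two of the mechanisms discussed above, publication bias and regression to the mean, and of how together they can produce the decline effect. The effect size, noise level and publication threshold are invented; this is a sketch of the general idea, not a model of any particular literature.

    # Decline effect in miniature: journals publish only the flashy initial
    # results, and replications regress back toward the true (small) effect.
    import random
    import statistics

    random.seed(1)
    TRUE_EFFECT, NOISE, PUBLISH_ABOVE = 0.2, 0.5, 0.8   # invented parameters

    def run_study():
        # one noisy estimate of the same small underlying effect
        return random.gauss(TRUE_EFFECT, NOISE)

    first_wave = [run_study() for _ in range(1000)]
    published = [e for e in first_wave if e > PUBLISH_ABOVE]   # the filter
    replications = [run_study() for _ in published]            # fresh draws

    print(round(statistics.mean(published), 2))      # roughly 1.0, inflated
    print(round(statistics.mean(replications), 2))   # roughly 0.2, the "decline"

Nothing about the underlying effect changes between the first wave and the replications; the shrinkage comes entirely from the filter on what gets published.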

Rationally Speaking: Response to Jonathan Haidt's response, on the academy's liberal bias - 0 views

  • Dear Prof. Haidt, You understandably got upset by my harsh criticism of your recent claims about the mechanisms behind the alleged anti-conservative bias that apparently so permeates the modern academy. I find it amusing that you simply assumed I had not looked at your talk and was therefore speaking without reason. Yet, I have indeed looked at it (it is currently published at Edge, a non-peer reviewed webzine), and found that it simply doesn’t add much to the substance (such as it is) of Tierney’s summary.
  • Yes, you do acknowledge that there may be multiple reasons for the imbalance between the number of conservative and liberal leaning academics, but then you go on to characterize the academy, at least in your field, as a tribe having a serious identity issue, with no data whatsoever to back up your preferred subset of causal explanations for the purported problem.
  • your talk is simply an extended op-ed piece, which starts out with a summary of your findings about the different moral outlooks of conservatives and liberals (which I have criticized elsewhere on this blog), and then proceeds to build a flimsy case based on a couple of anecdotes and some badly flawed data.
  • ...4 more annotations...
  • For instance, slide 23 shows a Google search for “liberal social psychologist,” highlighting the fact that one gets a whopping 2,740 results (which, actually, by Google standards is puny; a search under my own name yields 145,000, and I ain’t no Lady Gaga). You then compared this search to one for “conservative social psychologist” and get only three entries.
  • First of all, if Google searches are the main tool of social psychology these days, I fear for the entire field. Second, I actually re-did your searches — at the prompting of one of my readers — and came up with quite different results. As the photo here shows, if you actually bother to scroll through the initial Google search for “liberal social psychologist” you will find that there are in fact only 24 results, to be compared to 10 (not 3) if you search for “conservative social psychologist.” Oops. From this scant data I would simply conclude that political orientation isn’t a big deal in social psychology.
  • Your talk continues with some pretty vigorous hand-waving: “We rely on our peers to find flaws in our arguments, but when there is essentially nobody out there to challenge liberal assumptions and interpretations of experimental findings, the peer review process breaks down, at least for work that is related to those sacred values.” Right, except that I would like to see a systematic survey of exactly how the lack of conservative peer review has affected the quality of academic publications. Oh, wait, it hasn’t, at least according to what you yourself say in the next sentence: “The great majority of work in social psychology is excellent, and is unaffected by these problems.” I wonder how you know this, and why — if true — you then think that there is a problem. Philosophers call this an inherent contradiction; it’s a common example of a bad argument.
  • Finally, let me get to your outrage at the fact that I have allegedly accused you of academic misconduct and lying. I have done no such thing, and you really ought (in the ethical sense) to be careful when throwing those words around. I have simply raised the logical possibility that you (and Tierney) have an agenda, a possibility based on reading several of the things both you and Tierney have written of late. As a psychologist, I’m sure you are aware that biases can be unconscious, and therefore need not imply that the person in question is lying or engaging in any form of purposeful misconduct. Or were you implying in your own talk that your colleagues’ bias was conscious? Because if so, you have just accused an entire profession of misconduct.
Weiye Loh

Read Aubrey McClendon's response to "misleading" New York Times article (1) - 0 views

  • Since the shale gas revolution and resulting confirmation of enormous domestic gas reserves, there has been a relatively small group of analysts and geologists who have doubted the future of shale gas.  Their doubts have become very convenient to the environmental activists I mentioned earlier. This particular NYT reporter has apparently sought out a few of the doubters to fashion together a negative view of the U.S. natural gas industry. We also believe certain media outlets, especially the once venerable NYT, are being manipulated by those whose environmental or economic interests are being threatened by abundant natural gas supplies. We have seen for example today an email from a leader of a group called the Environmental Working Group who claimed today’s articles as this NYT reporter’s "second great story" (the first one declaring that produced water disposal from shale gas wells was unsafe) and that “we've been working with him for over 8 months. Much more to come. . .”
  • this reporter’s claim of impending scarcity of natural gas supply contradicts the facts and the scientific extrapolation of those facts by the most sophisticated reservoir engineers and geoscientists in the world. Not just at Chesapeake, but by experts at many of the world’s leading energy companies that have made multi-billion-dollar, long-term investments in U.S. shale gas plays, with us and many other companies. Notable examples of these companies, besides the leading independents such as Chesapeake, Devon, Anadarko, EOG, EnCana, Talisman and others, include these leading global energy giants:  Exxon, Shell, BP, Chevron, Conoco, Statoil, BHP, Total, CNOOC, Marathon, BG, KNOC, Reliance, PetroChina, Mitsui, Mitsubishi and ENI, among others.  Is it really possible that all of these companies, with a combined market cap of almost $2 trillion, know less about shale gas than a NYT reporter, a few environmental activists and a handful of shale gas doubters?
  •  
    Administrator's Note: This email was sent to all Chesapeake employees from CEO Aubrey McClendon, in response to a Sunday New York Times piece by Ian Urbina entitled "Insiders Sound an Alarm Amid a Natural Gas Rush."

    FW: CHK's response to 6.26.11 NYT article on shale gas
    From: Aubrey McClendon
    Sent: Sunday, June 26, 2011 8:37 PM
    To: All Employees

    Dear CHK Employees: By now many of you may have read or heard about a story in today's New York Times (NYT) that questioned the productive capacity and economic quality of U.S. natural gas shale reserves, as well as energy reserve accounting practices used by E&P companies, including Chesapeake. The story is misleading, at best, and is the latest in a series of articles produced by this publication that obviously have an anti-industry bias. We know for a fact that today's NYT story is the handiwork of the same group of environmental activists who have been the driving force behind the NYT's ongoing series of negative articles about the use of fracking and its importance to the US natural gas supply growth revolution - which is changing the future of our nation for the better in multiple areas. It is not clear to me exactly what these environmental activists are seeking to offer as their alternative energy plan, but most that I have talked to continue to naively presume that our great country need only rely on wind and solar energy to meet our current and future energy needs. They always seem to forget that wind and solar produce less than 2% of America's electricity today and are completely non-economic without ongoing government and ratepayer subsidies.
Weiye Loh

Harvard professor spots Web search bias - Business - The Boston Globe - 0 views

  • Sweeney said she has no idea why Google searches seem to single out black-sounding names. There could be myriad issues at play, some associated with the software, some with the people searching Google. For example, the more often searchers click on a particular ad, the more frequently it is displayed subsequently. “Since we don’t know the reason for it,” she said, “it’s hard to say what you need to do.”
  • But Danny Sullivan, editor of SearchEngineLand.com, an online trade publication that tracks the Internet search and advertising business, said Sweeney’s research has stirred a tempest in a teapot. “It looks like this fairly isolated thing that involves one advertiser.” He also said that the results could be caused by black Google users clicking on those ads as much as white users. “It could be that black people themselves could be causing the stuff that causes the negative copy to be selected more,” said Sullivan. “If most of the searches for black names are done by black people . . . is that racially biased?”
  • On the other hand, Sullivan said Sweeney has uncovered a problem with online searching — the casual display of information that might put someone in a bad light. Rather than focusing on potential instances of racism, he said, search services such as Google might want to put more restrictions on displaying negative information about anyone, black or white.
Weiye Loh

Roger Pielke Jr.'s Blog: Science Impact - 0 views

  • The Guardian has a blog post up by three neuroscientists decrying the state of hype in the media related to their field, which is fueled in part by their colleagues seeking "impact." 
  • Anyone who has followed recent media reports that electrical brain stimulation "sparks bright ideas" or "unshackles the genius within" could be forgiven for believing that we stand on the frontier of a brave new world. As James Gallagher of the BBC put it, "Are we entering the era of the thinking cap – a device to supercharge our brains?" The answer, we would suggest, is a categorical no. Such speculations begin and end in the colourful realm of science fiction. But we are also in danger of entering the era of the "neuro-myth", where neuroscientists sensationalise and distort their own findings in the name of publicity. The tendency for scientists to over-egg the cake when dealing with the media is nothing new, but recent examples are striking in their disregard for accurate reporting to the public. We believe the media and academic community share a collective responsibility to prevent pseudoscience from masquerading as neuroscience.
  • They identify an ... unacceptable gulf between, on the one hand, the evidence-bound conclusions reached in peer-reviewed scientific journals, and on the other, the heavy spin applied by scientists to achieve publicity in the media. Are we as neuroscientists so unskilled at communicating with the public, or so low in our estimation of the public's intelligence, that we see no alternative but to mislead and exaggerate?
  • ...1 more annotation...
  • Somewhere down the line, achieving an impact in the media seems to have become the goal in itself, rather than what it should be: a way to inform and engage the public with clarity and objectivity, without bias or prejudice. Our obsession with impact is not one-sided. The craving of scientists for publicity is fuelled by a hurried and unquestioning media, an academic community that disproportionately rewards publication in "high impact" journals such as Nature, and by research councils that emphasise the importance of achieving "impact" while at the same time delivering funding cuts. Academics are now pushed to attend media training courses, instructed about "pathways to impact", required to include detailed "impact summaries" when applying for grant funding, and constantly reminded about the importance of media engagement to further their careers. Yet where in all of this strategising and careerism is it made clear why public engagement is important? Where is it emphasised that the most crucial consideration in our interactions with the media is that we are accurate, honest and open about the limitations of our research?
Weiye Loh

Skepticblog » A Creationist Challenge - 0 views

  • The commenter starts with some ad hominems, asserting that my post is biased and emotional. They provide no evidence or argument to support this assertion. And of course they don’t even attempt to counter any of the arguments I laid out. They then follow up with an argument from authority – he can link to a PhD creationist – so there.
  • The article that the commenter links to is by Henry M. Morris, founder for the Institute for Creation Research (ICR) – a young-earth creationist organization. Morris was (he died in 2006 following a stroke) a PhD – in civil engineering. This point is irrelevant to his actual arguments. I bring it up only to put the commenter’s argument from authority into perspective. No disrespect to engineers – but they are not biologists. They have no expertise relevant to the question of evolution – no more than my MD. So let’s stick to the arguments themselves.
  • The article by Morris is an overview of so-called Creation Science, of which Morris was a major architect. The arguments he presents are all old creationist canards, long deconstructed by scientists. In fact I address many of them in my original refutation. Creationists generally are not very original – they recycle old arguments endlessly, regardless of how many times they have been destroyed.
  • ...26 more annotations...
  • Morris also makes heavy use of the “taking a quote out of context” strategy favored by creationists. His quotes are often from secondary sources and are incomplete.
  • A more scholarly (i.e. intellectually honest) approach would be to cite actual evidence to support a point. If you are going to cite an authority, then make sure the quote is relevant, in context, and complete.
  • And even better, cite a number of sources to show that the opinion is representative. Rather we get single, partial, and often outdated quotes without context.
  • (nature is not, it turns out, cleanly divided into “kinds”, which have no operational definition). He also repeats this canard: Such variation is often called microevolution, and these minor horizontal (or downward) changes occur fairly often, but such changes are not true “vertical” evolution. This is the microevolution/macroevolution false dichotomy. It is only “often called” this by creationists – not by actual evolutionary scientists. There is no theoretical or empirical division between macro and micro evolution. There is just evolution, which can result in the full spectrum of change from minor tweaks to major changes.
  • Morris wonders why there are no “dats” – dog-cat transitional species. He misses the hierarchical nature of evolution. As evolution proceeds, and creatures develop a greater and greater evolutionary history behind them, they increasingly are committed to their body plan. This results in a nested hierarchy of groups – which is reflected in taxonomy (the naming scheme of living things).
  • once our distant ancestors developed the basic body plan of chordates, they were committed to that body plan. Subsequent evolution resulted in variations on that plan, each of which then developed further variations, etc. But evolution cannot go backward, undo evolutionary changes and then proceed down a different path. Once an evolutionary line has developed into a dog, evolution can produce variations on the dog, but it cannot go backwards and produce a cat.
  • Stephen J. Gould described this distinction as the difference between disparity and diversity. Disparity (the degree of morphological difference) actually decreases over evolutionary time, as lineages go extinct and the surviving lineages are committed to fewer and fewer basic body plans. Meanwhile, diversity (the number of variations on a body plan) within groups tends to increase over time.
  • the kind of evolutionary changes that were happening in the past, when species were relatively undifferentiated (compared to contemporary species) is indeed not happening today. Modern multi-cellular life has 600 million years of evolutionary history constraining its future evolution – which was not true of species at the base of the evolutionary tree. But modern species are indeed still evolving.
  • Here is a list of research documenting observed instances of speciation. The list is from 1995, and there are more recent examples to add to the list. Here are some more. And here is a good list with references of more recent cases.
  • Next Morris tries to convince the reader that there is no evidence for evolution in the past, focusing on the fossil record. He repeats the false claim (again, which I already dealt with) that there are no transitional fossils: Even those who believe in rapid evolution recognize that a considerable number of generations would be required for one distinct “kind” to evolve into another more complex kind. There ought, therefore, to be a considerable number of true transitional structures preserved in the fossils — after all, there are billions of non-transitional structures there! But (with the exception of a few very doubtful creatures such as the controversial feathered dinosaurs and the alleged walking whales), they are not there.
  • I deal with this question at length here, pointing out that there are numerous transitional fossils for the evolution of terrestrial vertebrates, mammals, whales, birds, turtles, and yes – humans from ape ancestors. There are many more examples, these are just some of my favorites.
  • Much of what follows (as you can see it takes far more space to correct the lies and distortions of Morris than it did to create them) is classic denialism – misinterpreting the state of the science, and confusing lack of information about the details of evolution with lack of confidence in the fact of evolution. Here are some examples – he quotes Niles Eldridge: “It is a simple ineluctable truth that virtually all members of a biota remain basically stable, with minor fluctuations, throughout their durations. . . .“ So how do evolutionists arrive at their evolutionary trees from fossils of organisms which didn’t change during their durations? Beware the “….” – that means that meaningful parts of the quote are being omitted. I happen to have the book (The Pattern of Evolution) from which Morris mined that particular quote. Here’s the rest of it: (Remember, by “biota” we mean the commonly preserved plants and animals of a particular geological interval, which occupy regions often as large as Roger Tory Peterson’s “eastern” region of North American birds.) And when these systems change – when the older species disappear, and new ones take their place – the change happens relatively abruptly and in lockstep fashion.”
  • Eldridge was one of the authors (with Gould) of punctuated equilibrium theory. This states that, if you look at the fossil record, what we see are species emerging, persisting with little change for a while, and then disappearing from the fossil record. They theorize that most species most of the time are at equilibrium with their environment, and so do not change much. But these periods of equilibrium are punctuated by disequilibrium – periods of change when species will have to migrate, evolve, or go extinct.
  • This does not mean that speciation does not take place. And if you look at the fossil record we see a pattern of descendant species emerging from ancestor species over time – in a nice evolutionary pattern. Morris gives a complete misrepresentation of Eldridge’s point – once again we see intellectual dishonesty in his methods of an astounding degree.
  • Regarding the atheism = religion comment, it reminds me of a great analogy that I first heard on twitter from Evil Eye. (paraphrase) “those that say atheism is a religion, is like saying ‘not collecting stamps’ is a hobby too.”
  • Morris next tackles the genetic evidence, writing: More often is the argument used that similar DNA structures in two different organisms proves common evolutionary ancestry. Neither argument is valid. There is no reason whatever why the Creator could not or would not use the same type of genetic code based on DNA for all His created life forms. This is evidence for intelligent design and creation, not evolution.
  • Here is an excellent summary of the multiple lines of molecular evidence for evolution. Basically, if we look at the sequence of DNA, the variations in trinucleotide codes for amino acids, and amino acids for proteins, and transposons within DNA we see a pattern that can only be explained by evolution (or a mischievous god who chose, for some reason, to make life look exactly as if it had evolved – a non-falsifiable notion).
  • The genetic code is essentially comprised of four letters (ACGT for DNA), and every triplet of three letters equates to a specific amino acid. There are 64 (4^3) possible three letter combinations, and 20 amino acids. A few combinations are used for housekeeping, like a code to indicate where a gene stops, but the rest code for amino acids. There are more combinations than amino acids, so most amino acids are coded for by multiple combinations. This means that a mutation that results in a one-letter change might alter from one code for a particular amino acid to another code for the same amino acid. This is called a silent mutation because it does not result in any change in the resulting protein.
  • It also means that there are very many possible codes for any individual protein. The question is – which codes out of the gazillions of possible codes do we find for each type of protein in different species? If each “kind” were created separately there would not need to be any relationship. Each kind could have its own variation, or they could all be identical if they were essentially copied (plus any mutations accruing since creation, which would be minimal). But if life evolved then we would expect that the exact sequence of DNA code would be similar in related species, but progressively different (through silent mutations) over evolutionary time. (A small code sketch after this item makes the codon arithmetic and the idea of a silent mutation concrete.)
  • This is precisely what we find – in every protein we have examined. This pattern is necessary if evolution were true. It cannot be explained by random chance (the probability is absurdly tiny – essentially zero). And it makes no sense from a creationist perspective. This same pattern (a branching hierarchy) emerges when we look at amino acid substitutions in proteins and other aspects of the genetic code.
  • Morris goes for the second law of thermodynamics again – in the exact way that I already addressed. He responds to scientists correctly pointing out that the Earth is an open system, by writing: This naive response to the entropy law is typical of evolutionary dissimulation. While it is true that local order can increase in an open system if certain conditions are met, the fact is that evolution does not meet those conditions. Simply saying that the earth is open to the energy from the sun says nothing about how that raw solar heat is converted into increased complexity in any system, open or closed. The fact is that the best known and most fundamental equation of thermodynamics says that the influx of heat into an open system will increase the entropy of that system, not decrease it. All known cases of decreased entropy (or increased organization) in open systems involve a guiding program of some sort and one or more energy conversion mechanisms.
  • Energy has to be transformed into a usable form in order to do the work necessary to decrease entropy. That’s right. That work is done by life. Plants take solar energy (again – I’m not sure what “raw solar heat” means) and convert it into food. That food fuels the processes of life, which include development and reproduction. Evolution emerges from those processes – therefore the conditions that Morris speaks of are met.
  • But Morris next makes a very confused argument: Evolution has neither of these. Mutations are not “organizing” mechanisms, but disorganizing (in accord with the second law). They are commonly harmful, sometimes neutral, but never beneficial (at least as far as observed mutations are concerned). Natural selection cannot generate order, but can only “sieve out” the disorganizing mutations presented to it, thereby conserving the existing order, but never generating new order.
  • The notion that evolution (as if it’s a thing) needs to use energy is hopelessly confused. Evolution is a process that emerges from the system of life – and life certainly can use solar energy to decrease its entropy, and by extension the entropy of the biosphere. Morris slips into what is often presented as an information argument.  (Yet again – already dealt with. The pattern here is that we are seeing a shuffling around of the same tired creationists arguments.) It is first not true that most mutations are harmful. Many are silent, and many of those that are not silent are not harmful. They may be neutral, they may be a mixed blessing, and their relative benefit vs harm is likely to be situational. They may be fatal. And they also may be simply beneficial.
  • Morris finishes with a long rambling argument that evolution is religion. Evolution is promoted by its practitioners as more than mere science. Evolution is promulgated as an ideology, a secular religion — a full-fledged alternative to Christianity, with meaning and morality . . . . Evolution is a religion. This was true of evolution in the beginning, and it is true of evolution still today. Morris ties evolution to atheism, which, he argues, makes it a religion. This assumes, of course, that atheism is a religion. That depends on how you define atheism and how you define religion – but it is mostly wrong. Atheism is a lack of belief in one particular supernatural claim – that does not qualify it as a religion.
  • But mutations are not “disorganizing” – that does not even make sense. It seems to be based on a purely creationist notion that species are in some privileged perfect state, and any mutation can only take them farther from that perfection. For those who actually understand biology, life is a kluge of compromises and variation. Mutations are mostly lateral moves from one chaotic state to another. They are not directional. But they do provide raw material, variation, for natural selection. Natural selection cannot generate variation, but it can select among that variation to provide differential survival. This is an old game played by creationists – mutations are not selective, and natural selection is not creative (does not increase variation). These are true but irrelevant, because mutations increase variation and information, and selection is a creative force that results in the differential survival of better adapted variation.
  •  
    One of my earlier posts on SkepticBlog was Ten Major Flaws in Evolution: A Refutation, published two years ago. Occasionally a creationist shows up to snipe at the post, like this one: "i read this and found it funny. It supposedly gives a scientific refutation, but it is full of more bias than fox news, and a lot of emotion as well. here's a scientific case by an actual scientists, you know, one with a ph. D, and he uses statements by some of your favorite evolutionary scientists to insist evolution doesn't exist. i challenge you to write a refutation on this one. http://www.icr.org/home/resources/resources_tracts_scientificcaseagainstevolution/" Challenge accepted.
Weiye Loh
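Added for illustration: since the annotations above lean on the arithmetic of the genetic code, here is a small sketch of that arithmetic. It uses only a fragment of the standard codon table, chosen to make the 64-codon count and the idea of a silent mutation concrete; it is not material from Morris or from the original post.

    # 4 DNA letters read in triplets: 4^3 = 64 possible codons for about 20
    # amino acids, so most amino acids have several synonymous codons.
    from itertools import product

    print(len(list(product("ACGT", repeat=3))))   # 64

    CODON_TABLE = {   # a small fragment of the standard genetic code
        "CCT": "Pro", "CCC": "Pro", "CCA": "Pro", "CCG": "Pro",
        "TTT": "Phe", "TTC": "Phe",
        "TGG": "Trp",
        "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
    }

    def is_silent(codon, mutated):
        # a point mutation is "silent" if both codons code for the same amino acid
        return CODON_TABLE[codon] == CODON_TABLE[mutated]

    print(is_silent("CCA", "CCG"))   # True: third-position change, still proline
    print(is_silent("TTT", "TTC"))   # True: still phenylalanine
    print(is_silent("TGG", "TGA"))   # False: tryptophan codon becomes a stop codon

Which synonymous codon a lineage happens to carry at a given position is exactly the kind of detail that, compared across species, yields the branching pattern the annotation describes.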

Asia Times Online :: Southeast Asia news and business from Indonesia, Philippines, Thai... - 0 views

  • rather than being forced to wait for parliament to meet to air their dissent, now opposition parties are able to post pre-emptively their criticisms online, shifting the time and space of Singapore's political debate
  • Singapore's People's Action Party (PAP)-dominated politics are increasingly being contested online and over social media like blogs, Facebook and Twitter. Pushed by the perceived pro-PAP bias of the mainstream media, Singapore's opposition parties are using various new media to communicate with voters and express dissenting views. Alternative news websites, including The Online Citizen and Temasek Review, have won strong followings by presenting more opposition views in their news mix.
  • Despite its democratic veneer, Singapore rates poorly in global press freedom rankings due to a deeply entrenched culture of self-censorship and a pro-state bias in the mainstream media. Reporters Without Borders, a France-based press freedom advocacy group, recently ranked Singapore 136th in its global press freedom rankings, scoring below repressive countries like Iraq and Zimbabwe. The country's main media publishing house, Singapore Press Holdings, is owned by the state and its board of directors is made up largely of PAP members or other government-linked executives. Senior newspaper editors, including at the Straits Times, must be vetted and approved by the PAP-led government.
  • ...3 more annotations...
  • The local papers have a long record of publicly endorsing the PAP-led government's position, according to Tan Tarn How, a research fellow at the Institute of Policy Studies (IPS) and himself a former journalist. In his research paper "Singapore's print media policy - a national success?" published last year he quoted Leslie Fong, a former editor of the Straits Times, saying that the press "should resist the temptation to arrogate itself the role of a watchdog, or permanent critic, of the government of the day".
  • With regularly briefed and supportive editors, there is no need for pre-publication censorship, according to Tan. When the editors are perceived to get things "wrong", the government frequently takes to task, either publicly or privately, the newspaper's editors or individual journalists, he said.
  • The country's main newspaper, the Straits Times, has consistently stood by its editorial decision-making. Editor Han Fook Kwang said last year: "Our circulation is 380,000 and we have a readership of 1.4 million - these are people who buy the paper every day. We're aware people say we're a government mouthpiece or that we are biased but the test is if our readers believe in the paper and continue to buy it."
Weiye Loh

Do Americans trust the news media? (OneNewsNow.com) - 1 views

  • A newly released poll by Sacred Heart University. The SHU Polling Institute recently conducted its third survey on "Trust and Satisfaction with the National News Media." It's a national poll intended to answer the question of whether Americans trust the news media. In a nutshell, the answer is a resounding "No!"
  • Pollsters asked which television news organizations people turned to most frequently.  CBS News didn't even make the top five!  Who did?  Fox News was first by a wide margin of 28.4 percent.  CNN was second, chosen by 14.9 percent.  NBC News, ABC News, and "local news" followed, while CBS News lagged way behind with only 7.4 percent.
  • On the question of media bias, a whopping 83.6 percent agree that national news media organizations are "very or somewhat biased." 
  • ...3 more annotations...
  • Which media outlet is most trusted to be accurate today?  Again, Fox News took first place with a healthy margin of 30 percent.  CNN followed with 19.5 percent, NBC News with 7.5 percent, and ABC News with 7.5 percent.
  • we see a strong degree of polarization and political partisanship in the country in general, we see a similar trend in the media."  That probably explains why Fox News is also considered the least trusted, according to the SHU poll.  Viewers seem to either love or hate Fox News.
    • Weiye Loh
       
      So is Fox News the most trusted or the least trusted according to the SHU poll? Or both? And if it's both, how exactly is the survey carried out? Aren't survey options supposed to be mutually exclusive and exhaustive? Maybe SHU has no course on research methods.
  • only 24.3 percent of the SHU respondents say they believe "all or most news media reporting."  They also overwhelmingly (86.6 percent) believe "that the news media have their own political and public policy positions and attempt to influence public opinion." 
    • Weiye Loh
       
      They believe that media attempts to influence. But they also believe that media is biased. Logically then, they don't trust and believe the media. Does that mean that media has no influence? If so, why are they worried then? Third-person perception? Or they simply believe that they are holier-than-thou? Are they really more objective? What is objectivity anyway if not a social construct?
  •  
    One biased news source reporting on the bias of other news sources. Shows that (self-)reflexivity is key in reading.
Weiye Loh

Skepticblog » Global Warming Skeptic Changes His Tune - by Doing the Science ... - 0 views

  • To the global warming deniers, Muller had been an important scientific figure with good credentials who had expressed doubt about the temperature data used to track the last few decades of global warming. Muller was influenced by Anthony Watts, a former TV weatherman (not a trained climate scientist) and blogger who has argued that the data set is mostly from large cities, where the “urban heat island” effect might bias the overall pool of worldwide temperature data. Climate scientists have pointed out that they have accounted for this possible effect already, but Watts and Muller were unconvinced. With $150,000 (25% of their funding) from the Koch brothers (the nation’s largest supporters of climate denial research), as well as the Getty Foundation (their wealth largely based on oil money) and other funding sources, Muller set out to reanalyze all the temperature data by setting up the Berkeley Earth Surface Temperature Project.
  • Although only 2% of the data were analyzed by last month, the Republican climate deniers in Congress called him to testify in their March 31 hearing to attack global warming science, expecting him to give them scientific data supporting their biases. To their dismay, Muller behaved like a real scientist and not an ideologue—he followed his data and told them the truth, not what they wanted to hear. Muller pointed out that his analysis of the data set almost exactly tracked what the National Oceanographic and Atmospheric Administration (NOAA), the Goddard Institute of Space Science (GISS), and the Hadley Climate Research Unit at the University of East Anglia in the UK had already published (see figure).
  • Muller testified before the House Committee that: The Berkeley Earth Surface Temperature project was created to make the best possible estimate of global temperature change using as complete a record of measurements as possible and by applying novel methods for the estimation and elimination of systematic biases. We see a global warming trend that is very similar to that previously reported by the other groups. The world temperature data has sufficient integrity to be used to determine global temperature trends. Despite potential biases in the data, methods of analysis can be used to reduce bias effects well enough to enable us to measure long-term Earth temperature changes. Data integrity is adequate. Based on our initial work at Berkeley Earth, I believe that some of the most worrisome biases are less of a problem than I had previously thought.
  • ...4 more annotations...
  • The right-wing ideologues were sorely disappointed, and reacted viciously in the political sphere by attacking their own scientist, but Muller’s scientific integrity overcame any biases he might have harbored at the beginning. He “called ‘em as he saw ‘em” and told truth to power.
  • it speaks well of the scientific process when a prominent skeptic like Muller does his job properly and admits that his original biases were wrong. As reported in the Los Angeles Times : Ken Caldeira, an atmospheric scientist at the Carnegie Institution for Science, which contributed some funding to the Berkeley effort, said Muller’s statement to Congress was “honorable” in recognizing that “previous temperature reconstructions basically got it right…. Willingness to revise views in the face of empirical data is the hallmark of the good scientific process.”
  • This is the essence of the scientific method at its best. There may be biases in our perceptions, and we may want to find data that fits our preconceptions about the world, but if science is done properly, we get a real answer, often one we did not expect or didn’t want to hear. That’s the true test of when science is giving us a reality check: when it tells us “an inconvenient truth”, something we do not like, but is inescapable if one follows the scientific method and analyzes the data honestly.
  • Sit down before fact as a little child, be prepared to give up every preconceived notion, follow humbly wherever and to whatever abysses nature leads, or you shall learn nothing.
Weiye Loh

The Science of Why We Don't Believe Science | Mother Jones - 0 views

  • "A MAN WITH A CONVICTION is a hard man to change. Tell him you disagree and he turns away. Show him facts or figures and he questions your sources. Appeal to logic and he fails to see your point." So wrote the celebrated Stanford University psychologist Leon Festinger (PDF)
  • How would people so emotionally invested in a belief system react, now that it had been soundly refuted? At first, the group struggled for an explanation. But then rationalization set in. A new message arrived, announcing that they'd all been spared at the last minute. Festinger summarized the extraterrestrials' new pronouncement: "The little group, sitting all night long, had spread so much light that God had saved the world from destruction." Their willingness to believe in the prophecy had saved Earth from the prophecy!
  • This tendency toward so-called "motivated reasoning" helps explain why we find groups so polarized over matters where the evidence is so unequivocal: climate change, vaccines, "death panels," the birthplace and religion of the president (PDF), and much else. It would seem that expecting people to be convinced by the facts flies in the face of, you know, the facts.
  • ...4 more annotations...
  • The theory of motivated reasoning builds on a key insight of modern neuroscience (PDF): Reasoning is actually suffused with emotion (or what researchers often call "affect"). Not only are the two inseparable, but our positive or negative feelings about people, things, and ideas arise much more rapidly than our conscious thoughts, in a matter of milliseconds—fast enough to detect with an EEG device, but long before we're aware of it. That shouldn't be surprising: Evolution required us to react very quickly to stimuli in our environment. It's a "basic human survival skill," explains political scientist Arthur Lupia of the University of Michigan. We push threatening information away; we pull friendly information close. We apply fight-or-flight reflexes not only to predators, but to data itself.
  • We're not driven only by emotions, of course—we also reason, deliberate. But reasoning comes later, works slower—and even then, it doesn't take place in an emotional vacuum. Rather, our quick-fire emotions can set us on a course of thinking that's highly biased, especially on topics we care a great deal about.
  • Consider a person who has heard about a scientific discovery that deeply challenges her belief in divine creation—a new hominid, say, that confirms our evolutionary origins. What happens next, explains political scientist Charles Taber of Stony Brook University, is a subconscious negative response to the new information—and that response, in turn, guides the type of memories and associations formed in the conscious mind. "They retrieve thoughts that are consistent with their previous beliefs," says Taber, "and that will lead them to build an argument and challenge what they're hearing."
  • when we think we're reasoning, we may instead be rationalizing. Or to use an analogy offered by University of Virginia psychologist Jonathan Haidt: We may think we're being scientists, but we're actually being lawyers (PDF). Our "reasoning" is a means to a predetermined end—winning our "case"—and is shot through with biases. They include "confirmation bias," in which we give greater heed to evidence and arguments that bolster our beliefs, and "disconfirmation bias," in which we expend disproportionate energy trying to debunk or refute views and arguments that we find uncongenial.
Weiye Loh

Effect of alcohol on risk of coronary heart diseas... [Vasc Health Risk Manag. 2006] - ... - 0 views

  • Studies of the effects of alcohol consumption on health outcomes should recognise the methodological biases they are likely to face, and design, analyse and interpret their studies accordingly. While regular moderate alcohol consumption during middle-age probably does reduce vascular risk, care should be taken when making general recommendations about safe levels of alcohol intake. In particular, it is likely that any promotion of alcohol for health reasons would do substantially more harm than good.
  • The consistency in the vascular benefit associated with moderate drinking (compared with non-drinking) observed across different studies, together with the existence of credible biological pathways, strongly suggests that at least some of this benefit is real.
  • However, because of biases introduced by: choice of reference categories; reverse causality bias; variations in alcohol intake over time; and confounding, some of it is likely to be an artefact. For heavy drinking, different study biases have the potential to act in opposing directions, and as such, the true effects of heavy drinking on vascular risk are uncertain. However, because of the known harmful effects of heavy drinking on non-vascular mortality, the problem is an academic one. (A toy simulation after this item shows how reverse causality alone can manufacture an apparent protective effect.)
  •  
    Studies of the effects of alcohol consumption on health outcomes should recognise the methodological biases they are likely to face, and design, analyse and interpret their studies accordingly. While regular moderate alcohol consumption during middle-age probably does reduce vascular risk, care should be taken when making general recommendations about safe levels of alcohol intake.
Weiye Loh
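Added for illustration: a toy simulation of one of the biases named in the abstract above, reverse causality combined with the choice of reference category. People who are already ill tend to stop drinking, so the non-drinking reference group looks sicker even when alcohol itself does nothing. All of the probabilities below are invented.

    # Alcohol has no causal effect in this model; the apparent "protection"
    # among drinkers comes entirely from who ends up in the non-drinking group.
    import random
    random.seed(0)

    def person():
        ill = random.random() < 0.20                        # prior illness
        drinks = random.random() < (0.20 if ill else 0.60)  # the ill tend to abstain
        dies = random.random() < (0.30 if ill else 0.05)    # mortality tracks illness only
        return drinks, dies

    people = [person() for _ in range(100_000)]
    drinkers = [died for drinks, died in people if drinks]
    abstainers = [died for drinks, died in people if not drinks]

    print(round(sum(drinkers) / len(drinkers), 3))       # about 0.07
    print(round(sum(abstainers) / len(abstainers), 3))   # about 0.13

The comparison looks like a benefit of moderate drinking, yet the model contains no such benefit at all; this is the kind of artefact the authors warn about.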

Meet Science: What is "peer review"? - Boing Boing - 0 views

  • Scientists do complain about peer review. But let me set one thing straight: The biggest complaints scientists have about peer review are not that it stifles unpopular ideas. You've heard this truthy factoid from countless climate-change deniers, and purveyors of quack medicine. And peer review is a convenient scapegoat for their conspiracy theories. There's just enough truth to make the claims sound plausible.
  • Peer review is flawed. Peer review can be biased. In fact, really new, unpopular ideas might well have a hard time getting published in the biggest journals right at first. You saw an example of that in my interview with sociologist Harry Collins. But those sorts of findings will often be published by smaller, more obscure journals. And, if a scientist keeps finding more evidence to support her claims, and keeps submitting her work to peer review, more often than not she's going to eventually convince people that she's right. Plenty of scientists, including Harry Collins, have seen their once-shunned ideas published widely.
  • So what do scientists complain about? This shouldn't be too much of a surprise. It's the lack of training, the lack of feedback, the time constraints, and the fact that, the more specific your research gets, the fewer people there are with the expertise to accurately and thoroughly review your work.
  • ...5 more annotations...
  • Scientists are frustrated that most journals don't like to publish research that is solid, but not ground-breaking. They're frustrated that most journals don't like to publish studies where the scientist's hypothesis turned out to be wrong.
  • Some scientists would prefer that peer review not be anonymous—though plenty of others like that feature. Journals like the British Medical Journal have started requiring reviewers to sign their comments, and have produced evidence that this practice doesn't diminish the quality of the reviews.
  • There are also scientists who want to see more crowd-sourced, post-publication review of research papers. Because peer review is flawed, they say, it would be helpful to have centralized places where scientists can go to find critiques of papers, written by scientists other than the official peer-reviewers. Maybe the crowd can catch things the reviewers miss. We certainly saw that happen earlier this year, when microbiologist Rosie Redfield took a high-profile peer-reviewed paper about arsenic-based life to task on her blog. The website Faculty of 1000 is attempting to do something like this. You can go to that site, look up a previously published peer-reviewed paper, and see what other scientists are saying about it. And the Astrophysics Archive has been doing this same basic thing for years.
  • you shouldn't canonize everything a peer-reviewed journal article says just because it is a peer-reviewed journal article.
  • at the same time, being peer reviewed is a sign that the paper's author has done some level of due diligence in their work. Peer review is flawed, but it has value. There are improvements that could be made. But, like the old joke about democracy, peer review is the worst possible system except for every other system we've ever come up with.
  •  
    Being peer reviewed doesn't mean your results are accurate. Not being peer reviewed doesn't mean you're a crank. But the fact that peer review exists does weed out a lot of cranks, simply by saying, "There is a standard." Journals that don't have peer review do tend to be ones with an obvious agenda. White papers, which are not peer reviewed, do tend to contain more bias and self-promotion than peer-reviewed journal articles.
Weiye Loh

Roger Pielke Jr.'s Blog: IPCC and COI: Flashback 2004 - 0 views

  • In this case the NGOs and other groups are environmental and humanitarian organizations that have put together a report (in PDF) on what they see as needed and unnecessary policy actions related to climate change. It is a nice glossy report with findings and recommendations such as: limiting global temperature rise to 2 degrees Celsius (p. 4); extracting the World Bank from fossil fuels (p. 15); and opposing the inclusion of carbon sinks in the [Kyoto] Protocol (p. 22).
  • It is troubling that the Chair of the IPCC would lend his name and organizational affiliation to a set of groups with members engaged actively in political advocacy on climate change. Even if Dr. Pachauri feels strongly about the merit of the political agenda proposed by these groups, at a minimum his endorsement creates a potential perception that the IPCC has an unstated political agenda. This is compounded by the fact that the report Dr. Pachauri tacitly endorses contains statements that are scientifically at odds with those of the IPCC.
  • Perhaps most troubling is that by endorsing this group's agenda he has opened the door for those who would seek to discredit the IPCC by alleging exactly such a bias. (And don't be surprised to see such statements forthcoming.) If the IPCC's role is indeed to act as an honest broker, then it would seem to make sense that its leadership ought not blur that role by endorsing, tacitly or otherwise, the agendas of particular groups. There are plenty of appropriate places for political advocacy on climate change, but the IPCC does not seem to me to be among those places.
  • Organized by the New Economics Foundation and the Working Group on Climate and Development, the report (in PDF) is actually pretty good and contains much valuable information on climate change and development (that is, once you get past the hype of the press release and its lack of precision in disaggregating climate and vulnerability as sources of climate-related impacts). The participating organizations have done a nice job integrating considerations of climate change and development, a perspective that is certainly needed. More generally, the IPCC suffers because it no longer considers "policy options" under its mandate. Since its First Assessment Report, when it did consider policy options, the IPCC has eschewed responsibility for developing and evaluating a wide range of possible policy options on climate change. By deciding to keep policy outside of its mandate since 1992, the IPCC, ironically, leaves itself more open to charges of political bias. It is time for the IPCC to bring policy back in, both because we need new and innovative options on climate and because the IPCC has great potential to serve as an honest broker. But until it does, its leadership would be well served to avoid either the perception or the reality of endorsing particular political perspectives.
  •  
    Consider the following imaginary scenario. NGOs and a few other representatives of the oil and gas industry decide to band together to produce a report on what they see as needed and unnecessary policy actions related to climate change. They put together a nice glossy report with findings and recommendations such as: coal is the fuel of the future, so we must mine more; CO2 regulations are too costly; and climate change will be good for agriculture. In addition, the report contains some questionable scientific statements and associations. Imagine further that the report contains a preface authored by a prominent scientist who, though unpaid for his work, lends his name and credibility to the report. How might that scientist be viewed by the larger community? Answers that come to mind include: "a tool of industry," "discredited," "biased," "political advocate." It is likely that, in such a scenario, the connection of the scientist to the political advocacy efforts of the oil and gas industry would provide considerable grist for opponents of that industry, and specifically a basis for highlighting the appearance or reality of a compromised position on the part of the scientist. Fair enough?