
New Media Ethics 2009 course / Group items tagged: Science Communication


Weiye Loh

Adventures in Flay-land: Scepticism versus Denialism - Delingpole Part II

  • wrote a piece about James Delingpole's unfortunate appearance on the BBC programme Horizon on Monday. In that piece I referred to one of his own Telegraph articles in which he criticizes renowned sceptic Dr Ben Goldacre for betraying the principles of scepticism in his stance on the climate change debate. That article turns out to be rather instructive, as it highlights perfectly the difference between real scepticism and the false scepticism commonly described as denialism.
  • It appears that James has tremendous respect for Ben Goldacre, who is a qualified medical doctor, has written a best-selling book about science scepticism called Bad Science, and continues to write a popular Guardian science column. Here's what Delingpole has to say about Dr Goldacre: Many of Goldacre’s campaigns I support. I like and admire what he does. But where I don’t respect him one jot is in his views on ‘Climate Change,’ for they jar so very obviously with his supposed stance of determined scepticism in the face of establishment lies.
  • Scepticism is not some sort of rebellion against the establishment as Delingpole claims. It is not in itself an ideology. It is merely an approach to evaluating new information. There are varying definitions of scepticism, but Goldacre's variety goes like this: A sceptic does not support or promote any new theory until it is proven to his or her satisfaction that the new theory is the best available. Evidence is examined and accepted or discarded depending on its persuasiveness and reliability. Sceptics like Ben Goldacre have a deep appreciation for the scientific method of testing a hypothesis through experimentation and are generally happy to change their minds when the evidence supports the opposing view. Sceptics are not true believers, but they search for the truth. Far from challenging the established scientific consensus, Goldacre in Bad Science typically defends the scientific consensus against alternative medical views that fall back on untestable positions. In science the consensus is sometimes proven wrong, and while this process is imperfect it eventually results in the old consensus being replaced with a new one.
  • So the question becomes "what is denialism?" Denialism is a mindset that chooses to deny reality in order to avoid an uncomfortable truth. Denialism creates a false sense of truth through the subjective selection of evidence (cherry picking). Unhelpful evidence is rejected and excuses are made, while supporting evidence is accepted uncritically - its meaning and importance exaggerated. It is a common feature of denialism to claim the existence of some sort of powerful conspiracy to suppress the truth. Rejection by the mainstream of some piece of evidence supporting the denialist view, no matter how flawed, is taken as further proof of the supposed conspiracy. In this way the denialist always has a fallback position.
  • Delingpole makes the following claim: Whether Goldacre chooses to ignore it or not, there are many, many hugely talented, intelligent men and women out there – from mining engineer turned Hockey-Stick-breaker Steve McIntyre and economist Ross McKitrick to bloggers Donna LaFramboise and Jo Nova to physicist Richard Lindzen….and I really could go on and on – who have amassed a body of hugely powerful evidence to show that the AGW meme which has spread like a virus around the world these last 20 years is seriously flawed.
  • So he mentions a bunch of people who are intelligent and talented and have amassed evidence to the effect that the consensus of AGW (Anthropogenic Global Warming) is a myth. Should I take his word for it? No. I am a sceptic. I will examine the evidence and the people behind it.
  • MM claims that global temperatures are not accelerating. These claims have, however, been roundly disproved, as explained here. It is worth noting at this point that neither man is a climate scientist: McKitrick is an economist and McIntyre is a mining-industry policy analyst. It is clear from the very detailed rebuttal article that McIntyre and McKitrick have no qualifications to critique the earlier paper, and that they betray fundamental misunderstandings of the methodologies employed in that study.
  • This Wikipedia article explains in better layman's terms how the MM claims are faulty.
  • It is difficult for me to find out much about blogger Donna LaFramboise. As far as I can see, she runs her own blog at http://nofrakkingconsensus.wordpress.com and is the founder of another site at http://www.noconsensus.org/. It's not very clear to me what her credentials are.
  • She seems to be a critic of the so-called climate bible, a comprehensive report by the UN Intergovernmental Panel on Climate Change (IPCC).
  • I am familiar with some of the criticisms of this panel. Working Group 2 famously overstated the estimated rate of disappearance of the Himalayan glacier in 2007 and was forced to admit the error. Working Group 2 is a panel of biologists and sociologists whose job is to evaluate the impact of climate change. These people are not climate scientists. Their report takes for granted the scientific basis of climate change, which has been delivered by Working Group 1 (the climate scientists). The science revealed by Working Group 1 is regarded as sound (of course this is just a conspiracy, right?). At any rate, I don't know why I should pay attention to this blogger. Anyone can write a blog and anyone with money can own a domain. She may be intelligent, but I don't know anything about her, and with all the millions of blogs out there I'm not convinced hers is of any special significance.
  • Richard Lindzen. Okay, there's information about this guy. He has a wiki page, which is more than I can say for the previous two. He is an atmospheric physicist and Professor of Meteorology at MIT.
  • According to Wikipedia, it would seem that Lindzen is well respected in his field and represents the 3% of the climate science community who disagree with the 97% consensus.
  • The second to last paragraph of Delingpole's article asks this: If Goldacre really wants to stick his neck out, why doesn’t he try arguing against a rich, powerful, bullying Climate-Change establishment which includes all three British main political parties, the National Academy of Sciences, the Royal Society, the Prince of Wales, the Prime Minister, the President of the USA, the EU, the UN, most schools and universities, the BBC, most of the print media, the Australian Government, the New Zealand Government, CNBC, ABC, the New York Times, Goldman Sachs, Deutsche Bank, most of the rest of the City, the wind farm industry, all the Big Oil companies, any number of rich charitable foundations, the Church of England and so on? I hope Ben won't mind if I take this one for him (first of all, Big Oil companies? Are you serious?). The answer is a question, and the question is "Where is your evidence?"
Weiye Loh

Roger Pielke Jr.'s Blog: Science Impact

  • The Guardian has a blog post up by three neuroscientists decrying the state of hype in the media related to their field, which is fueled in part by their colleagues seeking "impact." 
  • Anyone who has followed recent media reports that electrical brain stimulation "sparks bright ideas" or "unshackles the genius within" could be forgiven for believing that we stand on the frontier of a brave new world. As James Gallagher of the BBC put it, "Are we entering the era of the thinking cap – a device to supercharge our brains?" The answer, we would suggest, is a categorical no. Such speculations begin and end in the colourful realm of science fiction. But we are also in danger of entering the era of the "neuro-myth", where neuroscientists sensationalise and distort their own findings in the name of publicity. The tendency for scientists to over-egg the cake when dealing with the media is nothing new, but recent examples are striking in their disregard for accurate reporting to the public. We believe the media and academic community share a collective responsibility to prevent pseudoscience from masquerading as neuroscience.
  • They identify an ... unacceptable gulf between, on the one hand, the evidence-bound conclusions reached in peer-reviewed scientific journals, and on the other, the heavy spin applied by scientists to achieve publicity in the media. Are we as neuroscientists so unskilled at communicating with the public, or so low in our estimation of the public's intelligence, that we see no alternative but to mislead and exaggerate?
  • Somewhere down the line, achieving an impact in the media seems to have become the goal in itself, rather than what it should be: a way to inform and engage the public with clarity and objectivity, without bias or prejudice. Our obsession with impact is not one-sided. The craving of scientists for publicity is fuelled by a hurried and unquestioning media, an academic community that disproportionately rewards publication in "high impact" journals such as Nature, and by research councils that emphasise the importance of achieving "impact" while at the same time delivering funding cuts. Academics are now pushed to attend media training courses, instructed about "pathways to impact", required to include detailed "impact summaries" when applying for grant funding, and constantly reminded about the importance of media engagement to further their careers. Yet where in all of this strategising and careerism is it made clear why public engagement is important? Where is it emphasised that the most crucial consideration in our interactions with the media is that we are accurate, honest and open about the limitations of our research?
Weiye Loh

nanopolitan: "Lies, Damned Lies, and Medical Science"

  • That's the title of The Atlantic profile of Dr. John Ioannidis who "has spent his career challenging his peers by exposing their bad science." His 2005 paper in PLoS Medicine was on why most published research findings are false.
  • Ioannidis anticipated that the community might shrug off his findings: sure, a lot of dubious research makes it into journals, but we researchers and physicians know to ignore it and focus on the good stuff, so what’s the big deal? The other paper headed off that claim.
  • He zoomed in on 49 of the most highly regarded research findings in medicine over the previous 13 years, as judged by the science community’s two standard measures: the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals.
  • Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable. That article was published in the Journal of the American Medical Association.
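The arithmetic behind those percentages is easy to sanity-check; here is a minimal sketch using only the counts quoted in the passage above (the variable names are my own labels, not from the paper):

```python
# Counts from the Atlantic's summary of Ioannidis's JAMA analysis.
highly_cited = 49            # most highly cited clinical research articles examined
claimed_effective = 45       # of those, claimed an effective intervention
retested = 34                # claims that had since been retested
refuted_or_exaggerated = 14  # retested claims shown wrong or significantly exaggerated

share_of_retested = refuted_or_exaggerated / retested
print(f"{share_of_retested:.0%} of retested claims failed")  # 41%, as quoted

# "between a third and a half of the most acclaimed research"
assert 1 / 3 < share_of_retested < 1 / 2
```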
  • The Atlantic profile, by David Freedman, also has quite a bit on the sociology of research in medical science. Here are a few quotes:
  • “Even when the evidence shows that a particular research idea is wrong, if you have thousands of scientists who have invested their careers in it, they’ll continue to publish papers on it,” he says. “It’s like an epidemic, in the sense that they’re infected with these wrong ideas, and they’re spreading it to other researchers through journals.”
  • the peer-review process often pressures researchers to shy away from striking out in genuinely new directions, and instead to build on the findings of their colleagues (that is, their potential reviewers) in ways that only seem like breakthroughs—as with the exciting-sounding gene linkages (autism genes identified!) and nutritional findings (olive oil lowers blood pressure!) that are really just dubious and conflicting variations on a theme.
  • The ultimate protection against research error and bias is supposed to come from the way scientists constantly retest each other’s results—except they don’t. Only the most prominent findings are likely to be put to the test, because there’s likely to be publication payoff in firming up the proof, or contradicting it.
  • Doctors may notice that their patients don’t seem to fare as well with certain treatments as the literature would lead them to expect, but the field is appropriately conditioned to subjugate such anecdotal evidence to study findings.
  • [B]eing wrong in science is fine, and even necessary—as long as scientists recognize that they blew it, report their mistake openly instead of disguising it as a success, and then move on to the next thing, until they come up with the very occasional genuine breakthrough. But as long as careers remain contingent on producing a stream of research that’s dressed up to seem more right than it is, scientists will keep delivering exactly that.
Weiye Loh

Research integrity: Sabotage! : Nature News

  • University of Michigan in Ann Arbor
  • Vipul Bhrigu, a former postdoc at the university's Comprehensive Cancer Center, wears a dark-blue three-buttoned suit and a pinched expression as he cups his pregnant wife's hand in both of his. When Pollard Hines calls Bhrigu's case to order, she has stern words for him: "I was inclined to send you to jail when I came out here this morning."
  • Bhrigu, over the course of several months at Michigan, had meticulously and systematically sabotaged the work of Heather Ames, a graduate student in his lab, by tampering with her experiments and poisoning her cell-culture media. Captured on hidden camera, Bhrigu confessed to university police in April and pleaded guilty to malicious destruction of personal property, a misdemeanour that apparently usually involves cars: in the spaces for make and model on the police report, the arresting officer wrote "lab research" and "cells". Bhrigu has said on multiple occasions that he was compelled by "internal pressure" and had hoped to slow down Ames's work. Speaking earlier this month, he was contrite. "It was a complete lack of moral judgement on my part," he said.
  • Bhrigu's actions are surprising, but probably not unique. There are few firm numbers showing the prevalence of research sabotage, but conversations with graduate students, postdocs and research-misconduct experts suggest that such misdeeds occur elsewhere, and that most go unreported or unpoliced. In this case, the episode set back research, wasted potentially tens of thousands of dollars and terrorized a young student. More broadly, acts such as Bhrigu's — along with more subtle actions to hold back or derail colleagues' work — have a toxic effect on science and scientists. They are an affront to the implicit trust between scientists that is necessary for research endeavours to exist and thrive.
  • Despite all this, there is little to prevent perpetrators re-entering science.
  • federal bodies that provide research funding have limited ability and inclination to take action in sabotage cases because they aren't interpreted as fitting the federal definition of research misconduct, which is limited to plagiarism, fabrication and falsification of research data.
  • In Bhrigu's case, administrators at the University of Michigan worked with police to investigate, thanks in part to the persistence of Ames and her supervisor, Theo Ross. "The question is, how many universities have such procedures in place that scientists can go and get that kind of support?" says Christine Boesz, former inspector-general for the US National Science Foundation in Arlington, Virginia, and now a consultant on scientific accountability. "Most universities I was familiar with would not necessarily be so responsive."
  • Some labs are known to be hyper-competitive, with principal investigators pitting postdocs against each other. But Ross's lab is a small, collegial place. At the time that Ames was noticing problems, it housed just one other graduate student, a few undergraduates doing projects, and the lab manager, Katherine Oravecz-Wilson, a nine-year veteran of the lab whom Ross calls her "eyes and ears". And then there was Bhrigu, an amiable postdoc who had joined the lab in April 2009.
  • Some people whom Ross consulted with tried to convince her that Ames was hitting a rough patch in her work and looking for someone else to blame. But Ames was persistent, so Ross took the matter to the university's office of regulatory affairs, which advises on a wide variety of rules and regulations pertaining to research and clinical care. Ray Hutchinson, associate dean of the office, and Patricia Ward, its director, had never dealt with anything like it before. After several meetings and two more instances of alcohol in the media, Ward contacted the department of public safety — the university's police force — on 9 March. They immediately launched an investigation — into Ames herself. She endured two interrogations and a lie-detector test before investigators decided to look elsewhere.
  • At 4:00 a.m. on Sunday 18 April, officers installed two cameras in the lab: one in the cold room where Ames's blots had been contaminated, and one above the refrigerator where she stored her media. Ames came in that day and worked until 5:00 p.m. On Monday morning at around 10:15, she found that her medium had been spiked again. When Ross reviewed the tapes of the intervening hours with Richard Zavala, the officer assigned to the case, she says that her heart sank. Bhrigu entered the lab at 9:00 a.m. on Monday and pulled out the culture media that he would use for the day. He then returned to the fridge with a spray bottle of ethanol, usually used to sterilize lab benches. With his back to the camera, he rummaged through the fridge for 46 seconds. Ross couldn't be sure what he was doing, but it didn't look good. Zavala escorted Bhrigu to the campus police department for questioning. When he told Bhrigu about the cameras in the lab, the postdoc asked for a drink of water and then confessed. He said that he had been sabotaging Ames's work since February. (He denies involvement in the December and January incidents.)
  • Misbehaviour in science is nothing new — but its frequency is difficult to measure. Daniele Fanelli at the University of Edinburgh, UK, who studies research misconduct, says that overtly malicious offences such as Bhrigu's are probably infrequent, but other forms of indecency and sabotage are likely to be more common. "A lot more would be the kind of thing you couldn't capture on camera," he says. Vindictive peer review, dishonest reference letters and withholding key aspects of protocols from colleagues or competitors can do just as much to derail a career or a research project as vandalizing experiments. These are just a few of the questionable practices that seem quite widespread in science, but are not technically considered misconduct. In a meta-analysis of misconduct surveys, published last year (D. Fanelli PLoS ONE 4, e5738; 2009), Fanelli found that up to one-third of scientists admit to offences that fall into this grey area, and up to 70% say that they have observed them.
  • Some say that the structure of the scientific enterprise is to blame. The big rewards — tenured positions, grants, papers in stellar journals — are won through competition. To get ahead, researchers need only be better than those they are competing with. That ethos, says Brian Martinson, a sociologist at HealthPartners Research Foundation in Minneapolis, Minnesota, can lead to sabotage. He and others have suggested that universities and funders need to acknowledge the pressures in the research system and try to ease them by means of education and rehabilitation, rather than simply punishing perpetrators after the fact.
  • Bhrigu says that he felt pressure in moving from the small college at Toledo to the much bigger one in Michigan. He says that some criticisms he received from Ross about his incomplete training and his work habits frustrated him, but he doesn't blame his actions on that. "In any kind of workplace there is bound to be some pressure," he says. "I just got jealous of others moving ahead and I wanted to slow them down."
  • At Washtenaw County Courthouse in July, having reviewed the case files, Pollard Hines delivered Bhrigu's sentence. She ordered him to pay around US$8,800 for reagents and experimental materials, plus $600 in court fees and fines — and to serve six months' probation, perform 40 hours of community service and undergo a psychiatric evaluation.
  • But the threat of a worse sentence hung over Bhrigu's head. At the request of the prosecutor, Ross had prepared a more detailed list of damages, including Bhrigu's entire salary, half of Ames's, six months' salary for a technician to help Ames get back up to speed, and a quarter of the lab's reagents. The court arrived at a possible figure of $72,000, with the final amount to be decided upon at a restitution hearing in September.
  • Ross, though, is happy that the ordeal is largely over. For the month-and-a-half of the investigation, she became reluctant to take on new students or to hire personnel. She says she considered packing up her research programme. She even questioned her own sanity, worrying that she was the one sabotaging Ames's work via "an alternate personality". Ross now wonders if she was too trusting, and urges other lab heads to "realize that the whole spectrum of humanity is in your lab. So, when someone complains to you, take it seriously."
  • She also urges others to speak up when wrongdoing is discovered. After Bhrigu pleaded guilty in June, Ross called Trempe at the University of Toledo. He was shocked, of course, and for more than one reason. His department at Toledo had actually re-hired Bhrigu. Bhrigu says that he lied about the reason he left Michigan, blaming it on disagreements with Ross. Toledo let Bhrigu go in July, not long after Ross's call.
  • Now that Bhrigu is in India, there is little to prevent him from getting back into science. And even if he were in the United States, there wouldn't be much to stop him. The National Institutes of Health in Bethesda, Maryland, through its Office of Research Integrity, will sometimes bar an individual from receiving federal research funds for a time if they are found guilty of misconduct. But Bhrigu probably won't face that prospect because his actions don't fit the federal definition of misconduct, a situation Ross finds strange. "All scientists will tell you that it's scientific misconduct because it's tampering with data," she says.
  • Ames says that the experience shook her trust in her chosen profession. "I did have doubts about continuing with science. It hurt my idea of science as a community that works together, builds upon each other's work and collaborates."
  •  
    Research integrity: Sabotage! Postdoc Vipul Bhrigu destroyed the experiments of a colleague in order to get ahead.
Weiye Loh

Sociologist Harry Collins poses as a physicist. - By Jon Lackman - Slate Magazine

  • British sociologist Harry Collins asked a scientist who specializes in gravitational waves to answer seven questions about the physics of these waves. Collins, who has made an amateur study of this field for more than 30 years but has never actually practiced it, also answered the questions himself. Then he submitted both sets of answers to a panel of judges who are themselves gravitational-wave researchers. The judges couldn't tell the impostor from one of their own. Collins argues that he is therefore as qualified as anyone to discuss this field, even though he can't conduct experiments in it.
  • The journal Nature predicted that the experiment would have a broad impact, writing that Collins could help settle the "science wars of the 1990s," "when sociologists launched what scientists saw as attacks on the very nature of science, and scientists responded in kind," accusing the sociologists of misunderstanding science. More generally, it could affect "the argument about whether an outsider, such as an anthropologist, can properly understand another group, such as a remote rural community." With this comment, Nature seemed to be saying that if a sociologist can understand physics, then anyone can understand anything.
  • It will be interesting to see if Collins' results can indeed be repeated in different situations. Meanwhile, his experiment is plenty interesting in itself. Just one of the judges succeeded in distinguishing Collins' answers from those of the trained experts. One threw up his hands. And the other seven declared Collins the physicist. He didn't simply do as well as the trained specialist—he did better, even though the test questions demanded technical answers. One sample answer from Collins gives you the flavor: "Since gravitational waves change the shape of spacetime and radio waves do not, the effect on an interferometer of radio waves can only be to mimic the effects of a gravitational wave, not reproduce them." (More details can be found in this paper Collins wrote with his collaborators.)
  • To be sure, a differently designed experiment would have presented more difficulty for Collins. If he'd chosen questions that involved math, they would have done him in.
  • But many scientists consider themselves perfectly qualified to discuss topics for which they lack the underlying mathematical skills, as Collins noted when I talked to him. "You can be a great physicist and not know any mathematics," he said.
  • So, if Collins can talk gravitational waves as well as an insider, who cares if he doesn't know how to crunch the numbers? Alan Sokal does. The New York University physicist is famous for an experiment a decade ago that seemed to demonstrate the futility of laymen discussing science. In 1996, he tricked the top humanities journal Social Text into publishing as genuine scholarship a totally nonsensical paper that celebrated fashionable literary theory and then applied it to all manner of scientific questions. ("As Lacan suspected, there is an intimate connection between the external structure of the physical world and its inner psychological representation qua knot theory.") Sokal showed that, with a little flattery, laymen could be induced to swallow the most ridiculous of scientific canards—so why should we value their opinions on science as highly as scientists'?
  • Sokal doesn't think Collins has proved otherwise. When I reached him this week, he acknowledged that you don't need to practice science in order to understand it. But he maintains, as he put it to Nature, that in many science debates, "you need a knowledge of the field that is virtually, if not fully, at the level of researchers in the field," in order to participate. He elaborated: Say there are two scientists, X and Y. If you want to argue that X's theory was embraced over Y's, even though Y's is better, because the science community is biased against Y, then you had better be able to read and evaluate their theories yourself, mathematics included (or collaborate with someone who can). He has a point. Just because mathematics features little in the work of some gravitational-wave physicists doesn't mean it's a trivial part of the subject.
  • Even if Collins didn't demonstrate that he is qualified to pronounce on all of gravitational-wave physics, he did learn more of the subject than anyone may have thought possible. Sokal says he was shocked by Collins' store of knowledge: "He knows more about gravitational waves than I do!" Sokal admitted that Collins was already qualified to pronounce on a lot, and that with a bit more study, he would be the equal of a professional.
Weiye Loh

From Abstract to News Release to Story, a Tilt to the 'Front-Page Thought' - NYTimes.com

  •  
    "In the post on research on extreme rainfall and warming, Gavin Schmidt, the NASA climate scientist and Real Climate blogger, described the misinterpretation of some paper abstracts as mainly reflecting a cultural divide: "Here we show" statements are required by Nature and Science to clearly lay out the point of the paper. If you don't include it, they will write it in. The caveats/uncertainties/issues all come later. I think the confusion is more cultural than anything. No one at Nature or Science or any of the authors in any subject think that uncertainties are zero, but they require a clear statement of the point of the paper within their house style. I think that conclusion misses the reality that, particularly in the world of online communication of science, abstracts are not merely for colleagues who know the shorthand, but have different audiences who'll have different ways of interpreting phrases such as "here we show.""
Weiye Loh

The Science of Why We Don't Believe Science | Mother Jones

  • Conservatives are more likely to embrace climate science if it comes to them via a business or religious leader, who can set the issue in the context of different values than those from which environmentalists or scientists often argue. Doing so is, effectively, to signal a détente in what Kahan has called a "culture war of fact." In other words, paradoxically, you don't lead with the facts in order to convince. You lead with the values—so as to give the facts a fighting chance.
  • Kahan's work at Yale. In one study, he and his colleagues packaged the basic science of climate change into fake newspaper articles bearing two very different headlines—"Scientific Panel Recommends Anti-Pollution Solution to Global Warming" and "Scientific Panel Recommends Nuclear Solution to Global Warming"—and then tested how citizens with different values responded. Sure enough, the latter framing made hierarchical individualists much more open to accepting the fact that humans are causing global warming. Kahan infers that the effect occurred because the science had been written into an alternative narrative that appealed to their pro-industry worldview.
  • If you want someone to accept new evidence, make sure to present it to them in a context that doesn't trigger a defensive, emotional reaction.
  • All we can currently bank on is the fact that we all have blinders in some situations. The question then becomes: What can be done to counteract human nature itself?
Weiye Loh

The Decline Effect and the Scientific Method : The New Yorker

  • On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties.
  • the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.
  • Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
  • the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
  • At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”
  • In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!”
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.
  • the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts
  • an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results.
  • Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
  • scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.)
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
  • Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging
  • John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.
  • The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
  • This suggests that the decline effect is actually a decline of illusion.
  • While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
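The regression-to-the-mean explanation raised in the excerpts above can be made concrete with a small simulation (a hypothetical sketch with invented numbers, not a model of Schooler's actual experiments): if the "initial finding" is simply the most impressive of many noisy measurements, honest replications will on average fall back toward the true value, producing a decline by construction.

```python
import random

random.seed(42)

TRUE_EFFECT = 0.2   # the modest real effect (invented number)
NOISE_SD = 1.0      # measurement noise per study
N_STUDIES = 1000

def run_study():
    """One noisy measurement of the true effect."""
    return random.gauss(TRUE_EFFECT, NOISE_SD)

# The "initial finding" is whatever looked most impressive:
# the largest estimate among many noisy attempts.
initial = max(run_study() for _ in range(N_STUDIES))

# Honest replications are fresh, unselected draws.
replications = [run_study() for _ in range(N_STUDIES)]
mean_replication = sum(replications) / len(replications)

print(f"initial (selected) estimate: {initial:.2f}")
print(f"mean replication estimate:   {mean_replication:.2f}")
# The selected estimate lands far above the true effect (0.2),
# while unselected replications regress toward it.
```

Schooler's objection, though, still stands: this mechanism predicts only modest declines in large, statistically solid data sets, yet the declines he and Jennions observed were often dramatic.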
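Sterling's observation about the five-per-cent boundary can be reproduced in a toy simulation (hypothetical numbers, assuming a simple one-sided z-test at Fisher's conventional threshold): even when every individual study is honest, publishing only the "significant" results inflates the average effect size that appears in the literature.

```python
import math
import random

random.seed(0)

TRUE_EFFECT = 0.1    # small real effect, in standard-deviation units (invented)
N = 25               # observations per study
SE = 1.0 / math.sqrt(N)
Z_CRIT = 1.645       # one-sided five-per-cent significance boundary

def study_estimate():
    """Mean of N noisy observations of the true effect."""
    return sum(random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)) / N

published = []
for _ in range(5000):
    est = study_estimate()
    if est / SE > Z_CRIT:       # only "significant" results reach the journals
        published.append(est)

mean_published = sum(published) / len(published)
print(f"true effect:           {TRUE_EFFECT}")
print(f"mean published effect: {mean_published:.2f}")
# Every study was honest; the significance filter alone makes the
# published average several times larger than the true effect.
```

Raising N shrinks the inflation, which is why larger follow-up studies tend to report smaller effects than the first "significant" findings.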
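Palmer's funnel-graph diagnostic can likewise be sketched numerically (an invented example, not his data): honestly reported large studies cluster tightly around the true value, while small studies whose null results stay in the file drawer skew the wide end of the funnel to one side.

```python
import math
import random

random.seed(1)

TRUE_EFFECT = 0.0    # no real effect in this invented scenario

def estimate(n):
    """One study's estimate: true effect plus sampling error of 1/sqrt(n)."""
    return random.gauss(TRUE_EFFECT, 1.0 / math.sqrt(n))

large = [estimate(400) for _ in range(200)]   # the funnel's narrow tip
small = [estimate(16) for _ in range(200)]    # the funnel's wide mouth

# Selective reporting: small studies with negative estimates
# end up in the file drawer.
small_reported = [e for e in small if e > 0]

mean_large = sum(large) / len(large)
mean_small_reported = sum(small_reported) / len(small_reported)
print(f"large studies, all reported:   {mean_large:+.3f}")
print(f"small studies, positives only: {mean_small_reported:+.3f}")
# Plotted as effect size against sample size, the large studies sit
# symmetrically on the true value while the reported small studies
# bulge to one side -- the lopsided funnel Palmer describes.
```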
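Crabbe's three-lab result has a standard statistical reading (sketched below with invented numbers, not his data): an unmeasured lab-level random effect makes between-lab disagreement far larger than the within-lab sampling error that a standardized protocol controls for.

```python
import random
import statistics

random.seed(7)

BASE = 800        # true average drug response, in cm (invented)
WITHIN_SD = 50    # mouse-to-mouse noise within a lab
LAB_SD = 400      # hidden lab-to-lab effect: handlers, rooms, seasons...
N_MICE = 20

def lab_mean():
    """Average response in one lab: the true effect plus a hidden lab offset."""
    lab_offset = random.gauss(0, LAB_SD)
    mice = [random.gauss(BASE + lab_offset, WITHIN_SD) for _ in range(N_MICE)]
    return statistics.mean(mice)

labs = [lab_mean() for _ in range(3)]    # three "identical" labs
spread = max(labs) - min(labs)
within_se = WITHIN_SD / N_MICE ** 0.5    # what the protocol controls for

print("lab means (cm):", [round(m) for m in labs])
print(f"within-lab SE: {within_se:.1f} cm, between-lab spread: {spread:.0f} cm")
# Sampling error predicts agreement to within a few dozen centimetres;
# the unmodeled lab effect dwarfs it, so the extreme lab looks like a
# dramatic finding when it is really noise.
```

Treating the lab as a random effect, or replicating across many sites before publishing, is the usual remedy; treating a single extreme lab as the finding is how accidents like the Edmonton mice enter the literature.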
Weiye Loh

Roger Pielke Jr.'s Blog: Political Affiliations of Scientists

  • Dan Sarewitz tossed some red meat out on the table in the form of an essay in Slate on the apparent paucity of Republicans among the US scientific establishment. Sarewitz suggests that it is in the interests of the scientific community both to understand this situation and to seek greater diversity in its ranks, explaining that "the issue here is legitimacy, not literacy."
  • The issue that Sarewitz raises is one of legitimacy. All of us evaluate knowledge claims outside our own expertise (and actually very few people are in fact experts) based not on a careful consideration of facts and evidence, but on other factors, such as who we trust and how their values jibe with our own. Thus, if expert institutions are going to survive and function in a democratic society, they must attend to their legitimacy. Scientific institutions that come to be associated with one political party risk their legitimacy among those who are not sympathetic to that party's views.
  • Of course, we don't just evaluate knowledge claims simply based on individuals, but usually through institutions, like scientific journals, national academies, professional associations, universities and so on. Sarewitz's Slate article did not get into a discussion of these institutions, but I think that it is essential to fully understand his argument.
  • Consider that the opinion poll Sarewitz cited, which found that only 6% of scientists self-identify as Republicans, has some very important fine print -- specifically that the scientists it surveyed were all members of the AAAS. I do not have detailed demographic information, but based on my experience I would guess that AAAS membership is dominated by university and government scientists. The opinion poll thus does not tell us much about US scientists as a whole, but rather something about one scientific institution -- AAAS. And the poll indicates that AAAS is largely an association that does not include Republicans.
  • One factor might be seen in a recent action of the American Geophysical Union (AGU), another big US science association: AGU appointed Chris Mooney to its Board. I am sure that Chris is a fine fellow, but appointing an English major who has written divisively about the "Republican War on Science" to help AGU oversee "science communication" is more than a little ironic, and unlikely to attract many Republican scientists to the institution; it may even have the opposite effect. To the extent that AAAS and AGU endorse the Democratic policy agenda, or just appear to do so, they reflect their role not as arbiters of knowledge claims, but as political actors.
  • I would wager that the partisan affiliation of scientists in the US military and in the energy, pharmaceutical, and finance industries would look starkly different from that of AAAS. If there is a crisis of legitimacy in the scientific community, it is among those institutions which have come to be dominated by people espousing a shared political view, whatever that happens to be. This crisis is shared by AAAS and AGU, viewed with suspicion by those on the Right, and, for instance, by ExxonMobil, which is viewed with similar suspicion by those on the Left. Sarewitz is warning that, for many on the Right, institutions like AAAS are viewed with every bit as skeptical an eye as those on the Left view ExxonMobil.
  • Many observers are so wrapped up in their own partisan battles that they either don't care that science is being associated with one political party or somehow think that through such politicization they will once and for all win the partisan battles. They won't. Political parties are far more robust than institutions of science. Institutions of science need help to survive partisan political battles intact. The blogosphere and activist scientists and journalists offer little help.
Weiye Loh

Letter from Seed editor Adam Bly to ScienceBlogs.com contributors | Science | guardian....

  • the conversation should include scientists from academia and government; we also think it should include scientists from industry. Because industry is increasingly the interface between science and society.
  • The bloggers who blog on 'corporate blogs' on SB are necessarily credentialed scientists (we make sure of that), in some cases highly credentialed scientists who have published extensively in peer-reviewed journals. The fact that they work at a profit-making company does not automatically disqualify their science in our mind. And frankly, nor does it disqualify them in the eyes of the Nobel Prize Committee either.
  • All editorial content is written by PepsiCo's scientists or scientists invited by PepsiCo and/or ScienceBlogs. All posts carry a byline above the fold indicating the scientist's affiliation and conflicts of interest." This must be 100% transparent so our readers can evaluate the merit of the post for themselves.
  • Are we making a judgment about PepsiCo's science by hosting a blog for them on SB? No. (Nor are we making a judgment about your own research for that matter). Are we saying that they are entitled to have a seat at the table? Yes. Do they know that they are opening themselves up to debate? Absolutely. You may disagree with the substance of their posts (as you do on any other blog). You may even call into question their presence on a public forum dedicated to science. It will be up to them to respond. Better yet, it will be up to them to listen and take actions. The sustainability of this experiment lives or dies in the establishment of a transparent dialogue.
  • SB, like nearly all free content sites, is sustainable because of advertising. But advertising is itself highly unpredictable, as the last year has shown the industry. And securing advertising around topics like physics and evolution is even more challenging
  • We started experimenting with sponsored blogs a couple of years ago and decided to market long-term sponsorship contracts instead of sporadic advertising contracts. This is not a new idea: respected magazines have been doing the same thing for years (think Atlantic Ideas Festival going on now or The New Yorker Festival, where representatives of sponsoring companies sit on stage alongside writers and thinkers, or advertorials where companies pay to create content -- clearly marked as such -- instead of just running an ad). We think this may be a digital equivalent.
  • meaningful discussion about science and society in the 21st century requires that all players be at the table (with affiliations made clear), from all parts of the world, from every sector of society. And ScienceBlogs is where this is starting to happen.
  • Sent to bloggers in response to the controversial decision by ScienceBlogs.com to host a blog on nutrition, written by PepsiCo
Weiye Loh

Freakonomics » Scientific Literacy Does Not Increase Concern Over Climate Cha...

  • The conventional explanation for controversy over climate change emphasizes impediments to public understanding: Limited popular knowledge of science, the inability of ordinary citizens to assess technical information, and the resulting widespread use of unreliable cognitive heuristics to assess risk. A large survey of U.S. adults (N = 1540) found little support for this account. On the whole, the most scientifically literate and numerate subjects were slightly less likely, not more, to see climate change as a serious threat than the least scientifically literate and numerate ones. More importantly, greater scientific literacy and numeracy were associated with greater cultural polarization: Respondents predisposed by their values to dismiss climate change evidence became more dismissive, and those predisposed by their values to credit such evidence more concerned, as science literacy and numeracy increased. We suggest that this evidence reflects a conflict between two levels of rationality: The individual level, which is characterized by citizens’ effective use of their knowledge and reasoning capacities to form risk perceptions that express their cultural commitments; and the collective level, which is characterized by citizens’ failure to converge on the best available scientific evidence on how to promote their common welfare. Dispelling this “tragedy of the risk-perception commons,” we argue, should be understood as the central aim of the science of science communication.
  •  
    A new study by the Cultural Cognition Project, a team headed up by Yale law professor Dan Kahan, shows that people who are more science- and math-literate tend to be more skeptical about the consequences of climate change. Increased scientific literacy also leads to higher polarization on climate-change issues:
Weiye Loh

Most scientists in this country are Democrats. That's a problem. - By Daniel Sarewitz -...

  • A Pew Research Center Poll from July 2009 showed that only around 6 percent of U.S. scientists are Republicans; 55 percent are Democrats, 32 percent are independent, and the rest "don't know" their affiliation.
  • When President Obama appears Wednesday on Discovery Channel's Mythbusters (9 p.m. ET), he will be there not just to encourage youngsters to do their science homework but also to reinforce the idea that Democrats are the party of science and rationality. And why not? Most scientists are already on his side.
  • Yet, partisan politics aside, why should it matter that there are so few Republican scientists? After all, it's the scientific facts that matter, and facts aren't blue or red.
  • For 20 years, evidence about global warming has been directly and explicitly linked to a set of policy responses demanding international governance regimes, large-scale social engineering, and the redistribution of wealth. These are the sort of things that most Democrats welcome, and most Republicans hate. No wonder the Republicans are suspicious of the science.
  • Think about it: The results of climate science, delivered by scientists who are overwhelmingly Democratic, are used over a period of decades to advance a political agenda that happens to align precisely with the ideological preferences of Democrats. Coincidence—or causation?
  • How would a more politically diverse scientific community improve this situation? First, it could foster greater confidence among Republican politicians about the legitimacy of mainstream science. Second, it would cultivate more informed, creative, and challenging debates about the policy implications of scientific knowledge. This could help keep difficult problems like climate change from getting prematurely straitjacketed by ideology. A more politically diverse scientific community would, overall, support a healthier relationship between science and politics.
  • American society has long tended toward pragmatism, with a great deal of respect for the value and legitimacy not just of scientific facts, but of scientists themselves.
  • Yet this exceptional status could well be forfeit in the escalating fervor of national politics, given that most scientists are on one side of the partisan divide. If that public confidence is lost, it would be a huge and perhaps unrecoverable loss for a democratic society.
  • A democratic society needs Republican scientists.
  • I have to imagine 50 years ago there were a lot more Republican scientists, when the Democrats were still the party of Southern Baptists. As a rational person I find it impossible to support any candidate who panders to the religious right, and in current politics, that's every National Republican.
Weiye Loh

Haidt Requests Apology from Pigliucci « YourMorals.Org Moral Psychology Blog - 0 views

  • Here is my response to Pigliucci, which I posted as a comment on his blog. (Well, I submitted it as a comment on Feb 13 at 4pm EST, but he has not approved it yet, so it doesn’t show yet over there.)
  • Massimo Pigliucci, the chair of the philosophy department at CUNY-Lehman, wrote a critique of me on his blog, Rationally Speaking, in which he accused me of professional misconduct.
  • Dear Prof. Pigliucci: Let me be certain that I have understood you. You did not watch my talk, even though a link to it was embedded in the Tierney article. Instead, you picked out one piece of my argument (that the near-total absence of conservatives in social psychology is evidence of discrimination) and you made the standard response, the one that most bloggers have made: underrepresentation of any group is not, by itself, evidence of discrimination. That’s a good point; I made it myself quite explicitly in my talk: Of course there are many reasons why conservatives would be underrepresented in social psychology, and most of them have nothing to do with discrimination or hostile climate. Research on personality consistently shows that liberals are higher on openness to experience. They’re more interested in novel ideas, and in trying to use science to improve society. So of course our field is and always will be mostly liberal. I don’t think we should ever strive for exact proportional representation.
  • I made it clear that I’m not concerned about simple underrepresentation. I did not even make the moral argument that we need ideological diversity to right an injustice. Rather, I focused on what happens when a scientific community shares sacred values. A tribal moral community arises, one that actively suppresses ideas that are sacrilegious, and that discourages non-believers from entering. I argued that my field has become a tribal moral community, and the absence of conservatives (not just their underrepresentation) has serious consequences for the quality of our science. We rely on our peers to find flaws in our arguments, but when there is essentially nobody out there to challenge liberal assumptions and interpretations of experimental findings, the peer review process breaks down, at least for work that is related to those sacred values.
  • The fact that you criticized me without making an effort to understand me is not surprising.
  • Rather, what sets you apart from all other bloggers who are members of the academy is what you did next. You accused me of professional misconduct (lying, essentially) and you speculated as to my true motive: I suspect that Haidt is either an incompetent psychologist (not likely) or is disingenuously saying the sort of things controversial enough to get him in the New York Times (more likely).
  • As far as I can tell your evidence for these accusations is that my argument was so bad that I couldn’t have believed it myself. Here is how you justified your accusations: A serious social scientist doesn’t go around crying out discrimination just on the basis of unequal numbers. If that were the case, the NBA would be sued for discriminating against short people, dance companies against people without spatial coordination, and newspapers against dyslexics
  • Accusations of professional misconduct are sensibly made only if one has a reasonable and detailed understanding of the facts of the case, and can bring forth evidence of misconduct. Pigliucci has made no effort to acquire such an understanding, nor has he presented any evidence to support his accusation. He simply took one claim from the Tierney article and then ran wild with speculation about Haidt’s motives. It was pretty silly of him, and downright irresponsible of Pigliucci to publish that garbage without even knowing what Haidt said.
  • I challenge you to watch the video of my talk (click here) and then either 1) Retract your blog post and apologize publicly for calling me a liar or 2) State on your blog that you stand by your original post. If you do stand by your post, even after hearing my argument, then the world can decide for itself which of us is right, and which of us best models the ideals of science, philosophy, and the Enlightenment which you claim for yourself in the header of your blog, “Rationally Speaking.” Jonathan Haidt
Weiye Loh

Gleick apology over Heartland leak stirs ethics debate among climate scientists | Envir... - 0 views

  • For some campaigners, such as Naomi Klein, Gleick was an unalloyed hero, who should be sent some "Twitter love", she wrote on Tuesday. "Heartland has been subverting well-understood science for years," wrote Scott Mandia, co-founder of the climate science rapid response team. "They also subvert the education of our schoolchildren by trying to 'teach the controversy' where none exists." Mandia went on: "Peter Gleick, a scientist who is also a journalist, just used the same tricks that any investigative reporter uses to uncover the truth. He is the hero and Heartland remains the villain. He will have many people lining up to support him."
  • Others acknowledged Gleick's wrongdoing, but said it should be viewed in the context of the work of Heartland and other entities devoted to spreading disinformation about science. "What Peter Gleick did was unethical. He acknowledges that from a point of view of professional ethics there is no defending those actions," said Dale Jamieson, an expert on ethics who heads the environmental studies programme at New York University. "But relative to what has been going on on the climate denial side this is a fairly small breach of ethics." He also rejected the suggestion that Gleick's wrongdoing could hurt the cause of climate change, or undermine the credibility of scientists. "Whatever moral high ground there is in science comes from doing science," he said. "The failing that Peter Gleick engaged in is not a scientific failing. It is just a personal failure."
Weiye Loh

Real Climate faces libel suit | Environment | guardian.co.uk - 0 views

  • Gavin Schmidt, a climate modeller and Real Climate member based at Nasa's Goddard Institute for Space Studies in New York, has claimed that Energy & Environment (E&E) has "effectively dispensed with substantive peer review for any papers that follow the editor's political line." The journal denies the claim, and, according to Schmidt, has threatened to take further action unless he retracts it.
  • Every paper that is submitted to the journal is vetted by a number of experts, she said. But she did not deny that she allows her political agenda to influence which papers are published in the journal. "I'm not ashamed to say that I deliberately encourage the publication of papers that are sceptical of climate change," said Boehmer-Christiansen, who does not believe in man-made climate change.
  • Simon Singh, a science writer who last year won a major libel battle with the British Chiropractic Association (BCA), said: "A libel threat is potentially catastrophic. It can lead to a journalist going bankrupt or a blogger losing his house. A lot of journalists and scientists will understandably react to the threat of libel by retracting their articles, even if they are confident they are correct. So I'm delighted that Gavin Schmidt is going to stand up for what he has written." During the case with the BCA, Singh also received a libel threat in response to an article he had written about climate change, but Singh stood by what he had written and the threat was not carried through.
  • Schmidt has refused to retract his comments and maintains that the majority of papers published in the journal are "dross". "I would personally not credit any article that was published there with any useful contribution to the science," he told the Guardian. "Saying a paper was published in E&E has become akin to immediately discrediting it." He also describes the journal as a "backwater" of poorly presented and incoherent contributions that "anyone who has done any science can see are fundamentally flawed from the get-go."
  • Schmidt points to an E&E paper that claimed that the Sun is made of iron. "The editor sent it out for review, where it got trashed (as it should have been), and [Boehmer-Christiansen] published it anyway," he says.
  • The journal also published a much-maligned analysis suggesting that levels of the greenhouse gas carbon dioxide could go up and down by 100 parts per million in a year or two, prompting marine biologist Ralph Keeling at the Scripps Institution of Oceanography in La Jolla, California to write a response to the journal, in which he asked: "Is it really the intent of E&E to provide a forum for laundering pseudo-science?"
  • Schmidt and Keeling are not alone in their criticisms. Roger Pielke Jr, a professor of environmental studies at the University of Colorado, said he regrets publishing a paper in the journal in 2000 – one year after it was established and before he had time to realise that it was about to become a fringe platform for climate sceptics. "[E&E] has published a number of low-quality papers, and the editor's political agenda has clearly undermined the legitimacy of the outlet," Pielke says. "If I had a time machine I'd go back and submit our paper elsewhere."
  • Any paper published in E&E is now ignored by the broader scientific community, according to Pielke. "In some cases perhaps that is justified, but I would argue that it provided a convenient excuse to ignore our paper on that basis alone, and not on the merits of its analysis," he said. In the long run, Pielke is confident that good ideas will win out over bad ideas. "But without care to the legitimacy of our science institutions – including journals and peer review – that long run will be a little longer," he says.
  • she has no intention of changing the way she runs E&E – which is not listed on the ISI Journal Master list, an official list of academic journals – in response to his latest criticisms.
  • Schmidt is unsurprised. "You would need a new editor, new board of advisors, and a scrupulous adherence to real peer review, perhaps ... using an open review process," he said. "But this is very unlikely to happen since their entire raison d'être is political, not scientific."
Weiye Loh

Skepticblog » Litigation gone wild! A geologist's take on the Italian seismol... - 0 views

  • Apparently, an Italian lab technician named Giampaolo Giuliani made a prediction about a month before the quake, based on elevated levels of radon gas. However, seismologists have known for a long time that radon levels, like any other “magic bullet” precursor, are unreliable because no two quakes are alike, and no two quakes give the same precursors. Nevertheless, his prediction caused a furor before the quake actually happened. The Director of the Civil Defence, Guido Bertolaso, forced him to remove his findings from the Internet (old versions are still on line). Giuliani was also reported to the police for “causing fear” with his predictions about a quake near Sulmona, which was far from where the quake actually struck. Enzo Boschi, the head of the Italian National Geophysics Institute declared: “Every time there is an earthquake there are people who claim to have predicted it. As far as I know nobody predicted this earthquake with precision. It is not possible to predict earthquakes.” Most of the geological and geophysical organizations around the world made similar statements in support of the proper scientific procedures adopted by the Italian geophysical community. They condemned Giuliani for scaring people using a method that has not been shown to be reliable.
  • most of the press coverage I have read (including many cited above) took the sensationalist approach, and cast Giuliani as the little “David” fighting against the “Goliath” of “Big Science”
  • none of the reporters bothered to do any real background research, or consult with any other legitimate seismologist who would confirm that there is no reliable way to predict earthquakes in the short term and Giuliani is misleading people when he says so. Giuliani’s “prediction” was sheer luck, and if he had failed, no one would have mentioned it again.
  • Even though he believes in his method, he ignores the huge body of evidence that shows radon gas is no more reliable than any other “predictor”.
  • If the victims insist on suing someone, they should leave the seismologists alone and look into the construction of some of those buildings. The stories out of L’Aquila suggest that the death toll was much higher because of official corruption and shoddy construction, as happens in many countries both before and after big quakes.
  • much of the construction is apparently Mafia-controlled in that area—good luck suing them! Sadly, the ancient medieval buildings that crumbled were the most vulnerable because they were made of unreinforced masonry, the worst possible construction material in earthquake country
  • what does this imply for scientists who are working in a field that might have predictive power? In a litigious society like Italy or the U.S., this is a serious question. If a reputable seismologist does make a prediction and fails, he’s liable, because people will panic and make foolish decisions and then blame the seismologist for their losses. Now the Italian courts are saying that (despite world scientific consensus) seismologists are liable if they don’t predict quakes. They’re damned if they do, and damned if they don’t. In some societies where seismologists work hard at prediction and preparation (such as China and Japan), there is no precedent for suing scientists for doing their jobs properly, and the society and court systems do not encourage people to file frivolous suits. But in litigious societies, the system is counterproductive, and stifles research that we would like to see developed. What seismologist would want to work on earthquake prediction if they can be sued? I know of many earth scientists with brilliant ideas not only about earthquake prediction but even ways to defuse earthquakes, slow down global warming, or many other incredible but risky brainstorms—but they dare not propose the idea seriously or begin to implement it for fear of being sued.
  • In the case of most natural disasters, people usually regard such events as "acts of God" and try to get on with their lives as best they can. No human cause is responsible for great earthquakes, tsunamis, volcanic eruptions, tornadoes, hurricanes, or floods. But in the bizarre world of the Italian legal system, six seismologists and a public official have been charged with manslaughter for NOT predicting the quake! My colleagues in the earth science community were incredulous and staggered at this news. Seismologists and geologists have been saying for decades (at least since the 1970s) that short-term earthquake prediction (within minutes to hours of the event) is impossible, and anyone who claims otherwise is lying. As Charles Richter himself said, "Only fools, liars, and charlatans predict earthquakes." How could anyone then go to court and sue seismologists for following proper scientific procedures?
Weiye Loh

Libel Chill and Me « Skepticism « Critical Thinking « Skeptic North - 0 views

  • Skeptics may by now be very familiar with recent attempts in Canada to ban wifi from public schools and libraries.  In short: there is no valid scientific reason to be worried about wifi.  It has also been revealed that the chief scientists pushing the wifi bans have been relying on poor data and even poorer studies.  By far the vast majority of scientific data that currently exists supports the conclusion that wifi and cell phone signals are perfectly safe.
  • So I wrote about that particular topic in the summer.  It got some decent coverage, but the fear mongering continued. I wrote another piece after I did a little digging into one of the main players behind this, one Rodney Palmer, and I discovered some decidedly pseudo-scientific tendencies in his past, as well as some undisclosed collusion.
  • One night I came home after a long day at work, a long commute, and a phone call that a beloved family pet was dying, and will soon be in significant pain.  That is the state I was in when I read the news about Palmer and Parliamentary committee.
  • That’s when I wrote my last significant piece for Skeptic North.  Titled, “Rodney Palmer: When Pseudoscience and Narcissism Collide,” it was a fiery take-down of every claim I heard Palmer speak before the committee, as well as reiterating some of his undisclosed collusion, unethical media tactics, and some reasons why he should not be considered an expert.
  • This time, the article got a lot more reader eyeballs than anything I had ever written for this blog (or my own) and it also caught the attention of someone on a school board which was poised to vote on wifi.  In these regards: Mission very accomplished.  I finally thought that I might be able to see some people in the media start to look at Palmer’s claims with a more critical eye than they had been previously, and I was flattered at the mountain of kind words, re-tweets, reddit comments and Facebook “likes.”
  • The comments section was mostly supportive of my article, and they were one of the few things that kept me from hiding in a hole for six weeks.  There were a few comments in opposition to what I wrote, some sensible, most incoherent rambling (one commenter, when asked for evidence, actually linked to a YouTube video which they referred to as “peer reviewed”)
  • One commenter was none other than the titular subject of the post, Rodney Palmer himself.  Here is a screen shot of what he said: Screen shot of the Libel/Slander threat.
  • Knowing full well the story of the libel threat against Simon Singh, I’ve always thought that if ever a threat like that came my way, I’d happily beat it back with the righteous fury and good humour of a person with the facts on their side.  After all, if I’m wrong, you’d be able to prove me wrong, rather than try to shut me up with a threat of a lawsuit.  Indeed, I’ve been through a similar situation once before, so I should be an old hat at this! Let me tell you friends, it’s not that easy.  In fact, it’s awful.  Outside observers could easily identify that Palmer had no case against me, but that was still cold comfort to me.  It is a very stressful situation to find yourself in.
  • The state of libel and slander laws in this country are such that a person can threaten a lawsuit without actually threatening a lawsuit.  There is no need to hire a lawyer to investigate the claims, look into who I am, where I live, where I work, and issue a carefully worded threatening letter demanding compliance.  All a person has to say is some version of  “Libel.  Slander.  Hmmmm….,” and that’s enough to spook a lot of people into backing off. It’s a modern day bogeyman.  They don’t have to prove it.  They don’t have to act on it.  A person or organization just has to say “BOO!” with sufficient seriousness, and unless you’ve got a good deal of editorial and financial support, discussion goes out the window. Libel Chill refers to the ‘chilling effect’ that the possibility of a libel/slander lawsuit has.  If a person is scared they might get sued, then they won’t even comment on a piece at all.  In my case, I had already commented three times on the wifi scaremongering, but this bogus threat against me was surely a major contributing factor to my not commenting again.
  • I ceased to discuss anything in the comment thread of the original article, and even shied away from other comment threads calling me out.  I learned a great deal about the wifi/EMF issue since I wrote the article, but I did not comment on any of it, because I knew that Palmer and his supporters were watching me like a hawk (sorry to stretch the simile), and would likely try to silence me again.  I couldn’t risk a lawsuit.  Even though I knew there was no case against me, I couldn’t afford a lawyer just to prove that I didn’t do anything illegal.
  • The Libel and Slander Act of Ontario, 1990 hasn’t really caught up with the internet.  There isn’t a clear precedent that defines a blog post, Twitter feed or Facebook post as falling under the umbrella of “broadcast,” which is what the bill addresses.  If I had written the original article in print, Palmer would have had six weeks to file suit against me.  But the internet is only kind of considered ‘broadcast.’  So it could be just six weeks, but he could also have up to two years to act and get a lawyer after me.  Truth is, there’s not a clear demarcation point for our Canadian legal system.
  • Libel laws in Canada are somewhere in between the Plaintiff-favoured UK system, and the Defendant-favoured US system.  On the one hand, if Palmer chose to incur the expense and time to hire a lawyer and file suit against me, the burden of proof would be on me to prove that I did not act with malice.  Easy peasy.  On the other hand, I would have a strong case that I acted in the best interests of Canadians, which would fall under the recent Supreme Court of Canada decision on protecting what has been termed, “Responsible Communication.”  The Supreme Court of Canada decision does not grant bloggers immunity from libel and slander suits, but it is a healthy dose of welcome freedom to discuss issues of importance to Canadians.
  • Palmer himself did not specify anything against me in his threat.  There was nothing particular that he complained about, he just said a version of “Libel and Slander!” at me.  He may as well have said “Boo!”
  • This is not a DBAD discussion (although I wholeheartedly agree with Phil Plait there). 
  • If you’d like to boil my lessons down to an acronym, I suppose the best one would be DBRBC: Don’t be reckless. Be Careful.
  • I wrote a piece that, although it was not incorrect in any measurable way, was written with fire and brimstone, piss and vinegar.  I stand by my piece, but I caution others to be a little more careful with the language they use.  Not because I think it is any less or more tactically advantageous (because I’m not sure anyone can conclusively demonstrate that being an aggressive jerk is an inherently better or worse communication tool), but because the risks aren’t always worth it.
  • I’m not saying don’t go after a person.  There are egomaniacs out there who deserve to be called out and taken down (verbally, of course).  But be very careful with what you say.
  • ask yourself some questions first: 1) What goal(s) are you trying to accomplish with this piece? Are you trying to convince people that there is a scientific misunderstanding here?  Are you trying to attract the attention of the mainstream media to a particular facet of the issue?  Are you really just pissed off and want to vent a little bit?  Is this article a catharsis, or is it communicative?  Be brutally honest with your intentions, it’s not as easy as you think.  Venting is okay.  So is vicious venting, but be careful what you dress it up as.
  • 2) In order to attain your goals, did you use data, or personalities?  If the former, are you citing the best, most current data you have available to you? Have you made a reasonable effort to check your data against any conflicting data that might be out there? If the latter, are you providing a mountain of evidence, and not just projecting onto personalities?  There is nothing inherently immoral or incorrect with going after the personalities.  But it is a very risky undertaking. You have to be damn sure you know what you’re talking about, and damn ready to defend yourself.  If you’re even a little loose with your claims, you will be called out for it, and a legal threat is very serious and stressful. So if you’re going after a personality, is it worth it?
  • 3) Are you letting the science speak for itself?  Are you editorializing?  Are you pointing out what part of your piece is data and what part is your opinion?
  • 4) If this piece was written in anger, frustration, or otherwise motivated by a powerful emotion, take a day.  Let your anger subside.  It will.  There are many cathartic enterprises out there, and you don’t need to react to the first one that comes your way.  Let someone else read your work before you share it with the internet.  Cooler heads definitely do think more clearly.
Weiye Loh

The Greening of the American Brain - TIME - 0 views

  • The past few years have seen a marked decline in the percentage of Americans who believe what scientists say about climate, with belief among conservatives falling especially fast. It's true that the science community has hit some bumps — the IPCC was revealed to have made a few dumb errors in its recent assessment, and the "Climategate" hacked emails showed scientists behaving badly. But nothing changed the essential truth that more man-made CO2 means more warming; in fact, the basic scientific case has only gotten stronger. Yet still, much of the American public remains unconvinced — and importantly, last November that public returned control of the House of Representatives to a Republican party that is absolutely hostile to the basic truths of climate science.
  • facts and authority alone may not shift people's opinions on climate science or many other topics. That was the conclusion I took from the Climate, Mind and Behavior conference, a meeting of environmentalists, neuroscientists, psychologists and sociologists that I attended last week at the Garrison Institute in New York's Hudson Valley. We like to think of ourselves as rational creatures who select from the choices presented to us for maximum individual utility — indeed, that's the essential principle behind most modern economics. But when you do assume rationality, the politics of climate change get confusing. Why would so many supposedly rational human beings choose to ignore overwhelming scientific authority?
  • Maybe because we're not actually so rational after all, as research is increasingly showing. Emotions and values — not always fully conscious — play an enormous role in how we process information and make choices. We are beset by cognitive biases that throw what would be sound decision-making off-balance. Take loss aversion: psychologists have found that human beings tend to be more concerned about avoiding losses than achieving gains, holding onto what they have even when this is not in their best interests. That has a simple parallel to climate politics: environmentalists argue that the shift to a low-carbon economy will create abundant new green jobs, but for many people, that prospect of future gain — even if it comes with a safer planet — may not be worth the risk of losing the jobs and economy they have.
  • What's the answer for environmentalists? Change the message and frame the issue in a way that doesn't trigger unconscious opposition among so many Americans. That can be as simple as using the right labels: a recent study by researchers at the University of Michigan found that Republicans are less skeptical of "climate change" than "global warming," possibly because climate change sounds less specific. Possibly too because so broad a term includes the severe snowfalls of the past winter that can be a paradoxical result of a generally warmer world. Greens should also pin their message on subjects that are less controversial, like public health or national security. Instead of issuing dire warnings about an apocalyptic future — which seems to make many Americans stop listening — better to talk about the present generation's responsibility to the future, to bequeath their children and grandchildren a safer and healthier planet.
  • Group identification also plays a major role in how we make decisions — and that's another way facts can get filtered. Declining belief in climate science has been, for the most part in America, a conservative phenomenon. On the surface, that's curious: you could expect Republicans to be skeptical of economic solutions to climate change like a carbon tax, since higher taxes tend to be a Democratic policy, but scientific information ought to be non-partisan. Politicians never debate the physics of space travel after all, even if they argue fiercely over the costs and priorities associated with it. That, however, is the power of group thinking; for most conservative Americans, the very idea of climate science has been poisoned by ideologues who seek to advance their economic arguments by denying scientific fact. No additional data — new findings about CO2 feedback loops or better modeling of ice sheet loss — is likely to change their mind.
  • The bright side of all this irrationality is that it means human beings can act in ways that sometimes go against their immediate utility, sacrificing their own interests for the benefit of the group.
  • Our brains develop socially, not just selfishly, which means sustainable behavior — and salvation for the planet — may not be as difficult as it sometimes seems. We can motivate people to help stop climate change — it may just not be climate science that convinces them to act.
Weiye Loh

Climate scientists plan campaign against global warming skeptics - Los Angeles Times - 0 views

  • The still-evolving efforts reveal a shift among climate scientists, many of whom have traditionally stayed out of politics and avoided the news media. Many now say they are willing to go toe-to-toe with their critics, some of whom gained new power after the Republicans won control of the House in Tuesday's election.
  • American Geophysical Union, the country's largest association of climate scientists, plans to announce that 700 climate scientists have agreed to speak out as experts on questions about global warming and the role of man-made air pollution.
  • John Abraham of St. Thomas University in Minnesota, who last May wrote a widely disseminated response to climate change skeptics, is also pulling together a "climate rapid response team," which includes scientists prepared to go before what they consider potentially hostile audiences on conservative talk radio and television shows.
  • "This group feels strongly that science and politics can't be divorced and that we need to take bold measures to not only communicate science but also to aggressively engage the denialists and politicians who attack climate science and its scientists," said Scott Mandia, professor of physical sciences at Suffolk County Community College in New York.
Weiye Loh

Skepticblog » Seismologists Charged with Manslaughter - 0 views

  • On its surface the story is pretty sensational and downright silly: Judge Giuseppe Romano Gargarella said that the seven defendants had supplied “imprecise, incomplete and contradictory information,” in a press conference following a meeting held by the committee 6 days before the quake, reported the Italian daily Corriere della Sera. That may have something to do with the fact that earthquake science is imprecise, incomplete, and often produces contradictory information. The scientists and their colleagues are calling this a witch hunt and warn that it will have a chilling effect on scientists, a very real concern.
  • How should experts be held accountable for their performance? We often call upon experts to give us their expert opinion, and sometimes the stakes are very high. This happens in medicine every day – in any applied science. We cannot fault experts for not being perfect, for not foreseeing the unforeseeable, and for not having crystal balls. We do expect them to be honest and transparent about their uncertainty.
  • We can require that they meet minimal standards of competence.
  • did the top seismologists of Italy commit scientific malpractice in their assessment of the risk of a large quake?
  • Another relevant issue here is the balance between warning the public about credible risks, while not panicking them. In this case the Italian seismologists said, in effect, that the recent tremors were not necessarily sign of a big quake in the near future. There still might not be a big quake for years. But, they warned, a big quake is coming eventually. That sounds like a fair assessment of the science.
  • Apparently, the judge did not like the balance that these scientists struck: The charges filed by the prosecution contends that this assessment “persuaded the victims to stay at home”, La Repubblica newspaper reported. But defense for the scientists claim that they never said anything akin to – there is no risk.
  • scientists, especially a consensus of recognized experts, should be free to express their scientific assessment to the public, without fear of being the target of later litigation (unless they really did commit scientific malpractice).
  • Politicians and regulatory agencies should take their cue from the scientific community, but may want to also add their own spin in order to tweak the balance between reassurance and preparedness.
  • The Italian Government has charged its top seismologists with manslaughter because they failed to predict the devastating 2009 earthquake, which killed 308 people. The scientists, and the seismology community, are stunned - primarily because it's impossible to predict earthquakes.