Group items tagged: Academic Journal (New Media Ethics 2009 course)

Weiye Loh

Why do we care where we publish? - 0 views

  • being both a working scientist and a science writer gives me a unique perspective on science, scientific publications, and the significance of scientific work. The final disclosure should be that I have never published in any of the top rank physics journals or in Science, Nature, or PNAS. I don't believe I have an axe to grind about that, but I am also sure that you can ascribe some of my opinions to PNAS envy.
  • If you asked most scientists what their goals were, the answer would boil down to the generation of new knowledge. But, at some point, science and scientists have to interact with money and administrators, which has significant consequences for science. For instance, when trying to employ someone to do a job, you try to objectively decide if the skills set of the prospective employee matches that required to do the job. In science, the same question has to be asked—instead of being asked once per job interview, however, this question gets asked all the time.
  • Because science requires funding, and no one gets a lifetime dollop-o-cash to explore their favorite corner of the universe. So, the question gets broken down to "how competent is the scientist?" "Is the question they want to answer interesting?" "Do they have the resources to do what they say they will?" We will ignore the last question and focus on the first two.
  • ...17 more annotations...
  • How can we assess the competence of a scientist? Past performance is, realistically, the only way to judge future performance. Past performance can only be assessed by looking at their publications. Were they in a similar area? Are they considered significant? Are they numerous? Curiously, though, the second question is also answered by looking at publications—if a topic is considered significant, then there will be lots of publications in that area, and those publications will be of more general interest, and so end up in higher ranking journals.
  • So we end up in a situation where the editors of major journals are in a position to influence the direction of scientific funding, meaning that there is a huge incentive for everyone to make damn sure that their work ends up in Science or Nature. But why are Science, Nature, and PNAS considered the place to put significant work? Why isn't a new optical phenomenon, published in Optics Express, as important as a new optical phenomenon published in Science?
  • The big three try to be general; they will, in principle, publish reports from any discipline, and they anticipate readership from a range of disciplines. This explicit generality means that the scientific results must not only be of general interest, but also highly significant. The remaining journals become more specialized, covering perhaps only physics, or optics, or even just optical networking. However, they all claim to only publish work that is highly original in nature.
  • Are standards really so different? Naturally, the more specialized a journal is, the fewer people it appeals to. However, the major difference in determining originality is one of degree and referee. A more specialized journal has more detailed articles, so the differences between experiments stand out more obviously, while appealing to general interest changes the emphasis of the article away from details toward broad conclusions.
  • as the audience becomes broader, more technical details get left by the wayside. Note that none of the gene sequences published in Science have the actual experimental and analysis details. What ends up published is really a broad-brush description of the work, with the important details either languishing as supplemental information, or even published elsewhere, in a more suitable journal. Yet, the high profile paper will get all the citations, while the more detailed—the unkind would say accurate—description of the work gets no attention.
  • And that is how journals are ranked. Count the number of citations for each journal per volume, run it through a magic number generator, and the impact factor jumps out (make your checks out to ISI Thomson please). That leaves us with the following formula: grants require high impact publications, high impact publications need citations, and that means putting research in a journal that gets lots of citations. Grants follow the concepts that appear to be currently significant, and that's decided by work that is published in high impact journals.
  • This system would be fine if it did not ignore the fact that performing science and reporting scientific results are two very different skills, and not everyone has both in equal quantity. The difference between a Nature-worthy finding and a not-Nature-worthy finding is often in the quality of the writing. How skillfully can I relate this bit of research back to general or topical interests? It really is this simple. Over the years, I have seen quite a few physics papers with exaggerated claims of significance (or even results) make it into top flight journals, and the only differences I can see between those works and similar works published elsewhere is that the presentation and level of detail are different.
  • articles from the big three are much easier to cover on Nobel Intent than articles from, say, Physical Review D. Nevertheless, when we do cover them, sometimes the researchers suddenly realize that they could have gotten a lot more mileage out of their work. It changes their approach to reporting their results, which I see as evidence that writing skill counts for as much as scientific quality.
  • If that observation is generally true, then it raises questions about the whole process of evaluating a researcher's competence and a field's significance, because good writers corrupt the process by publishing less significant work in journals that only publish significant findings. In fact, I think it goes further than that, because Science, Nature, and PNAS actively promote themselves as scientific compasses. Want to find the most interesting and significant research? Read PNAS.
  • The publishers do this by extensively publicizing science that appears in their own journals. Their news sections primarily summarize work published in the same issue of the same magazine. This lets them create a double-whammy of scientific significance—not only was the work published in Nature, they also summarized it in their News and Views section.
  • Furthermore, the top three work very hard at getting other journalists to cover their articles. This is easy to see by simply looking at Nobel Intent's coverage. Most of the work we discuss comes from Science and Nature. Is this because we only read those two publications? No, but they tell us ahead of time what is interesting in their upcoming issue. They even provide short summaries of many papers that practically guide people through writing the story, meaning reporter Jim at the local daily doesn't need a science degree to cover the science beat.
  • Very few of the other journals do this. I don't get early access to the Physical Review series, even though I love reporting from them. In fact, until this year, they didn't even highlight interesting papers for their own readers. This makes it incredibly hard for a science reporter to cover science outside of the major journals. The knock-on effect is that Applied Physics Letters never appears in the news, which means you can't evaluate recent news coverage to figure out what's of general interest, leaving you with... well, the big three journals again, which mostly report on themselves. On the other hand, if a particular scientific topic does start to receive some press attention, it is much more likely that similar work will suddenly be acceptable in the big three journals.
  • That said, I should point out that judging the significance of scientific work is a process fraught with difficulty. Why do you think it takes around 10 years from the publication of first results through to obtaining a Nobel Prize? Because it can take that long for the implications of the results to sink in—or, more commonly, sink without trace.
  • I don't think that we can reasonably expect journal editors and peer reviewers to accurately assess the significance (general or otherwise) of a new piece of research. There are, of course, exceptions: the first genome sequences, the first observation that the rate of the expansion of the universe is changing. But the point is that these are exceptions, and most work's significance is far more ambiguous, and even goes unrecognized (or over-celebrated) by scientists in the field.
  • The conclusion is that the top three journals are significantly gamed by scientists who are trying to get ahead in their careers—citations always lag a few years behind, so a PNAS paper with fewer than ten citations can look good for quite a few years, even compared to an Optics Letters paper with 50 citations. The top three journals overtly encourage this, because it is to their advantage if everyone agrees that they are the source of the most interesting science. Consequently, scientists who are more honest in self-assessing their work, or who simply aren't word-smiths, end up losing out.
  • scientific competence should not be judged by how many citations the author's work has received or where it was published. Instead, we should consider using a mathematical graph analysis to look at the networks of publications and citations, which should help us judge how central to a field a particular researcher is. This would have the positive influence of a publication mattering less than who thought it was important. (A minimal sketch of such a network analysis appears after this list.)
  • Science and Nature should either eliminate their News and Views section, or implement a policy of not reporting on their own articles. This would open up one of the major sources of "science news for scientists" to stories originating in other journals.
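On the citation-network suggestion two annotations above: here is a minimal sketch of what such a graph analysis could look like, using Python and the NetworkX library. The papers, the paper-to-author mapping, and the choice of PageRank as the centrality measure are illustrative assumptions, not anything specified in the original article.

```python
# Minimal sketch (illustrative data only): judging a researcher's centrality from the
# citation network itself rather than from raw citation counts or journal names.
import networkx as nx

# Directed edge (a, b) means "paper a cites paper b".
citations = [
    ("paper_A", "paper_B"),
    ("paper_A", "paper_C"),
    ("paper_B", "paper_C"),
    ("paper_D", "paper_B"),
    ("paper_D", "paper_C"),
]
G = nx.DiGraph(citations)

# PageRank scores a paper by the scores of the papers citing it, so position in the
# network matters, not just the citation tally.
paper_scores = nx.pagerank(G)

# Hypothetical paper-to-author mapping, used to aggregate a per-researcher score.
authors = {"paper_A": "Smith", "paper_B": "Jones", "paper_C": "Jones", "paper_D": "Smith"}
researcher_scores = {}
for paper, score in paper_scores.items():
    researcher_scores[authors[paper]] = researcher_scores.get(authors[paper], 0.0) + score

for name, score in sorted(researcher_scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```

Whether PageRank, betweenness, or some other measure best captures "how central a researcher is to a field" is exactly the judgment the annotation leaves open.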
Weiye Loh

Real Climate faces libel suit | Environment | guardian.co.uk - 0 views

  • Gavin Schmidt, a climate modeller and Real Climate member based at Nasa's Goddard Institute for Space Studies in New York, has claimed that Energy & Environment (E&E) has "effectively dispensed with substantive peer review for any papers that follow the editor's political line." The journal denies the claim, and, according to Schmidt, has threatened to take further action unless he retracts it.
  • Every paper that is submitted to the journal is vetted by a number of experts, she said. But she did not deny that she allows her political agenda to influence which papers are published in the journal. "I'm not ashamed to say that I deliberately encourage the publication of papers that are sceptical of climate change," said Boehmer-Christiansen, who does not believe in man-made climate change.
  • Simon Singh, a science writer who last year won a major libel battle with the British Chiropractic Association (BCA), said: "A libel threat is potentially catastrophic. It can lead to a journalist going bankrupt or a blogger losing his house. A lot of journalists and scientists will understandably react to the threat of libel by retracting their articles, even if they are confident they are correct. So I'm delighted that Gavin Schmidt is going to stand up for what he has written." During the case with the BCA, Singh also received a libel threat in response to an article he had written about climate change, but Singh stood by what he had written and the threat was not carried through.
  • ...7 more annotations...
  • Schmidt has refused to retract his comments and maintains that the majority of papers published in the journal are "dross". "I would personally not credit any article that was published there with any useful contribution to the science," he told the Guardian. "Saying a paper was published in E&E has become akin to immediately discrediting it." He also describes the journal as a "backwater" of poorly presented and incoherent contributions that "anyone who has done any science can see are fundamentally flawed from the get-go."
  • Schmidt points to an E&E paper that claimed that the Sun is made of iron. "The editor sent it out for review, where it got trashed (as it should have been), and [Boehmer-Christiansen] published it anyway," he says.
  • The journal also published a much-maligned analysis suggesting that levels of the greenhouse gas carbon dioxide could go up and down by 100 parts per million in a year or two, prompting marine biologist Ralph Keeling at the Scripps Institution of Oceanography in La Jolla, California, to write a response to the journal, in which he asked: "Is it really the intent of E&E to provide a forum for laundering pseudo-science?"
  • Schmidt and Keeling are not alone in their criticisms. Roger Pielke Jr, a professor of environmental studies at the University of Colorado, said he regrets publishing a paper in the journal in 2000 – one year after it was established and before he had time to realise that it was about to become a fringe platform for climate sceptics. "[E&E] has published a number of low-quality papers, and the editor's political agenda has clearly undermined the legitimacy of the outlet," Pielke says. "If I had a time machine I'd go back and submit our paper elsewhere."
  • Any paper published in E&E is now ignored by the broader scientific community, according to Pielke. "In some cases perhaps that is justified, but I would argue that it provided a convenient excuse to ignore our paper on that basis alone, and not on the merits of its analysis," he said. In the long run, Pielke is confident that good ideas will win out over bad ideas. "But without care to the legitimacy of our science institutions – including journals and peer review – that long run will be a little longer," he says.
  • she has no intention of changing the way she runs E&E – which is not listed on the ISI Journal Master list, an official list of academic journals – in response to his latest criticisms.
  • Schmidt is unsurprised. "You would need a new editor, new board of advisors, and a scrupulous adherence to real peer review, perhaps ... using an open review process," he said. "But this is very unlikely to happen since their entire raison d'être is political, not scientific."
Weiye Loh

Meet Science: What is "peer review"? - Boing Boing - 0 views

  • Scientists do complain about peer review. But let me set one thing straight: the biggest complaint scientists have about peer review is not that it stifles unpopular ideas. You've heard that truthy factoid from countless climate-change deniers and purveyors of quack medicine, for whom peer review is a convenient scapegoat for conspiracy theories. There's just enough truth to make the claims sound plausible.
  • Peer review is flawed. Peer review can be biased. In fact, really new, unpopular ideas might well have a hard time getting published in the biggest journals right at first. You saw an example of that in my interview with sociologist Harry Collins. But those sorts of findings will often be published by smaller, more obscure journals. And, if a scientist keeps finding more evidence to support her claims, and keeps submitting her work to peer review, more often than not she's going to eventually convince people that she's right. Plenty of scientists, including Harry Collins, have seen their once-shunned ideas published widely.
  • So what do scientists complain about? This shouldn't be too much of a surprise. It's the lack of training, the lack of feedback, the time constraints, and the fact that, the more specific your research gets, the fewer people there are with the expertise to accurately and thoroughly review your work.
  • ...5 more annotations...
  • Scientists are frustrated that most journals don't like to publish research that is solid, but not ground-breaking. They're frustrated that most journals don't like to publish studies where the scientist's hypothesis turned out to be wrong.
  • Some scientists would prefer that peer review not be anonymous—though plenty of others like that feature. Journals like the British Medical Journal have started requiring reviewers to sign their comments, and have produced evidence that this practice doesn't diminish the quality of the reviews.
  • There are also scientists who want to see more crowd-sourced, post-publication review of research papers. Because peer review is flawed, they say, it would be helpful to have centralized places where scientists can go to find critiques of papers, written by scientists other than the official peer-reviewers. Maybe the crowd can catch things the reviewers miss. We certainly saw that happen earlier this year, when microbiologist Rosie Redfield took a high-profile peer-reviewed paper about arsenic-based life to task on her blog. The website Faculty of 1000 is attempting to do something like this. You can go to that site, look up a previously published peer-reviewed paper, and see what other scientists are saying about it. And the Astrophysics Archive has been doing this same basic thing for years.
  • you shouldn't canonize everything a peer-reviewed journal article says just because it is a peer-reviewed journal article.
  • at the same time, being peer reviewed is a sign that the paper's author has done some level of due diligence in their work. Peer review is flawed, but it has value. There are improvements that could be made. But, like the old joke about democracy, peer review is the worst possible system except for every other system we've ever come up with.
  • Being peer reviewed doesn't mean your results are accurate. Not being peer reviewed doesn't mean you're a crank. But the fact that peer review exists does weed out a lot of cranks, simply by saying, "There is a standard." Journals that don't have peer review do tend to be ones with an obvious agenda. White papers, which are not peer reviewed, do tend to contain more bias and self-promotion than peer-reviewed journal articles.
Weiye Loh

Where to find scientific research with negative results - Boing Boing - 0 views

  • Health scientists, and health science reporters, know there's a bias that leads to more published studies showing positive results for treatments. Many of the studies that show negative results are never published, but there are some out there, if you know where to look.
  • If you want to know what treatments don't work Ivan Oransky has three recommendations: Compare registration lists of medical trials to published results; step away from the big name books and read through some lower-ranked peer-reviewed journals; and peruse the delightfully named Journal of Negative Results in Biomedicine.
  • It is interesting that people are thinking more and more about publishing negative results. I've recently discovered The All Results Journals, a new journal focused on publishing negative results, and I think the idea is great. Have a look at their published articles (they are good, believe me) at: http://www.arjournals.com/ojs/index.php?journal=Biol&page=issue&op=current and http://www.arjournals.com/ojs/index.php?journal=Chem&page=issue&op=current Their slogan is also great (in my opinion): All your results are good results! (especially the negative)
Weiye Loh

When big pharma pays a publisher to publish a fake journal... : Respectful Insolence - 0 views

  • pharmaceutical company Merck, Sharp & Dohme paid Elsevier to produce a fake medical journal that, to any superficial examination, looked like a real medical journal but was in reality nothing more than advertising for Merck
  • As reported by The Scientist: Merck paid an undisclosed sum to Elsevier to produce several volumes of a publication that had the look of a peer-reviewed medical journal, but contained only reprinted or summarized articles--most of which presented data favorable to Merck products--that appeared to act solely as marketing tools with no disclosure of company sponsorship. "I've seen no shortage of creativity emanating from the marketing departments of drug companies," Peter Lurie, deputy director of the public health research group at the consumer advocacy nonprofit Public Citizen, said, after reviewing two issues of the publication obtained by The Scientist. "But even for someone as jaded as me, this is a new wrinkle." The Australasian Journal of Bone and Joint Medicine, which was published by Excerpta Medica, a division of scientific publishing juggernaut Elsevier, is not indexed in the MEDLINE database, and has no website (not even a defunct one). The Scientist obtained two issues of the journal: Volume 2, Issues 1 and 2, both dated 2003. The issues contained little in the way of advertisements apart from ads for Fosamax, a Merck drug for osteoporosis, and Vioxx.
  • there are numerous "throwaway" journals out there. "Throwaway" journals tend to be defined as journals that are provided free of charge, have a lot of advertising (a high "advertising-to-text" ratio, as it is often described), and contain no original investigations. Other relevant characteristics include: supported virtually entirely by advertising revenue, with ads placed within article pages, interrupting the articles, rather than between articles as is the case with most medical journals that accept ads; virtually the entire content consisting of reviews of existing work of variable (and often dubious) quality; parasitic, in that throwaways often summarize peer-reviewed research from real journals; questionable (at best) peer review; catering to an uninvolved and uncritical readership; and no original work.
Weiye Loh

Editorial Policies - 0 views

  • More than 60% of the experiments fail to produce results or expected discoveries. From an objective point of view, this high percentage of “failed” research generates high level pieces of knowledge. Generally, all these experiments have not been published anywhere as they have been considered useless for our research target. The objective of “The All Results Journals: Biology” focuses on recovering and publishing these valuable pieces of information in Biology. These key experiments must be considered vital for the development of science. They are the catalyst for a real science-based empirical knowledge.
  • The All Results Journals: Biology is an online journal that publishes research articles after a controlled peer review. All articles will be published, without any barriers to access, immediately upon acceptance.
  • Every single contribution submitted to The All Results Journals and selected for peer review will be sent to at least one reviewer, and usually to two or more independent reviewers selected by the editors, with further reviewers consulted if additional advice is required (e.g., on statistics or on a particular technique). Authors are welcome to suggest suitable independent reviewers and may also request the journal to exclude certain individuals or laboratories.
  • ...1 more annotation...
  • The journal will cover negative (or “secondary”) experiments coming from all disciplines of Biology (Botany, Cell Biology, Genetics, Ecology, Microbiology, etc.). An article in The All Results Journals should be written to show failed experiments in tuning methods or reactions. Articles should present experimental discoveries, interpret their significance and establish perspective with respect to the earlier work of the author. It is also advisable to cite the work where the experiment has already been tuned and published.
Weiye Loh

nanopolitan: Medicine, Trials, Conflict of Interest, Disclosures - 0 views

  • Some 1500 documents revealed in litigation provide unprecedented insights into how pharmaceutical companies promote drugs, including the use of vendors to produce ghostwritten manuscripts and place them into medical journals.
  • Dozens of ghostwritten reviews and commentaries published in medical journals and supplements were used to promote unproven benefits and downplay harms of menopausal hormone therapy (HT), and to cast raloxifene and other competing therapies in a negative light.
  • the pharmaceutical company Wyeth used ghostwritten articles to mitigate the perceived risks of breast cancer associated with HT, to defend the unsupported cardiovascular “benefits” of HT, and to promote off-label, unproven uses of HT such as the prevention of dementia, Parkinson's disease, vision problems, and wrinkles.
  • ...7 more annotations...
  • Given the growing evidence that ghostwriting has been used to promote HT and other highly promoted drugs, the medical profession must take steps to ensure that prescribers renounce participation in ghostwriting, and to ensure that unscrupulous relationships between industry and academia are avoided rather than courted.
  • Twenty-five out of 32 highly paid consultants to medical device companies in 2007, or their publishers, failed to reveal the financial connections in journal articles the following year, according to a [recent] study.
  • The study compared major payments to consultants by orthopedic device companies with financial disclosures the consultants later made in medical journal articles, and found them lacking in public transparency. “We found a massive, dramatic system failure,” said David J. Rothman, a professor and president of the Institute on Medicine as a Profession at Columbia University, who wrote the study with two other Columbia researchers, Susan Chimonas and Zachary Frosch.
  • Carl Elliott in The Chronicle of Higher Education: "The Secret Lives of Big Pharma's 'Thought Leaders'".
  • See also a related NYTimes report -- Menopause, as Brought to You by Big Pharma by Natasha Singer and Duff Wilson -- from December 2009. Duff Wilson reports in the NYTimes: Medical Industry Ties Often Undisclosed in Journals:
  • Pharmaceutical companies hire KOL's [Key Opinion Leaders] to consult for them, to give lectures, to conduct clinical trials, and occasionally to make presentations on their behalf at regulatory meetings or hearings.
  • KOL's do not exactly endorse drugs, at least not in ways that are too obvious, but their opinions can be used to market them—sometimes by word of mouth, but more often by quasi-academic activities, such as grand-rounds lectures, sponsored symposia, or articles in medical journals (which may be ghostwritten by hired medical writers). While pharmaceutical companies seek out high-status KOL's with impressive academic appointments, status is only one determinant of a KOL's influence. Just as important is the fact that a KOL is, at least in theory, independent. [...]
  • Medicine, Trials, Conflict of Interest, Disclosures: just a bunch of links -- mostly from the US -- that paint a troubling picture of the state of ethics in biomedical fields.
Weiye Loh

Roger Pielke Jr.'s Blog: Full Comments to the Guardian - 0 views

  • The Guardian has a good article today on a threatened libel suit under UK law against Gavin Schmidt, a NASA researcher who blogs at Real Climate, by the publishers of the journal Energy and Environment.
  • Here are my full comments to the reporter for the Guardian, who was following up on Gavin's reference to comments I had made a while back about my experiences with E&E:
  • In 2000, we published a really excellent paper (in my opinion) in E&E that has stood the test of time: Pielke, Jr., R. A., R. Klein, and D. Sarewitz (2000), Turning the big knob: An evaluation of the use of energy policy to modulate future climate impacts. Energy and Environment 2:255-276. http://sciencepolicy.colorado.edu/admin/publication_files/resource-250-2000.07.pdf You'll see that paper was in only the second year of the journal, and we were obviously invited to submit a year or so before that. It was our expectation at the time that the journal would soon be ISI listed and it would become like any other academic journal. So why not publish in E&E?
  • ...5 more annotations...
  • That paper, like a lot of research, required a lot of effort. So it was very disappointing to see E&E, in the years that followed, identify itself as an outlet for alternative perspectives on the climate issue. It has published a number of low-quality papers and a high number of opinion pieces, and as far as I know it never did get ISI listed.
  • Boehmer-Christiansen's quote about following her political agenda in running the journal is one that I also have cited on numerous occasions as an example of the pathological politicization of science. In this case the editor's political agenda has clearly undermined the legitimacy of the outlet.  So if I had a time machine I'd go back and submit our paper elsewhere!
  • A consequence of the politicization of E&E is that any paper published there is subsequently ignored by the broader scientific community. In some cases perhaps that is justified, but I would argue that it provided a convenient excuse to ignore our paper on that basis alone, and not on the merits of its analysis. So the politicization of E&E enables a like response from its critics, which many have taken full advantage of. For outside observers of climate science this action and response together give the impression that scientific studies can be evaluated simply according to non-scientific criteria, which ironically undermines all of science, not just E&E.  The politicization of the peer review process is problematic regardless of who is doing the politicization because it more readily allows for political judgments to substitute for judgments of the scientific merit of specific arguments.  An irony here of course is that the East Anglia emails revealed a desire to (and some would say success in) politicize the peer review process, which I discuss in The Climate Fix.
  • For my part, in 2007 I published a follow on paper to the 2000 E&E paper that applied and extended a similar methodology.  This paper passed peer review in the Philosophical Transactions of the Royal Society: Pielke, Jr., R. A. (2007), Future economic damage from tropical cyclones: sensitivities to societal and climate changes. Philosophical Transactions of the Royal Society A 365 (1860) 2717-2729 http://sciencepolicy.colorado.edu/admin/publication_files/resource-2517-2007.14.pdf
  • Over the long run I am confident that good ideas will win out over bad ideas, but without care to the legitimacy of our science institutions -- including journals and peer review -- that long run will be a little longer.
Weiye Loh

How drug companies' PR tactics skew the presentation of medical research | Science | gu... - 0 views

  • Drug companies exert this hold on knowledge through publication planning agencies, an obscure subsection of the pharmaceutical industry that has ballooned in size in recent years, and is now a key lever in the commercial machinery that gets drugs sold. The planning companies are paid to implement high-impact publication strategies for specific drugs. They target the most influential academics to act as authors, draft the articles, and ensure that these include clearly-defined branding messages and appear in the most prestigious journals.
  • In selling their services to drug companies, the agencies explain their work in frank language. Current Medical Directions, a medical communications company based in New York, promises to create "scientific content in support of our clients' messages". A rival firm from Macclesfield, Complete HealthVizion, describes what it does as "a fusion of evidence and inspiration."
  • There are now at least 250 different companies engaged in the business of planning clinical publications for the pharmaceutical industry, according to the International Society for Medical Publication Professionals, which said it has over 1,000 individual members. Many firms are based in the UK and on the east coast of the United States, in traditional "pharma" centres like Pennsylvania and New Jersey. Precise figures are hard to pin down because publication planning is widely dispersed and is only beginning to be recognized as something like a discrete profession.
  • ...6 more annotations...
  • the standard approach to article preparation is for planners to work hand-in-glove with drug companies to create a first draft. "Key messages" laid out by the drug companies are accommodated to the extent that they can be supported by available data. Planners combine scientific information about a drug with two kinds of message that help create a "drug narrative". "Environmental" messages are intended to forge the sense of a gap in available medicine within a specific clinical field, while "product" messages show how the new drug meets this need.
  • In a flow-chart drawn up by Eric Crown, publications manager at Merck (the company that sold the controversial painkiller Vioxx), the determination of authorship appears as the fourth stage of the article preparation procedure. That is, only after company employees have presented clinical study data, discussed the findings, finalised "tactical plans" and identified where the article should be published. Perhaps surprisingly to the casual observer, under guidelines tightened up in recent years by the International Committee of Medical Journal Editors (ICMJE), Crown's approach, typical among pharmaceutical companies, does not constitute ghostwriting.
  • What publication planners understand by the term is precise but it is also quite distinct from the popular interpretation.
  • "We may have written a paper, but the people we work with have to have some input and approve it."
  • "I feel that we're doing something good for mankind in the long-run," said Kimberly Goldin, head of the International Society for Medical Publication Professionals (ISMPP). "We want to influence healthcare in a very positive, scientifically sound way.""The profession grew out of a marketing umbrella, but has moved under the science umbrella," she said.But without the window of court documents to show how publication planning is being carried out today, the public simply cannot know if reforms the industry says it has made are genuine.
  • Dr Leemon McHenry, a medical ethicist at California State University, says nothing has changed. "They've just found more clever ways of concealing their activities. There's a whole army of hidden scribes. It's an epistemological morass where you can't trust anything."Alastair Matheson is a British medical writer who has worked extensively for medical communication agencies. He dismisses the planners' claims to having reformed as "bullshit"."The new guidelines work very nicely to permit the current system to continue as it has been", he said. "The whole thing is a big lie. They are promoting a product."
Weiye Loh

In the Dock, in Paris « EJIL: Talk! - 0 views

  • My entire professional life has been in the law, but nothing had prepared me for this. I have been a tenured faculty member at the finest institutions, most recently Harvard and NYU. I have held visiting appointments from Florence to Singapore, from Melbourne to Jerusalem. I have acted as legal counsel to governments on four continents, handled cases before the highest jurisdictions and arbitrated the most complex disputes among economic ‘super powers.’
  • Last week, for the first  time I found myself  in the dock, as a criminal defendant. The French Republic v Weiler on a charge of Criminal Defamation.
  • As Editor-in-Chief of the European Journal of International Law and its associated Book Reviewing website, I commissioned and then published a review of a book on the International Criminal Court. It was not a particularly favorable review. You may see all details here.  The author of the book, claiming defamation, demanded I remove it. I examined carefully the claim and concluded that the accusation was fanciful. Unflattering? Yes. Defamatory, by no stretch of imagination. It was my ‘Voltairian’ moment. I refused the request. I did offer to publish a reply by the author. This offer was declined.
  • ...6 more annotations...
  • Three months later I was summoned to appear before an Examining Magistrate in Paris based on a complaint of criminal defamation lodged by the author. Why Paris you might ask? Indeed. The author of the book was an Israeli academic. The book was in English. The publisher was Dutch. The reviewer was a distinguished German professor. The review was published on a New York website.
  • Beyond doubt, once a text or an image goes online, it becomes available worldwide, including in France. But should that alone give jurisdiction to French courts in circumstances such as this? Does the fact that the author of the book, it turned out, retained her French nationality before going to live and work in Israel make a difference? Libel tourism – libel terrorism to some – is typically associated with London, where notoriously high legal fees and punitive damages coerce many to throw in the towel even before going to trial. Paris, as we would expect, is more egalitarian and less materialist. It is very plaintiff friendly.
  • In France an attack on one’s honor is taken as seriously as a bodily attack. Substantively, if someone is defamed, the bad faith of the defamer is presumed just as in our system, if someone slaps you in the face, it will be assumed that he intended to do so. Procedurally it is open to anyone who feels defamed, to avoid the costly civil route, and simply lodge a criminal complaint.  At this point the machinery of the State swings into action. For the defendant it is not without cost, I discovered. Even if I win I will not recover my considerable legal expenses and conviction results in a fine the size of which may depend on one’s income (the egalitarian reflex at its best). But money is not the principal currency here. It is honor and shame. If I lose, I will stand convicted of a crime, branded a criminal. The complainant will not enjoy a windfall as in London, but considerable moral satisfaction. The chilling effect on book reviewing well beyond France will be considerable.
  • The case was otiose for two reasons: It was in our view an egregious instance of ‘forum shopping,’ legalese for libel tourism. We wanted it thrown out. But if successful, the Court would never get to the merits –  and it was important to challenge this hugely dangerous attack on academic freedom and liberty of expression. Reversing custom, we specifically asked the Court not to examine our jurisdictional challenge as a preliminary matter but to join it to the case on the merits so that it would have the possibility to pronounce on both issues.
  • The trial was impeccable by any standard with which I am familiar. The Court comprised three judges specialized in defamation, plus the Public Prosecutor. Being a criminal case within the Inquisitorial System, the case began with my interrogation by the President of the Court. I was essentially asked to explain the reasons for refusing to remove the article. The President was patient with my French – fluent but bad! I was then interrogated by the other judges, the Public Prosecutor and the lawyers for the complainant. The complainant was then subjected to the same procedure, after which the lawyers made their (passionate) legal arguments. The Public Prosecutor then expressed her Opinion to the Court. I was allowed the last word. It was a strange mélange of the criminal and civil virtually unknown in the Common Law world. The procedure was less formal, aimed at establishing the truth, and far less hemmed in by rules of evidence and procedure. Due process was definitely served. It was a fair trial.
  • we steadfastly refused to engage the complainant's challenges to the veracity of the critical statements made by the reviewer. The thrust of our argument was that absent bad faith and malice, so long as the review in question addressed the book and did not make false statements about the author, such as an accusation of plagiarism, it should be shielded from libel claims, let alone criminal libel. Sorting out the truth should be left to academic discourse, even if academic discourse has its own biases and imperfections.
Weiye Loh

Lies, damned lies, and impact factors - The Dayside - 0 views

  • a journal's impact factor for a given year is the average number of citations received by papers published in the journal during the two preceding years. Letters to the editor, editorials, book reviews, and other non-papers are excluded from the impact factor calculation.
  • Review papers that don't necessarily contain new scientific knowledge yet provide useful overviews garner lots of citations. Five of the top 10 perennially highest-impact-factor journals, including the top four, are review journals.
  • Now suppose you're a journal editor or publisher. In these tough financial times, cash-strapped libraries use impact factors to determine which subscriptions to keep and which to cancel. How would you raise your journal's impact factor? Publishing fewer and better papers is one method. Or you could run more review articles. But, as a paper posted recently on arXiv describes, there's another option: You can manipulate the impact factor by publishing your own papers that cite your own journal. (A worked sketch of this arithmetic appears at the end of this entry.)
  • ...1 more annotation...
  • Douglas Arnold and Kristine Fowler. "Nefarious Numbers" is the title they chose for the paper. Its abstract reads as follows: We investigate the journal impact factor, focusing on the applied mathematics category. We demonstrate that significant manipulation of the impact factor is being carried out by the editors of some journals and that the impact factor gives a very inaccurate view of journal quality, which is poorly correlated with expert opinion.
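A worked sketch of the two-year impact-factor arithmetic described in the annotations above, using made-up counts; it also shows how the self-citation tactic flagged in "Nefarious Numbers" inflates the figure. The specific numbers are illustrative assumptions only.

```python
# Minimal sketch of the two-year impact factor described above (made-up numbers).
# IF(2011) = citations received in 2011 by items published in 2009-2010,
#            divided by the number of citable items published in 2009-2010.
citable_items_2009_2010 = 200     # research and review articles; letters, editorials etc. excluded
citations_received_2011 = 300     # citations from all journals, including the journal itself

impact_factor = citations_received_2011 / citable_items_2009_2010
print(f"Impact factor: {impact_factor:.2f}")            # 1.50

# The manipulation "Nefarious Numbers" documents: publish material in your own journal
# that cites your own recent papers. The numerator grows; the denominator need not.
extra_self_citations = 100
inflated = (citations_received_2011 + extra_self_citations) / citable_items_2009_2010
print(f"With extra self-citations: {inflated:.2f}")     # 2.00
```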
Weiye Loh

RealClimate: E&E threatens a libel suit - 0 views

  • From: Bill Hughes Cc: Sonja Boehmer-Christiansen Subject:: E&E libel Date: 02/18/11 10:48:01 Gavin, your comment about Energy & Environment which you made on RealClimate has been brought to my attention: “The evidence for this is in precisely what happens in venues like E&E that have effectively dispensed with substantive peer review for any papers that follow the editor’s political line. ” To assert, without knowing, as you cannot possibly know, not being connected with the journal yourself, that an academic journal does not bother with peer review, is a terribly damaging charge, and one I’m really quite surprised that you’re prepared to make. And to further assert that peer review is abandoned precisely in order to let the editor publish papers which support her political position, is even more damaging, not to mention being completely ridiculous. At the moment, I’m prepared to settle merely for a retraction posted on RealClimate. I’m quite happy to work with you to find a mutually satisfactory form of words: I appreciate you might find it difficult. I look forward to hearing from you. With best wishes Bill Hughes Director Multi-Science Publsihing [sic] Co Ltd
  • The comment in question was made in the post “From blog to Science”
  • The point being that if the ‘peer-review’ bar gets lowered, the result is worse submissions, less impact and a declining reputation. Something that fits E&E in spades. This conclusion is based on multiple years of evidence of shoddy peer-review at E&E and, obviously, on the statements of the editor, Sonja Boehmer-Christiansen. She was quoted by Richard Monastersky in the Chronicle of Higher Education (3 Sep 2003) in the wake of the Soon and Baliunas fiasco: The journal’s editor, Sonja Boehmer-Christiansen, a reader in geography at the University of Hull, in England, says she sometimes publishes scientific papers challenging the view that global warming is a problem, because that position is often stifled in other outlets. “I’m following my political agenda — a bit, anyway,” she says. “But isn’t that the right of the editor?”
  • ...4 more annotations...
  • the claim that ‘an editor publishes papers based on her political position’, while certainly ‘terribly damaging’ to the journal’s reputation, is, unfortunately, far from ridiculous.
  • Other people have investigated the peer-review practices of E&E and found them wanting. Greenfyre, dissecting a list of supposedly ‘peer-reviewed’ papers from E&E found that: A given paper in E&E may have been peer reviewed (but unlikely). If it was, the review process might have been up to the normal standards for science (but unlikely). Hence E&E’s exclusion from the ISI Journal Master list, and why many (including Scopus) do not consider E&E a peer reviewed journal at all. Further, even the editor states that it is not a science journal and that it is politically motivated/influenced. Finally, at least some of what it publishes is just plain loony.
  • Also, see comments from John Hunter and John Lynch. Nexus6 claimed to have found the worst climate paper ever published in its pages, and that one doesn't even appear to have been proof-read (a little like Bill's email). A one-time author, Roger Pielke Jr, said "…had we known then how that outlet would evolve beyond 1999 we certainly wouldn't have published there", and Ralph Keeling once asked, "Is it really the intent of E&E to provide a forum for laundering pseudo-science?". We report, you decide.
  • We are not surprised to find that Bill Hughes (the publisher) is concerned about his journal’s evidently appalling reputation. However, perhaps the way to fix that is to start applying a higher level of quality control rather than by threatening libel suits against people who publicly point out the problems?
Weiye Loh

Open science: a future shaped by shared experience | Education | The Observer - 0 views

  • one day he took one of these – finding a mathematical proof about the properties of multidimensional objects – and put his thoughts on his blog. How would other people go about solving this conundrum? Would somebody else have any useful insights? Would mathematicians, notoriously competitive, be prepared to collaborate? "It was an experiment," he admits. "I thought it would be interesting to try."He called it the Polymath Project and it rapidly took on a life of its own. Within days, readers, including high-ranking academics, had chipped in vital pieces of information or new ideas. In just a few weeks, the number of contributors had reached more than 40 and a result was on the horizon. Since then, the joint effort has led to several papers published in journals under the collective pseudonym DHJ Polymath. It was an astonishing and unexpected result.
  • "If you set out to solve a problem, there's no guarantee you will succeed," says Gowers. "But different people have different aptitudes and they know different tricks… it turned out their combined efforts can be much quicker."
  • There are many interpretations of what open science means, with different motivations across different disciplines. Some are driven by the backlash against corporate-funded science, with its profit-driven research agenda. Others are internet radicals who take the "information wants to be free" slogan literally. Others want to make important discoveries more likely to happen. But for all their differences, the ambition remains roughly the same: to try and revolutionise the way research is performed by unlocking it and making it more public.
  • ...10 more annotations...
  • Jackson is a young bioscientist who, like many others, has discovered that the technologies used in genetics and molecular biology, once the preserve of only the most well-funded labs, are now cheap enough to allow experimental work to take place in their garages. For many, this means that they can conduct genetic experiments in a new way, adopting the so-called "hacker ethic" – the desire to tinker, deconstruct, rebuild.
  • The rise of this group is entertainingly documented in a new book by science writer Marcus Wohlsen, Biopunk (Current £18.99), which describes the parallels between today's generation of biological innovators and the rise of computer software pioneers of the 1980s and 1990s. Indeed, Bill Gates has said that if he were a teenager today, he would be working on biotechnology, not computer software.
  • open scientists suggest that it doesn't have to be that way. Their arguments are propelled by a number of different factors that are making transparency more viable than ever. The first and most powerful change has been the use of the web to connect people and collect information. The internet, now an indelible part of our lives, allows like-minded individuals to seek one another out and share vast amounts of raw data. Researchers can lay claim to an idea not by publishing first in a journal (a process that can take many months) but by sharing their work online in an instant. And while the rapidly decreasing cost of previously expensive technical procedures has opened up new directions for research, there is also increasing pressure for researchers to cut costs and deliver results. The economic crisis left many budgets in tatters and governments around the world are cutting back on investment in science as they try to balance the books. Open science can, sometimes, make the process faster and cheaper, showing what one advocate, Cameron Neylon, calls "an obligation and responsibility to the public purse".
  • "The litmus test of openness is whether you can have access to the data," says Dr Rufus Pollock, a co-founder of the Open Knowledge Foundation, a group that promotes broader access to information and data. "If you have access to the data, then anyone can get it, use it, reuse it and redistribute it… we've always built on the work of others, stood on the shoulders of giants and learned from those who have gone before."
  • moves are afoot to disrupt the closed world of academic journals and make high-level teaching materials available to the public. The Public Library of Science, based in San Francisco, is working to make journals more freely accessible
  • it's more than just politics at stake – it's also a fundamental right to share knowledge, rather than hide it. The best example of open science in action, he suggests, is the Human Genome Project, which successfully mapped our DNA and then made the data public. In doing so, it outflanked J Craig Venter's proprietary attempt to patent the human genome, opening up the very essence of human life for science, rather than handing our biological information over to corporate interests.
  • the rise of open science does not please everyone. Critics have argued that while it benefits those at either end of the scientific chain – the well-established at the top of the academic tree or the outsiders who have nothing to lose – it hurts those in the middle. Most professional scientists rely on the current system for funding and reputation. Others suggest it is throwing out some of the most important elements of science and making deep, long-term research more difficult.
  • Open science proponents say that they do not want to make the current system a thing of the past, but that it shouldn't be seen as immutable either. In fact, they say, the way most people conceive of science – as a highly specialised academic discipline conducted by white-coated professionals in universities or commercial laboratories – is a very modern construction. It is only over the last century that scientific disciplines became industrialised and compartmentalised.
  • open scientists say they don't want to throw scientists to the wolves: they just want to help answer questions that, in many cases, are seen as insurmountable.
  • "Some people, very straightforwardly, said that they didn't like the idea because it undermined the concept of the romantic, lone genius." Even the most dedicated open scientists understand that appeal. "I do plan to keep going at them," he says of collaborative projects. "But I haven't given up on solitary thinking about problems entirely."
Weiye Loh

Times Higher Education - Unconventional thinkers or recklessly dangerous minds? - 0 views

  • The origin of Aids denialism lies with one man. Peter Duesberg has spent the whole of his academic career at the University of California, Berkeley. In the 1970s he performed groundbreaking work that helped show how mutated genes cause cancer, an insight that earned him a well-deserved international reputation.
  • in the early 1980s, something changed. Duesberg attempted to refute his own theories, claiming that it was not mutated genes but rather environmental toxins that are cancer's true cause. He dismissed the studies of other researchers who had furthered his original work. Then, in 1987, he published a paper that extended his new train of thought to Aids.
  • Initially many scientists were open to Duesberg's ideas. But as evidence linking HIV to Aids mounted - crucially the observation that ARVs brought Aids sufferers who were on the brink of death back to life - the vast majority concluded that the debate was over. Nonetheless, Duesberg persisted with his arguments, and in doing so attracted a cabal of supporters
  • ...12 more annotations...
  • In 1999, denialism secured its highest-profile advocate: Thabo Mbeki, who was then president of South Africa. Having studied denialist literature, Mbeki decided that the consensus on Aids sounded too much like a "biblical absolute truth" that couldn't be questioned. The following year he set up a panel of advisers, nearly half of whom were Aids denialists, including Duesberg. The resultant health policies cut funding for clinics distributing ARVs, withheld donor medication and blocked international aid grants. Meanwhile, Mbeki's health minister, Manto Tshabalala-Msimang, promoted the use of alternative Aids remedies, such as beetroot and garlic.
  • In 2007, Nicoli Nattrass, an economist and director of the Aids and Society Research Unit at the University of Cape Town, estimated that, between 1999 and 2007, Mbeki's Aids denialist policies led to more than 340,000 premature deaths. Later, scientists Max Essex, Pride Chigwedere and other colleagues at the Harvard School of Public Health arrived at a similar figure.
  • "I don't think it's hyperbole to say the (Mbeki regime's) Aids policies do not fall short of a crime against humanity," says Kalichman. "The science behind these medications was irrefutable, and yet they chose to buy into pseudoscience and withhold life-prolonging, if not life-saving, medications from the population. I just don't think there's any question that it should be looked into and investigated."
  • In fairness, there was a reason to have faint doubts about HIV treatment in the early days of Mbeki's rule.
  • some individual cases had raised questions about their reliability on mass rollout. In 2002, for example, Sarah Hlalele, a South African HIV patient and activist from a settlement background, died from "lactic acidosis", a side-effect of her drugs combination. Today doctors know enough about mixing ARVs not to make the same mistake, but at the time her death terrified the medical community.
  • any trial would be futile because of the uncertainties over ARVs that existed during Mbeki's tenure and the fact that others in Mbeki's government went along with his views (although they have since renounced them). "Mbeki was wrong, but propositions we had established then weren't as incontestably established as they are now ... So I think these calls (for genocide charges or criminal trials) are misguided, and I think they're a sideshow, and I don't support them."
  • Regardless of the culpability of politicians, the question remains whether scientists themselves should be allowed to promote views that go wildly against the mainstream consensus. The history of science is littered with offbeat ideas that were ridiculed by the scientific communities of the time. Most of these ideas missed the textbooks and went straight into the waste-paper basket, but a few - continental drift, the germ basis of disease or the Earth's orbit around the Sun, for instance - ultimately proved to be worth more than the paper they were written on. In science, many would argue, freedom of expression is too important to throw away.
  • Such an issue is engulfing the Elsevier journal Medical Hypotheses. Last year the journal, which is not peer reviewed, published a paper by Duesberg and others claiming that the South African Aids death-toll estimates were inflated, while reiterating the argument that there is "no proof that HIV causes Aids". That prompted several Aids scientists to complain to Elsevier, which responded by retracting the paper and asking the journal's editor, Bruce Charlton, to implement a system of peer review. Having refused to change the editorial policy, Charlton faces the sack.
  • There are people who would like the journal to keep its current format and continue accepting controversial papers, but for Aids scientists, Duesberg's paper was a step too far. Although it was deleted from both the journal's website and the Medline database, its existence elsewhere on the internet drove Chigwedere and Essex to publish a peer-reviewed rebuttal earlier this year in AIDS and Behavior, lest any readers be "hoodwinked" into thinking there was genuine debate about the causes of Aids.
  • Duesberg believes he is being "censored", although he has found other outlets. In 1991, he helped form "The Group for the Scientific Reappraisal of the HIV/Aids Hypothesis" - now called Rethinking Aids, or simply The Group - to publicise denialist information. Backed by his Berkeley credentials, he regularly promotes his views in media articles and films. Meanwhile, his closest collaborator, David Rasnick, tells "anyone who asks" that "HIV drugs do more harm than good".
  • "Is academic freedom such a precious concept that scientists can hide behind it while betraying the public so blatantly?" asked John Moore, an Aids scientist at Cornell University, on a South African health news website last year. Moore suggested that universities could put in place a "post-tenure review" system to ensure that their researchers act within accepted bounds of scientific practice. "When the facts are so solidly against views that kill people, there must be a price to pay," he added.
  • Now it seems Duesberg may have to pay that price since it emerged last month that his withdrawn paper has led to an investigation at Berkeley for misconduct. Yet for many in the field, chasing fellow scientists comes second to dealing with the Aids pandemic.
  •  
    6 May 2010 Aids denialism is estimated to have killed many thousands. Jon Cartwright asks if scientists should be held accountable, while overleaf Bruce Charlton defends his decision to publish the work of an Aids sceptic, which sparked a row that has led to his being sacked and his journal abandoning its raison d'être: presenting controversial ideas for scientific debate.
Weiye Loh

Roger Pielke Jr.'s Blog: Science Impact - 0 views

  • The Guardian has a blog post up by three neuroscientists decrying the state of hype in the media related to their field, which is fueled in part by their colleagues seeking "impact." 
  • Anyone who has followed recent media reports that electrical brain stimulation "sparks bright ideas" or "unshackles the genius within" could be forgiven for believing that we stand on the frontier of a brave new world. As James Gallagher of the BBC put it, "Are we entering the era of the thinking cap – a device to supercharge our brains?" The answer, we would suggest, is a categorical no. Such speculations begin and end in the colourful realm of science fiction. But we are also in danger of entering the era of the "neuro-myth", where neuroscientists sensationalise and distort their own findings in the name of publicity. The tendency for scientists to over-egg the cake when dealing with the media is nothing new, but recent examples are striking in their disregard for accurate reporting to the public. We believe the media and academic community share a collective responsibility to prevent pseudoscience from masquerading as neuroscience.
  • They identify an ... unacceptable gulf between, on the one hand, the evidence-bound conclusions reached in peer-reviewed scientific journals, and on the other, the heavy spin applied by scientists to achieve publicity in the media. Are we as neuroscientists so unskilled at communicating with the public, or so low in our estimation of the public's intelligence, that we see no alternative but to mislead and exaggerate?
  • ...1 more annotation...
  • Somewhere down the line, achieving an impact in the media seems to have become the goal in itself, rather than what it should be: a way to inform and engage the public with clarity and objectivity, without bias or prejudice. Our obsession with impact is not one-sided. The craving of scientists for publicity is fuelled by a hurried and unquestioning media, an academic community that disproportionately rewards publication in "high impact" journals such as Nature, and by research councils that emphasise the importance of achieving "impact" while at the same time delivering funding cuts. Academics are now pushed to attend media training courses, instructed about "pathways to impact", required to include detailed "impact summaries" when applying for grant funding, and constantly reminded about the importance of media engagement to further their careers. Yet where in all of this strategising and careerism is it made clear why public engagement is important? Where is it emphasised that the most crucial consideration in our interactions with the media is that we are accurate, honest and open about the limitations of our research?
Weiye Loh

U. of California Tries Just Saying No to Rising Journal Costs - Research - The Chronicl... - 0 views

  • Nature proposed to raise the cost of California's license for its journals by 400 percent next year. If the publisher won't negotiate, the letter said, the system may have to take "more drastic actions" with the help of the faculty. Those actions could include suspending subscriptions to all of the Nature Group journals the California system buys access to—67 in all, including Nature.
  • faculty would also organize "a systemwide boycott" of Nature's journals if the publisher does not relent. The voluntary boycott would "strongly encourage" researchers not to contribute papers to those journals or review manuscripts for them. It would urge them to resign from Nature's editorial boards and to encourage similar "sympathy actions" among colleagues outside the University of California system.
Weiye Loh

School children publish science project in peer reviewed academic journal « E... - 0 views

  • A group of school children aged between 8 and 10 years old have had their school science project accepted for publication in an internationally recognised peer-reviewed journal. The paper, which reports novel findings in how bumblebees perceive colour, is published in the Royal Society journal Biology Letters.
Weiye Loh

takchek (读书 ): How Nature selects manuscripts for publication - 0 views

  • the explanation's pretty weak on the statistics given that it is a scientific journal. Drug Monkey and writedit have more commentary on this particular editorial.
  • Good science, bad science, and whether it will lead to publication or not all rests on the decision of the editor. The gatekeeper.
  • do you know that Watson and Crick's landmark 1953 paper on the structure of DNA in the journal was not sent out for peer review at all? The reasons, as stated by Nature's Emeritus Editor John Maddox, were: First, the Crick and Watson paper could not have been refereed: its correctness is self-evident. No referee working in the field (Linus Pauling?) could have kept his mouth shut once he saw the structure. Second, it would have been entirely consistent with my predecessor L. J. F. Brimble's way of working that Bragg's commendation should have counted as a referee's approval.
  • ...1 more annotation...
  • The whole business of scientific publishing is murky and sometimes who you know counts more than what you know in order to get your foot into the 'club'. Even Maddox alluded to the existence of such an 'exclusive' club: Brimble, who used to "take luncheon" at the Athenaeum in London most days, preferred to carry a bundle of manuscripts with him in the pocket of his greatcoat and pass them round among his chums "taking coffee" in the drawing-room after lunch. I set up a more systematic way of doing the job when I became editor in April 1966.
  •  
    How Nature selects manuscripts for publication: Nature actually devoted an editorial (doi:10.1038/463850a) explaining its publication process.
Weiye Loh

The Decline Effect and the Scientific Method : The New Yorker - 0 views

  • On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties.
  • the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.
  • Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • ...30 more annotations...
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
  • In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”
  • At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”
  • the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!” (A simulation sketch of this selection effect appears after these annotations.)
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.
  • the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. (A second sketch after these annotations shows how a significance filter alone produces exactly this skew.)
  • Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
  • scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.)
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
  • Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.
  • The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
  • This suggests that the decline effect is actually a decline of illusion.
  • While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
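
To make Schooler's regression-to-the-mean point above concrete, the short simulation below is a minimal sketch, not taken from the article: the true effect size, the noise level, and the "only follow up impressive results" threshold are all invented for illustration. It shows that when findings are pursued only because their first noisy estimate looked large, replications of those same findings will on average look weaker, even though nothing real has declined.

# A minimal sketch (not from the article) of how regression to the mean can
# masquerade as a "decline effect". All parameters below are invented for
# illustration.
import random

random.seed(1)

TRUE_EFFECT = 0.2      # the real, unchanging effect size
NOISE_SD = 0.3         # sampling noise in any single study
N_QUESTIONS = 100_000  # hypothetical independent research questions

def run_study():
    """Return one noisy estimate of the same underlying effect."""
    return random.gauss(TRUE_EFFECT, NOISE_SD)

first_estimates = []   # initial results that looked impressive
replications = []      # follow-up studies of those same findings

for _ in range(N_QUESTIONS):
    first = run_study()
    if first > 0.5:                    # only "exciting" results get pursued
        first_estimates.append(first)
        replications.append(run_study())

mean = lambda xs: sum(xs) / len(xs)
print(f"true effect:                  {TRUE_EFFECT:.2f}")
print(f"mean of selected first runs:  {mean(first_estimates):.2f}")
print(f"mean of their replications:   {mean(replications):.2f}")
# Typical output: the selected first runs average roughly 0.65, while their
# replications average close to the true 0.2; the apparent decline is just
# the selection filter wearing off.

The harsher the filter on the first result, the steeper the apparent decline on replication, which is consistent with Schooler's observation that his strongest early findings faded the most.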
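
The selective-reporting and funnel-graph passages can be illustrated the same way. The sketch below is again only a hedged illustration, not Palmer's actual analysis: the true effect is set to exactly zero, the sample sizes are arbitrary, and the publication rule (keep only positive results that clear the conventional 5% significance threshold) is an assumption. Under that rule the "published" record shows effects anyway, and the smallest studies show the largest ones, which is precisely the asymmetry a funnel plot makes visible.

# A minimal sketch (not Palmer's actual method) of the skew that selective
# reporting leaves in a funnel plot. Sample sizes and the publication rule
# are assumptions made for illustration.
import math
import random

random.seed(2)

TRUE_EFFECT = 0.0   # there is no real effect at all
SIGMA = 1.0         # per-observation standard deviation

def simulate_study(n):
    """Return (estimated effect, standard error) for a study with n subjects."""
    se = SIGMA / math.sqrt(n)
    return random.gauss(TRUE_EFFECT, se), se

SAMPLE_SIZES = [10, 20, 50, 100, 400, 1000]
published = []  # (sample size, estimate) pairs that survive the filter

for _ in range(20_000):
    n = random.choice(SAMPLE_SIZES)
    estimate, se = simulate_study(n)
    if estimate / se > 1.96:        # publish only positive, "significant" results
        published.append((n, estimate))

print("sample size | mean published effect (true effect is 0.0)")
for size in SAMPLE_SIZES:
    effects = [e for n, e in published if n == size]
    if effects:
        avg = sum(effects) / len(effects)
        print(f"{size:>11} | {avg:.3f}")
# Small studies need a big fluke to clear the threshold, so their published
# effects are the most exaggerated; plotting estimate against sample size
# would give a lopsided funnel rather than a symmetric one.

Replacing the filter with "publish everything" makes the asymmetry vanish, which is one reason the kind of open registry of planned studies that Schooler proposes above is attractive.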
Weiye Loh

Do Fights Over Climate Communication Reflect the End of 'Scientism'? - NYTimes.com - 0 views

  • climate (mis)communication. Two sessions explored a focal point of this blog, the interface of climate science and policy, and the roles of scientists and the media in fostering productive discourse. Both discussions homed in on an uncomfortable reality — the erosion of a longstanding presumption that scientific information, if communicated more effectively, will end up framing policy choices.
  • First I sat in on a symposium on the  future of climate communication in a world where traditional science journalism is a shrinking wedge of a growing pie of communication options. The discussion didn’t really provide many answers, but did reveal the persistent frustrations of some scientists with the way the media cover their field.
  • Sparks flew between Kerry Emanuel, a climatologist long focused on hurricanes and warming, and Seth Borenstein, who covers climate and other science for the Associated Press. Borenstein spoke highly of a Boston Globe dual profile of Emanuel and his colleague at the Massachusetts Institute of Technology,  Richard Lindzen. To Emanuel, the piece was a great example of what he described as “he said, he said” coverage of science. Borenstein replied that this particular piece was not centered on the science, but on the men — in the context of their relationship, research and worldviews. (It’s worth noting that Emanuel, whom I’ve been interviewing on hurricanes and climate since 1988, describes himself as  a conservative and, mainly, Republican voter.)
  • ...11 more annotations...
  • Keith Kloor, blogging on the session at Collide-a-Scape, included a sobering assessment of the scientist-journalist tensions over global warming from Tom Rosenstiel, a panelist and long-time journalist who now heads up Pew’s Project for Excellence in Journalism: “If you’re waiting for the press to persuade the public, you’re going to lose. The press doesn’t see that as its job.”
  • scientists have a great opportunity, and responsibility, to tell their own story more directly, as some are doing occasionally through Dot Earth “Post Cards” and The Times’ Scientist at Work blog.
  • Naomi Oreskes, a political scientist at the University of California, San Diego, and co-author of “Merchants of Doubt”: “Of Mavericks and Mules”; Gavin Schmidt of NASA’s Goddard Institute for Space Studies and Realclimate.org: “Between Sound Bites and the Scientific Paper: Communicating in the Hinterland”; Thomas Lessl, a scholar at the University of Georgia focused on the cultural history of science: “Reforming Scientific Communication About Anthropogenic Climate Change”
  • I focused on two words in the title of the session — diversity and denial. The diversity of lines of inquiry in climate science has a two-pronged impact. It helps build a robust overall picture of a growing human influence on a complex system. But for many of the most important  pixel points in that picture, there is robust, durable and un-manufactured debate. That debate can then be exploited by naysayers eager to cast doubt on the enterprise, when in fact — as I’ve written here before — it’s simply the (sometimes ugly) way that science progresses.
  • My denial, I said, lay in my longstanding presumption, like that of many scientists and journalists, that better communication of information will tend to change people’s perceptions, priorities and behavior. This attitude, in my view, crested for climate scientists in the wake of the 2007 report from the Intergovernmental Panel on Climate Change.
  • In his talk, Thomas Lessl said much of this attitude is rooted in what he and some other social science scholars call “scientism,” the idea — rooted in the 19th century — that scientific inquiry is a “distinctive mode of inquiry that promises to bring clarity to all human endeavors.” [5:45 p.m. | Updated Chris Mooney sent an e-mail noting how the discussion below resonates with "Do Scientists Understand the Public," a report he wrote last year for the American Academy of Arts and Sciences and explored here.]
  • Scientism, though it is good at promoting the recognition that scientific knowledge is the only kind of knowledge, also promotes communication behavior that is bad for the scientific ethos. By this I mean that it turns such communication into combat. By presuming that scientific understanding is the only criterion that matters, scientism inclines public actors to treat resistant audiences as an enemy: If the public doesn’t get the science, shame on the public. If the public rejects a scientific claim, it is either because they don’t get it or because they operate upon some sinister motive.
  • Scientific knowledge cannot take the place of prudence in public affairs.
  • Prudence, according to Robert Harriman, “is the mode of reasoning about contingent matters in order to select the best course of action. Contingent events cannot be known with certainty, and actions are intelligible only with regard to some idea of what is good.”
  • Scientism tends to suppose a one-size-fits-all notion of truth telling. But in the public sphere, people don’t think that way. They bring to the table a variety of truth standards: moral judgment, common-sense judgment, a variety of metaphysical perspectives, and ideological frameworks. The scientists who communicate about climate change may regard these standards as wrong-headed or at best irrelevant, but scientists don’t get to decide this in a democratic debate. When scientists become public actors, they have stepped outside of science, and they are obliged to honor the rules of communication and thought that govern the rest of the world. This might be different, if climate change was just about determining the causes of climate change, but it never is. Getting from the acceptance of ACC to acceptance of the kinds of emissions-reducing policies that are being advocated takes us from one domain of knowing into another.
  • One might object by saying that the formation of public policy depends upon first establishing the scientific bases of ACC, and that the first question can be considered independently of the second. Of course that is right, but that is an abstract academic distinction that does not hold in public debates. In public debates a different set of norms and assumptions apply: motive is not to be casually set aside as a nonfactor. Just because scientists customarily bracket off scientific topics from their policy implications does not mean that lay people do this—or even that they should be compelled to do so. When scientists talk about one thing, they seem to imply the other. But which is the motive force? Are they advocating for ACC because they subscribe to a political worldview that supports legal curtailments upon free enterprise? Or do they support such a political worldview because they are convinced of ACC? The fact that they speak as scientists may mean to other scientists that they reason from evidence alone. But the public does not necessarily share this assumption. If scientists don’t respect this fact about their audiences, they are bound to get in trouble. [Read the rest.]