New Media Ethics 2009 course: Group items tagged Public


Weiye Loh

True Enough: CJR

  • The dangers are clear. As PR becomes ascendant, private and government interests become more able to generate, filter, distort, and dominate the public debate, and to do so without the public knowing it. “What we are seeing now is the demise of journalism at the same time we have an increasing level of public relations and propaganda,” McChesney said. “We are entering a zone that has never been seen before in this country.”
  • Michael Schudson, a journalism professor at Columbia University, cjr contributor, and author of Discovering the News, said modern public relations started when Ivy Lee, a minister’s son and a former reporter at the New York World, tipped reporters to an accident on the Pennsylvania Railroad. Before then, railroads had done everything they could to cover up accidents. But Lee figured that crashes, which tend to leave visible wreckage, were hard to hide. So it was better to get out in front of the inevitable story. The press release was born. Schudson said the rise of the “publicity agent” created deep concern among the nation’s leaders, who distrusted a middleman inserting itself and shaping messages between government and the public. Congress was so concerned that it attached amendments to bills in 1908 and 1913 that said no money could be appropriated for preparing newspaper articles or hiring publicity agents.
  • But World War I pushed those concerns to the side. The government needed to rally the public behind a deeply unpopular war. Suddenly, publicity agents did not seem so bad.
  • “After the war, PR becomes a very big deal,” Schudson said. “It was partly stimulated by the war and the idea of journalists and others being employed by the government as propagandists.” Many who worked for the massive wartime propaganda apparatus found an easy transition into civilian life.
  • People “became more conscious that they were not getting direct access, that it was being screened for them by somebody else,” Schudson said. But there was no turning back. PR had become a fixture of public life. Concern about the invisible filter of public relations became a steady drumbeat in the press.
  • When public relations began its ascent in the early twentieth century, journalism was rising alongside it. The period saw the ferocious work of the muckrakers, the development of the great newspaper chains, and the dawn of radio and, later, television. Journalism of the day was not perfect; sometimes it was not even good. But it was an era of expansion that eventually led to the powerful press of the mid to late century.
  • Now, during a second rise of public relations, we are in an era of massive contraction in traditional journalism. Bureaus have closed, thousands of reporters have been laid off, once-great newspapers like the Rocky Mountain News have died. The Pew Center took a look at the impact of these changes last year in a study of the Baltimore news market. The report, “How News Happens,” found that while new online outlets had increased the demand for news, the number of original stories spread out among those outlets had declined. In one example, Pew found that area newspapers wrote one-third the number of stories about state budget cuts as they did the last time the state made similar cuts in 1991. In 2009, Pew said, The Baltimore Sun produced 32 percent fewer stories than it did in 1999.
  • Even original reporting often bore the fingerprints of government and private public relations. Mark Jurkowitz, associate director of the Pew Center, said the Baltimore report concentrated on six major story lines: state budget cuts, shootings of police officers, the University of Maryland’s efforts to develop a vaccine, the auction of the Senator Theater, the installation of listening devices on public buses, and developments in juvenile justice. It found that 63 percent of the news about those subjects was generated by the government, 23 percent came from interest groups or public relations, and 14 percent started with reporters.
  • The Internet makes it easy for public relations people to reach out directly to the audience and bypass the press, via websites and blogs, social media and videos on YouTube, and targeted e-mail.
  • Some experts have argued that in the digital age, new forms of reporting will eventually fill the void left by traditional newsrooms. But few would argue that such a point has arrived, or is close to arriving. “There is the overwhelming sense that the void that is created by the collapse of traditional journalism is not being filled by new media, but by public relations,” said John Nichols, a Nation correspondent and McChesney’s co-author. Nichols said reporters usually make some calls and check facts. But the ability of government or private public relations to generate stories grows as reporters have less time to seek out stories on their own. That gives outside groups more power to set the agenda.
  • In their recent book, The Death and Life of American Journalism, Robert McChesney and John Nichols tracked the number of people working in journalism since 1980 and compared it to the numbers for public relations. Using data from the US Bureau of Labor Statistics, they found that the number of journalists has fallen drastically while public relations people have multiplied at an even faster rate. In 1980, there were about 0.45 PR workers per one hundred thousand population compared with 0.36 journalists. In 2008, there were 0.90 PR people per one hundred thousand compared to 0.25 journalists. That's a ratio of more than three to one, and the PR side is better equipped and better financed.
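The swing in those Bureau of Labor Statistics rates is simple arithmetic; a quick sketch using only the per-hundred-thousand figures quoted above:

```python
# Workers per 100,000 population, as quoted from McChesney and Nichols
# (derived from US Bureau of Labor Statistics data).
pr_1980, journalists_1980 = 0.45, 0.36
pr_2008, journalists_2008 = 0.90, 0.25

print(round(pr_1980 / journalists_1980, 2))  # 1980: 1.25 PR workers per journalist
print(round(pr_2008 / journalists_2008, 2))  # 2008: 3.6 PR workers per journalist
```

The 1980 ratio was only 1.25 to one, so the move to 3.6 to one by 2008 is what makes the "more than three-to-one" claim hold.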
Weiye Loh

Do Fights Over Climate Communication Reflect the End of 'Scientism'? - NYTimes.com

  • Climate (mis)communication was the theme. Two sessions explored a focal point of this blog, the interface of climate science and policy, and the roles of scientists and the media in fostering productive discourse. Both discussions homed in on an uncomfortable reality — the erosion of a longstanding presumption that scientific information, if communicated more effectively, will end up framing policy choices.
  • First, I sat in on a symposium on the future of climate communication in a world where traditional science journalism is a shrinking wedge of a growing pie of communication options. The discussion didn’t really provide many answers, but it did reveal the persistent frustrations of some scientists with the way the media cover their field.
  • Sparks flew between Kerry Emanuel, a climatologist long focused on hurricanes and warming, and Seth Borenstein, who covers climate and other science for the Associated Press. Borenstein spoke highly of a Boston Globe dual profile of Emanuel and his colleague at the Massachusetts Institute of Technology,  Richard Lindzen. To Emanuel, the piece was a great example of what he described as “he said, he said” coverage of science. Borenstein replied that this particular piece was not centered on the science, but on the men — in the context of their relationship, research and worldviews. (It’s worth noting that Emanuel, whom I’ve been interviewing on hurricanes and climate since 1988, describes himself as  a conservative and, mainly, Republican voter.)
  • Keith Kloor, blogging on the session at Collide-a-Scape, included a sobering assessment of the scientist-journalist tensions over global warming from Tom Rosenstiel, a panelist and long-time journalist who now heads up Pew’s Project for Excellence in Journalism: “If you’re waiting for the press to persuade the public, you’re going to lose. The press doesn’t see that as its job.”
  • Scientists have a great opportunity, and responsibility, to tell their own story more directly, as some are doing occasionally through Dot Earth “Post Cards” and The Times’ Scientist at Work blog.
  • Naomi Oreskes, a science historian at the University of California, San Diego, and co-author of “Merchants of Doubt” (“Of Mavericks and Mules”); Gavin Schmidt of NASA’s Goddard Institute for Space Studies and Realclimate.org (“Between Sound Bites and the Scientific Paper: Communicating in the Hinterland”); and Thomas Lessl, a scholar at the University of Georgia focused on the cultural history of science (“Reforming Scientific Communication About Anthropogenic Climate Change”).
  • I focused on two words in the title of the session — diversity and denial. The diversity of lines of inquiry in climate science has a two-pronged impact. It helps build a robust overall picture of a growing human influence on a complex system. But for many of the most important  pixel points in that picture, there is robust, durable and un-manufactured debate. That debate can then be exploited by naysayers eager to cast doubt on the enterprise, when in fact — as I’ve written here before — it’s simply the (sometimes ugly) way that science progresses.
  • My denial, I said, lay in my longstanding presumption, like that of many scientists and journalists, that better communication of information will tend to change people’s perceptions, priorities and behavior. This attitude, in my view, crested for climate scientists in the wake of the 2007 report from the Intergovernmental Panel on Climate Change.
  • In his talk, Thomas Lessl said much of this attitude is rooted in what he and some other social science scholars call “scientism,” the idea — rooted in the 19th century — that scientific inquiry is a “distinctive mode of inquiry that promises to bring clarity to all human endeavors.” [5:45 p.m. | Updated Chris Mooney sent an e-mail noting how the discussion below resonates with "Do Scientists Understand the Public," a report he wrote last year for the American Academy of Arts and Sciences and explored here.]
  • Scientism, though it is good at promoting the recognition that scientific knowledge is the only kind of knowledge, also promotes communication behavior that is bad for the scientific ethos. By this I mean that it turns such communication into combat. By presuming that scientific understanding is the only criterion that matters, scientism inclines public actors to treat resistant audiences as an enemy: If the public doesn’t get the science, shame on the public. If the public rejects a scientific claim, it is either because they don’t get it or because they operate upon some sinister motive.
  • Scientific knowledge cannot take the place of prudence in public affairs.
  • Prudence, according to Robert Hariman, “is the mode of reasoning about contingent matters in order to select the best course of action. Contingent events cannot be known with certainty, and actions are intelligible only with regard to some idea of what is good.”
  • Scientism tends to suppose a one-size-fits-all notion of truth telling. But in the public sphere, people don’t think that way. They bring to the table a variety of truth standards: moral judgment, common-sense judgment, a variety of metaphysical perspectives, and ideological frameworks. The scientists who communicate about climate change may regard these standards as wrong-headed or at best irrelevant, but scientists don’t get to decide this in a democratic debate. When scientists become public actors, they have stepped outside of science, and they are obliged to honor the rules of communication and thought that govern the rest of the world. This might be different if climate change were just about determining the causes of climate change, but it never is. Getting from the acceptance of ACC to acceptance of the kinds of emissions-reducing policies that are being advocated takes us from one domain of knowing into another.
  • One might object by saying that the formation of public policy depends upon first establishing the scientific bases of ACC, and that the first question can be considered independently of the second. Of course that is right, but that is an abstract academic distinction that does not hold in public debates. In public debates a different set of norms and assumptions apply: motive is not to be casually set aside as a nonfactor. Just because scientists customarily bracket off scientific topics from their policy implications does not mean that lay people do this—or even that they should be compelled to do so. When scientists talk about one thing, they seem to imply the other. But which is the motive force? Are they advocating for ACC because they subscribe to a political worldview that supports legal curtailments upon free enterprise? Or do they support such a political worldview because they are convinced of ACC? The fact that they speak as scientists may mean to other scientists that they reason from evidence alone. But the public does not necessarily share this assumption. If scientists don’t respect this fact about their audiences, they are bound to get in trouble.
Weiye Loh

Sunita Narain: Indian scientists: missing in action

  • Since then there has been dead silence among the powerful scientific leaders of the country, with one exception: Kiran Karnik, a former employee of ISRO and board member of Devas.
  • When the scientists who understand the issue are not prepared to engage with the public, there can be little informed discussion. The cynical public, which sees scams tumble out each day, easily believes that everybody is a crook. But, as I said, the country’s top scientists have withdrawn further into their comfort holes, their opinion frozen in contempt that Indian society is scientifically illiterate. I can assure you in the future there will be even less conversation between scientists and all of us in the public sphere.
  • This is not good. Science is about everyday policy. It needs to be understood and for this it needs to be discussed and deliberated openly and strenuously. But how will this happen if one side — the one with information, knowledge and power — will not engage in public discourse?
  • I suspect Indian scientists have retired hurt to the pavilion. They were exposed to some nasty public scrutiny on a deal made by a premier science research establishment, Indian Space Research Organisation (ISRO), with Devas, a private company, on the allocation of spectrum. The public verdict was that the arrangement was a scandal; public resources had been given away for a song. The government, already scam-bruised, hastily scrapped the contract.
  • Take the issue of genetically-modified (GM) crops. For long this matter has been decided inside closed-door committee rooms, where scientists are comforted by the fact that their decisions will not be challenged. Their defence is “sound science” and “superior knowledge”. It is interesting that the same scientists will accept data produced by private companies pushing the product. Issues of conflict of interest will be brushed aside as integrity cannot be questioned behind closed doors. Silence is the best insurance. This is what happened inside a stuffy committee room, where scientists sat to give permission to Mahyco-Monsanto to grow genetically-modified brinjal.
  • This case involved a vegetable we all eat. This was a matter of science we had the right to know about and to decide upon. The issue made headlines. The reaction of the scientific fraternity was predictable and obnoxious. They resented the questions. They did not want a public debate.
  • As the controversy raged and more people got involved, the scientists ran for cover. They wanted none of this messy street fight. They were meant to advise prime ministers and the like, not to answer simple questions from people. Finally, when environment minister Jairam Ramesh took the decision on the side of the ordinary vegetable eater, unconvinced by the validity of the scientific data to justify no-harm, scientists were missing in their public reactions. Instead, they whispered about lack of “sound science” in the decision inside committees.
  • The matter did not end there. The minister commissioned an inter-academy inquiry — six top scientific institutions looked into GM crops and Bt-brinjal — expecting a rigorous examination of the technical issues and data gaps. The report released by this committee was shoddy, to say the least. It contained no references or attributions and not a single citation. It made sweeping statements and lifted passages from a government newsletter and even from the global biotech industry. The report was thrashed. Scientists again withdrew into offended silence.
  • The final report of this apex-science group is marginally better in that it includes citations but it reeks of scientific arrogance cloaked in jargon. The committee did not find it fit to review the matter, which had reached public scrutiny. The report is only a cover for their established opinion about the ‘truth’ of Bt-brinjal. Science for them is certainly not a matter of enquiry, critique or even dissent.
  • The world has changed. No longer is this report meant only for top political and policy leaders, who would be overwhelmed by the weight of the matter, the language and the expert knowledge of the writer. The report will be subjected to public scrutiny. Its lack of rigour will be deliberated, its unquestioned assertions challenged.
  • This is the difference between the manufactured comfortable world of science behind closed doors and the noisy messy world outside. It is clear to me that Indian scientists need confidence to creatively engage in public concerns. The task to build scientific literacy will not happen without their engagement and their tolerance for dissent. The role of science in Indian democracy is being revisited with a new intensity. The only problem is that the key players are missing in action.
Weiye Loh

Why do we care where we publish?

  • Being both a working scientist and a science writer gives me a unique perspective on science, scientific publications, and the significance of scientific work. The final disclosure should be that I have never published in any of the top rank physics journals or in Science, Nature, or PNAS. I don't believe I have an axe to grind about that, but I am also sure that you can ascribe some of my opinions to PNAS envy.
  • If you asked most scientists what their goals were, the answer would boil down to the generation of new knowledge. But, at some point, science and scientists have to interact with money and administrators, which has significant consequences for science. For instance, when trying to employ someone to do a job, you try to objectively decide if the skills set of the prospective employee matches that required to do the job. In science, the same question has to be asked—instead of being asked once per job interview, however, this question gets asked all the time.
  • Because science requires funding, and no one gets a lifetime dollop-o-cash to explore their favorite corner of the universe. So, the question gets broken down to "how competent is the scientist?" "Is the question they want to answer interesting?" "Do they have the resources to do what they say they will?" We will ignore the last question and focus on the first two.
  • How can we assess the competence of a scientist? Past performance is, realistically, the only way to judge future performance. Past performance can only be assessed by looking at their publications. Were they in a similar area? Are they considered significant? Are they numerous? Curiously, though, the second question is also answered by looking at publications—if a topic is considered significant, then there will be lots of publications in that area, and those publications will be of more general interest, and so end up in higher ranking journals.
  • So we end up in the situation that the editors of major journals are in a position to influence the direction of scientific funding, meaning that there is a huge incentive for everyone to make damn sure that their work ends up in Science or Nature. But why are Science, Nature, and PNAS considered the place to put significant work? Why isn't a new optical phenomenon, published in Optics Express, as important as a new optical phenomenon published in Science?
  • The big three try to be general; they will, in principle, publish reports from any discipline, and they anticipate readership from a range of disciplines. This explicit generality means that the scientific results must not only be of general interest, but also highly significant. The remaining journals become more specialized, covering perhaps only physics, or optics, or even just optical networking. However, they all claim to only publish work that is highly original in nature.
  • Are standards really so different? Naturally, the more specialized a journal is, the fewer people it appeals to. However, the major difference in determining originality is one of degree and referee. A more specialized journal has more detailed articles, so the differences between experiments stand out more obviously, while appealing to general interest changes the emphasis of the article away from details toward broad conclusions.
  • As the audience becomes broader, more technical details get left by the wayside. Note that none of the gene sequences published in Science have the actual experimental and analysis details. What ends up published is really a broad-brush description of the work, with the important details either languishing as supplemental information, or even published elsewhere, in a more suitable journal. Yet, the high-profile paper will get all the citations, while the more detailed—the unkind would say accurate—description of the work gets no attention.
  • And that is how journals are ranked. Count the number of citations for each journal per volume, run it through a magic number generator, and the impact factor jumps out (make your checks out to ISI Thomson please). That leaves us with the following formula: grants require high impact publications, high impact publications need citations, and that means putting research in a journal that gets lots of citations. Grants follow the concepts that appear to be currently significant, and that's decided by work that is published in high impact journals.
  • This system would be fine if it did not ignore the fact that performing science and reporting scientific results are two very different skills, and not everyone has both in equal quantity. The difference between a Nature-worthy finding and a not-Nature-worthy finding is often in the quality of the writing. How skillfully can I relate this bit of research back to general or topical interests? It really is this simple. Over the years, I have seen quite a few physics papers with exaggerated claims of significance (or even results) make it into top flight journals, and the only differences I can see between those works and similar works published elsewhere is that the presentation and level of detail are different.
  • Articles from the big three are much easier to cover on Nobel Intent than articles from, say, Physical Review D. Nevertheless, when we do cover them, sometimes the researchers suddenly realize that they could have gotten a lot more mileage out of their work. It changes their approach to reporting their results, which I see as evidence that writing skill counts for as much as scientific quality.
  • If that observation is generally true, then it raises questions about the whole process of evaluating a researcher's competence and a field's significance, because good writers corrupt the process by publishing less significant work in journals that only publish significant findings. In fact, I think it goes further than that, because Science, Nature, and PNAS actively promote themselves as scientific compasses. Want to find the most interesting and significant research? Read PNAS.
  • The publishers do this by extensively publicizing science that appears in their own journals. Their news sections primarily summarize work published in the same issue of the same magazine. This lets them create a double-whammy of scientific significance—not only was the work published in Nature, they also summarized it in their News and Views section.
  • Furthermore, the top three work very hard at getting other journalists to cover their articles. This is easy to see by simply looking at Nobel Intent's coverage. Most of the work we discuss comes from Science and Nature. Is this because we only read those two publications? No, but they tell us ahead of time what is interesting in their upcoming issue. They even provide short summaries of many papers that practically guide people through writing the story, meaning reporter Jim at the local daily doesn't need a science degree to cover the science beat.
  • Very few of the other journals do this. I don't get early access to the Physical Review series, even though I love reporting from them. In fact, until this year, they didn't even highlight interesting papers for their own readers. This makes it incredibly hard for a science reporter to cover science outside of the major journals. The knock-on effect is that Applied Physics Letters never appears in the news, which means you can't evaluate recent news coverage to figure out what's of general interest, leaving you with... well, the big three journals again, which mostly report on themselves. On the other hand, if a particular scientific topic does start to receive some press attention, it is much more likely that similar work will suddenly be acceptable in the big three journals.
  • That said, I should point out that judging the significance of scientific work is a process fraught with difficulty. Why do you think it takes around 10 years from the publication of first results through to obtaining a Nobel Prize? Because it can take that long for the implications of the results to sink in—or, more commonly, sink without trace.
  • I don't think that we can reasonably expect journal editors and peer reviewers to accurately assess the significance (general or otherwise) of a new piece of research. There are, of course, exceptions: the first genome sequences, the first observation that the rate of the expansion of the universe is changing. But the point is that these are exceptions, and most work's significance is far more ambiguous, and even goes unrecognized (or over-celebrated) by scientists in the field.
  • The conclusion is that the top three journals are significantly gamed by scientists who are trying to get ahead in their careers—citations always lag a few years behind, so a PNAS paper with fewer than ten citations can look good for quite a few years, even compared to an Optics Letters paper with 50 citations. The top three journals overtly encourage this, because it is to their advantage if everyone agrees that they are the source of the most interesting science. Consequently, scientists who are more honest in self-assessing their work, or who simply aren't word-smiths, end up losing out.
  • Scientific competence should not be judged by how many citations the author's work has received or where it was published. Instead, we should consider using a mathematical graph analysis to look at the networks of publications and citations, which should help us judge how central to a field a particular researcher is. This would have the positive influence of a publication mattering less than who thought it was important.
  • Science and Nature should either eliminate their News and Views section, or implement a policy of not reporting on their own articles. This would open up one of the major sources of "science news for scientists" to stories originating in other journals.
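The "magic number generator" mocked above is, at its core, the standard two-year impact factor: citations received in a given year to articles from the previous two years, divided by the number of citable items from those years. A minimal sketch with invented figures (these are not any journal's real numbers, and ISI's actual item-counting rules are more involved):

```python
def impact_factor(citations_to_prev_two_years: int, citable_items: int) -> float:
    """Two-year impact factor: citations in year Y to items published
    in years Y-1 and Y-2, divided by the count of those items."""
    return citations_to_prev_two_years / citable_items

# Invented figures for illustration only.
print(impact_factor(90_000, 2_500))  # a hypothetical "big three" journal -> 36.0
print(impact_factor(12_000, 4_000))  # a hypothetical specialist journal  -> 3.0
```

Note how the denominator rewards journals that publish few, broadly cited papers — exactly the selection pressure the article describes.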
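The "mathematical graph analysis" proposed in the closing annotations could take several forms; one common choice is an eigenvector-style centrality on the citation graph, where a citation from a well-cited paper counts for more than one from an obscure paper. A toy sketch in plain Python under that assumption (the papers and links are invented; a real analysis would use a dedicated graph library):

```python
# Toy citation graph: each paper maps to the papers it cites (all hypothetical).
cites = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": [],
    "D": ["C", "B"],
}

def centrality(graph, iters=50, damping=0.85):
    """PageRank-style scores: each citation transfers a share of the citing
    paper's influence to the cited paper; papers citing nothing spread
    their influence evenly."""
    papers = list(graph)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in papers}
        for p, refs in graph.items():
            if refs:
                share = damping * rank[p] / len(refs)
                for r in refs:
                    new[r] += share
            else:  # dangling paper: distribute evenly
                for q in papers:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

scores = centrality(cites)
print(max(scores, key=scores.get))  # "C" is cited by every other paper
```

The appeal of this kind of measure, as the author suggests, is that *who* cites a paper matters more than the raw count or the venue it appeared in.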
Weiye Loh

Index on Censorship » Blog Archive » Code breakers

  • Journalism is demonstrably valuable to society. It tells us what is new, important and interesting in public life, it holds authority to account, it promotes informed debate, it entertains and enlightens. For sure, it comes with complications. It is rushed and imperfect, it sometimes upsets people and in pursuit of its objectives it occasionally does unpleasant or even illegal things. But by and large we accept these less welcome aspects of journalism as part of the package, and we do so because journalism as a whole is in the public interest. It does good, or to put it another way, we would be much poorer without it.
  • Journalists themselves are slow to draw the distinction because theirs is traditionally an open industry, without barriers and categories, and also because they don’t tend to think of what they do in terms of doing good and being valuable.
  • Privacy invaders do everything they can to blur the line. It is in their interest to be considered journalists, after all. They can shelter under the same umbrella and enjoy the same privileges as journalists. They can talk about freedom of expression, freedom of the press and serving the public interest; they can appeal to tradition and history and they can sound warnings about current and future censorship. This helps them to protect what they do.
  • The code of practice of the Press Complaints Commission (PCC) at least in principle binds journalists working for member organisations, and includes clauses on such matters as accuracy, privacy and the use of subterfuge. The code makes clear, for example, that it is not acceptable to employ a clandestine recording device on a ‘fishing expedition’ — in other words, when you don’t have good grounds to expect you will obtain evidence of a particular kind of wrongdoing.
  • Journalism has to be about truth.
  • The public interest is central because it is a sort of get-out-of-jail card for journalists, though it is actually recognised only grudgingly in law. An ethical journalist can justify telling a lie, or covertly recording a conversation, or trespassing if this act is done in the pursuit of the public interest, and even if he or she is found guilty of an offence, others will usually understand this as valid and will give their support. The public interest can literally keep a journalist out of jail, and it is not merely in the eye of the beholder. The Press Complaints Commission, for example, defines it as follows. The public interest includes, but is not confined to:
    i) detecting or exposing crime or serious impropriety;
    ii) protecting public health and safety;
    iii) preventing the public from being misled by an action or statement of an individual or organisation.
  • We tend to speak of journalists, of their role, their rights, their responsibilities and very often their lack of restraint and how it should be addressed. But this is misleading, and prevents us from seeing some of the complexities and possibilities, because the word 'journalist', in this context, covers two very different groups of people. One group is the actual journalists, as traditionally understood, and the other is those people whose principal professional activity is invading other people's privacy for the purpose of publication.
Weiye Loh

How drug companies' PR tactics skew the presentation of medical research | Science | gu...

  • Drug companies exert this hold on knowledge through publication planning agencies, an obscure subsection of the pharmaceutical industry that has ballooned in size in recent years and is now a key lever in the commercial machinery that gets drugs sold. The planning companies are paid to implement high-impact publication strategies for specific drugs. They target the most influential academics to act as authors, draft the articles, and ensure that these include clearly defined branding messages and appear in the most prestigious journals.
  • In selling their services to drug companies, the agencies explain their work in frank language. Current Medical Directions, a medical communications company based in New York, promises to create "scientific content in support of our clients' messages". A rival firm from Macclesfield, Complete HealthVizion, describes what it does as "a fusion of evidence and inspiration."
  • There are now at least 250 different companies engaged in the business of planning clinical publications for the pharmaceutical industry, according to the International Society for Medical Publication Professionals, which said it has over 1000 individual members. Many firms are based in the UK and the east coast of the United States in traditional "pharma" centres like Pennsylvania and New Jersey. Precise figures are hard to pin down because publication planning is widely dispersed and is only beginning to be recognized as something like a discrete profession.
  • ...6 more annotations...
  • the standard approach to article preparation is for planners to work hand-in-glove with drug companies to create a first draft. "Key messages" laid out by the drug companies are accommodated to the extent that they can be supported by available data. Planners combine scientific information about a drug with two kinds of message that help create a "drug narrative". "Environmental" messages are intended to forge the sense of a gap in available medicine within a specific clinical field, while "product" messages show how the new drug meets this need.
  • In a flow-chart drawn up by Eric Crown, publications manager at Merck (the company that sold the controversial painkiller Vioxx), the determination of authorship appears as the fourth stage of the article preparation procedure. That is, only after company employees have presented clinical study data, discussed the findings, finalised "tactical plans" and identified where the article should be published. Perhaps surprisingly to the casual observer, under guidelines tightened up in recent years by the International Committee of Medical Journal Editors (ICMJE), Crown's approach, typical among pharmaceutical companies, does not constitute ghostwriting.
  • What publication planners understand by the term is precise but it is also quite distinct from the popular interpretation.
  • "We may have written a paper, but the people we work with have to have some input and approve it."
  • "I feel that we're doing something good for mankind in the long-run," said Kimberly Goldin, head of the International Society for Medical Publication Professionals (ISMPP). "We want to influence healthcare in a very positive, scientifically sound way." "The profession grew out of a marketing umbrella, but has moved under the science umbrella," she said. But without the window of court documents to show how publication planning is being carried out today, the public simply cannot know if reforms the industry says it has made are genuine.
  • Dr Leemon McHenry, a medical ethicist at California State University, says nothing has changed. "They've just found more clever ways of concealing their activities. There's a whole army of hidden scribes. It's an epistemological morass where you can't trust anything." Alastair Matheson is a British medical writer who has worked extensively for medical communication agencies. He dismisses the planners' claims to having reformed as "bullshit". "The new guidelines work very nicely to permit the current system to continue as it has been," he said. "The whole thing is a big lie. They are promoting a product."
Weiye Loh

Rationally Speaking: The problem of replicability in science - 0 views

  • The problem of replicability in science, by Massimo Pigliucci (image from xkcd)
  • In recent months much has been written about the apparent fact that a surprising, indeed disturbing, number of scientific findings cannot be replicated, or when replicated the effect size turns out to be much smaller than previously thought.
  • Arguably, the recent streak of articles on this topic began with one penned by David Freedman in The Atlantic, and provocatively entitled “Lies, Damned Lies, and Medical Science.” In it, the major character was John Ioannidis, the author of some influential meta-studies about the low degree of replicability and high number of technical flaws in a significant portion of published papers in the biomedical literature.
  • ...18 more annotations...
  • As Freedman put it in The Atlantic: “80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.” Ioannidis himself was quoted uttering some sobering words for the medical community (and the public at large): “Science is a noble endeavor, but it’s also a low-yield endeavor. I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”
  • Julia and I actually addressed this topic during a Rationally Speaking podcast, featuring as guest our friend Steve Novella, of Skeptics’ Guide to the Universe and Science-Based Medicine fame. But while Steve did quibble with the tone of the Atlantic article, he agreed that Ioannidis’ results are well known and accepted by the medical research community. Steve did point out that it should not be surprising that results get better and better as one moves toward more stringent protocols like large randomized trials, but it seems to me that one should be surprised (actually, appalled) by the fact that even there the percentage of flawed studies is high — not to mention the fact that most studies are in fact neither large nor properly randomized.
  • The second big recent blow to public perception of the reliability of scientific results is an article published in The New Yorker by Jonah Lehrer, entitled “The truth wears off.” Lehrer also mentions Ioannidis, but the bulk of his essay is about findings in psychiatry, psychology and evolutionary biology (and even in research on the paranormal!).
  • In these disciplines there are now several documented cases of results that were initially spectacularly positive — for instance the effects of second generation antipsychotic drugs, or the hypothesized relationship between a male’s body symmetry and the quality of his genes — that turned out to be increasingly difficult to replicate over time, with the original effect sizes being cut down dramatically, or even disappearing altogether.
  • As Lehrer concludes at the end of his article: “Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling.”
  • None of this should actually be particularly surprising to any practicing scientist. If you have spent a significant part of your life in labs and reading the technical literature, you will appreciate the difficulties posed by empirical research, not to mention a number of issues such as the fact that few scientists ever actually bother to replicate someone else’s results, for the simple reason that there is no Nobel (or even funded grant, or tenured position) waiting for the guy who arrived second.
  • In the midst of this I was directed by a tweet from my colleague Neil deGrasse Tyson (who has also appeared on the RS podcast, though in a different context) to a recent ABC News article penned by John Allen Paulos, which aimed to explain the decline effect in science.
  • Paulos’ article is indeed concise and on the mark (though several of the explanations he proposes were already brought up in both the Atlantic and New Yorker essays), but it doesn’t really make things much better.
  • Paulos suggests that one explanation for the decline effect is the well-known statistical phenomenon of regression toward the mean. This phenomenon is responsible, among other things, for a fair number of superstitions: you’ve probably heard of some athletes’ and other celebrities’ fear of being featured on the cover of a magazine after a particularly impressive series of accomplishments, because this brings “bad luck,” meaning that the following year one will not be able to repeat the performance at the same level. This is actually true, not because of magical reasons, but simply as a result of the regression to the mean: extraordinary performances are the result of a large number of factors that have to line up just right for the spectacular result to be achieved. The statistical chances of such an alignment repeating itself are low, so inevitably next year’s performance will likely be below par. Paulos correctly argues that this also explains some of the decline effect of scientific results: the first discovery might have been the result of a number of factors that are unlikely to repeat themselves in exactly the same way, thus reducing the effect size when the study is replicated.
  • Another major determinant of the unreliability of scientific results mentioned by Paulos is the well-known problem of publication bias: crudely put, science journals (particularly the high-profile ones, like Nature and Science) are interested only in positive, spectacular, “sexy” results. Which creates a powerful filter against negative, or marginally significant results. What you see in science journals, in other words, isn’t a statistically representative sample of scientific results, but a highly biased one, in favor of positive outcomes. No wonder that when people try to repeat the feat they often come up empty-handed.
  • A third cause for the problem, not mentioned by Paulos but addressed in the New Yorker article, is the selective reporting of results by scientists themselves. This is essentially the same phenomenon as the publication bias, except that this time it is scientists themselves, not editors and reviewers, who don’t bother to submit for publication results that are either negative or not strongly conclusive. Again, the outcome is that what we see in the literature isn’t all the science that we ought to see. And it’s no good to argue that it is the “best” science, because the quality of scientific research is measured by the appropriateness of the experimental protocols (including the use of large samples) and of the data analyses — not by whether the results happen to confirm the scientist’s favorite theory.
  • The conclusion of all this is not, of course, that we should throw the baby (science) out with the bath water (bad or unreliable results). But scientists should also be under no illusion that these are rare anomalies that do not affect scientific research at large. Too much emphasis is being put on the “publish or perish” culture of modern academia, with the result that graduate students are explicitly instructed to go for the SPU’s — Smallest Publishable Units — when they have to decide how much of their work to submit to a journal. That way they maximize the number of their publications, which maximizes the chances of landing a postdoc position, and then a tenure track one, and then of getting grants funded, and finally of getting tenure. The result is that, according to statistics published by Nature, about ⅓ of published studies are never cited (not to mention replicated!).
  • “Scientists these days tend to keep up the polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist’s field and methods of study are as good as every other scientist’s, and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants. ... We speak piously of taking measurements and making small studies that will ‘add another brick to the temple of science.’ Most such bricks lie around the brickyard.”
    • Weiye Loh
       
      Written by John Platt in a "Science" article published in 1964
  • Most damning of all, however, is the potential effect that all of this may have on science’s already dubious reputation with the general public (think evolution-creation, vaccine-autism, or climate change).
  • “If we don’t tell the public about these problems, then we’re no better than non-scientists who falsely claim they can heal. If the drugs don’t work and we’re not sure how to treat something, why should we claim differently? Some fear that there may be less funding because we stop claiming we can prove we have miraculous treatments. But if we can’t really provide those miracles, how long will we be able to fool the public anyway? The scientific enterprise is probably the most fantastic achievement in human history, but that doesn’t mean we have a right to overstate what we’re accomplishing.”
  • Joseph T. Lapp said... But is any of this new for science? Perhaps science has operated this way all along, full of fits and starts, mostly duds. How do we know that this isn't the optimal way for science to operate? My issues are with the understanding of science that high school graduates have, and with the reporting of science.
    • Weiye Loh
       
      It's the media at fault again.
  • What seems to have emerged in recent decades is a change in the institutional setting that got science advancing spectacularly since the establishment of the Royal Society. Flaws in the system such as corporate funded research, pal-review instead of peer-review, publication bias, science entangled with policy advocacy, and suchlike, may be distorting the environment, making it less suitable for the production of good science, especially in some fields.
  • Remedies should exist, but they should evolve rather than being imposed on a reluctant sociological-economic science establishment driven by powerful motives such as professional advance or funding. After all, who or what would have the authority to impose those rules, other than the scientific establishment itself?
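The two statistical mechanisms discussed above, regression toward the mean and publication bias, can be demonstrated together in a small simulation (a hypothetical sketch; the effect sizes, sample sizes, and significance cutoff are illustrative assumptions, not figures from any of the studies cited):

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.1   # the real (small) effect every study is measuring
SIGMA = 1.0         # noise in individual measurements
N_STUDIES = 2000    # studies attempted
N = 30              # sample size per study

def run_study():
    """Return the observed mean effect and its standard error for one study."""
    sample = [random.gauss(TRUE_EFFECT, SIGMA) for _ in range(N)]
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    return mean, se

# Publication filter: journals accept only "significant" positive results
# (a rough z > 1.96 cutoff), so the published record is a biased sample.
published = [m for m, se in (run_study() for _ in range(N_STUDIES)) if m / se > 1.96]

# Replications of the published findings face no such filter, so they
# regress back toward the true effect: the "decline effect".
replications = [run_study()[0] for _ in published]

print(f"True effect:             {TRUE_EFFECT:.2f}")
print(f"Mean published effect:   {statistics.mean(published):.2f}")
print(f"Mean replication effect: {statistics.mean(replications):.2f}")
```

The published average overstates the true effect several-fold, and the unfiltered replications fall back toward it, reproducing the decline effect without any change in the underlying phenomenon being studied.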
Weiye Loh

Before Assange there was Jayakumar: Context, realpolitik, and the public inte... - 0 views

  • Singapore Ministry of Foreign Affairs spokesman’s remarks in the Wall Street Journal Asia piece, “Leaked cable spooks some U.S. sources” dated 3 Dec 2010. The paragraph in question went like this: “Others laid blame not on working U.S. diplomats, but on Wikileaks. Singapore’s Ministry of Foreign Affairs said it had “deep concerns about the damaging action of Wikileaks.” It added, ‘it is critical to protect the confidentiality of diplomatic and official correspondence.’” (emphasis my own)
  • on 25 Jan 2003, the then Singapore Minister of Foreign Affairs and current Senior Minister without portfolio, Professor S Jayakumar, in an unprecedented move, unilaterally released all diplomatic and official correspondence relating to confidential discussions on water negotiations between Singapore and Malaysia from the year 2000. In a parliamentary speech that would have had Julian Assange smiling from ear to ear, Jayakumar said, “We therefore have no choice but to set the record straight by releasing these documents for people to judge for themselves the truth of the matter.” The parliamentary reason for the unprecedented release of information was the misrepresentations made by Malaysia over the price of water, amongst others.
  • The then Malaysian Prime Minister, Mahathir’s response to Singapore’s pre-Wikileak wikileak was equally quote-worthy, “I don’t feel nice. You write a letter to your girlfriend. And your girlfriend circulates it to all her boyfriends. I don’t think I’ll get involved with that girl.”
  • ...9 more annotations...
  • Mahathir did not leave it at that. He foreshadowed the Wikileak-chastised countries of today saying what William, the Singapore Ministry of Foreign Affairs, the US and Iran today, amongst others, must agree with, “It’s very difficult now for us to write letters at all because we might as well negotiate through the media.”
  • I proceeded to the Ministry of Foreign Affairs homepage to search for the full press release. As I anticipated, there was a caveat. This is the press release in full: In response to media queries on the WikiLeaks release of confidential and secret-graded US diplomatic correspondence, the MFA Spokesman expressed deep concerns about the damaging action of WikiLeaks. It is critical to protect the confidentiality of diplomatic and official correspondence, which is why Singapore has the Officials Secrets Act. In particular, the selective release of documents, especially when taken out of context, will only serve to sow confusion and fail to provide a complete picture of the important issues that were being discussed amongst leaders in the strictest of confidentiality.
  • The final sentence of the press release seems to posit that the selective release of documents can be legitimised if released documents are not taken out of context. If this interpretation is true, then one can account for the political decision to release confidential correspondence covering the Singapore and Malaysia water talks referred to above. In parallel, one can imagine Assange or his supporters arguing that lies of weapons of mass destruction in Iraq and the advent of abject two-faced politics today to be sufficient grounds to justify the actions of Wikileaks. As for the arguments about confidentiality and official correspondence, the events in parliament in 2003 tell us no one should underestimate the ability of nation-states to do an Assange if it befits their purpose – be it directly, as Jayakumar did, or indirectly, through the media or some other medium of influence.
  • Timothy Garton Ash put out the dilemma perfectly when he said, “There is a public interest in understanding how the world works and what is done in our name. There is a public interest in the confidential conduct of foreign policy. The two public interests conflict.”
  • the advent of technology will only further blur the lines between these two public interests, if it has not already. Quite apart from technology, the absence of transparent and accountable institutions may also serve to guarantee the prospect of more of such embarrassing leaks in future.
  • In August 2009, there was considerable interest in Singapore about the circumstances behind the departure of Chip Goodyear, former CEO of the Australian mining giant BHP Billiton, from the national sovereign wealth fund, Temasek Holdings. Before that, all the public knew was – in the name of leadership renewal – Chip Goodyear had been carefully chosen and apparently hand-picked to replace Ho Ching as CEO of Temasek Holdings. In response to Chip’s untimely departure, Finance Minister Tharman Shanmugaratnam was quoted, “People do want to know, there is curiosity, it is a matter of public interest. That is not sufficient reason to disclose information. It is not sufficient that there be curiosity and interest that you want to disclose information.”
  • Overly secretive and furtive politicians operating in a parliamentary democracy are unlikely to inspire confidence among an educated citizenry either, only serving to paradoxically fuel public cynicism and conspiracy theories.
  • I believe that government officials and politicians who perform their jobs honourably have nothing to fear from Wikileaks. I would admit that there is an inherent naivety and idealism in this position. But if the lesson from the Wikileaks episode portends a higher standard of ethical conduct, encourages transparency and accountability – all of which promote good governance, realpolitik notwithstanding – then it is perhaps a lesson all politicians and government officials should pay keen attention to.
  • Post-script: “These disclosures are largely of analysis and high-grade gossip. Insofar as they are sensational, it is in showing the corruption and mendacity of those in power, and the mismatch between what they claim and what they do….If American spies are breaking United Nations rules by seeking the DNA biometrics of the UN director general, he is entitled to hear of it. British voters should know what Afghan leaders thought of British troops. American (and British) taxpayers might question, too, how most of the billions of dollars going in aid to Afghanistan simply exits the country at Kabul airport.” –Simon Jenkins, Guardian
Weiye Loh

Roger Pielke Jr.'s Blog: Science Impact - 0 views

  • The Guardian has a blog post up by three neuroscientists decrying the state of hype in the media related to their field, which is fueled in part by their colleagues seeking "impact." 
  • Anyone who has followed recent media reports that electrical brain stimulation "sparks bright ideas" or "unshackles the genius within" could be forgiven for believing that we stand on the frontier of a brave new world. As James Gallagher of the BBC put it, "Are we entering the era of the thinking cap – a device to supercharge our brains?" The answer, we would suggest, is a categorical no. Such speculations begin and end in the colourful realm of science fiction. But we are also in danger of entering the era of the "neuro-myth", where neuroscientists sensationalise and distort their own findings in the name of publicity. The tendency for scientists to over-egg the cake when dealing with the media is nothing new, but recent examples are striking in their disregard for accurate reporting to the public. We believe the media and academic community share a collective responsibility to prevent pseudoscience from masquerading as neuroscience.
  • They identify an ... unacceptable gulf between, on the one hand, the evidence-bound conclusions reached in peer-reviewed scientific journals, and on the other, the heavy spin applied by scientists to achieve publicity in the media. Are we as neuroscientists so unskilled at communicating with the public, or so low in our estimation of the public's intelligence, that we see no alternative but to mislead and exaggerate?
  • ...1 more annotation...
  • Somewhere down the line, achieving an impact in the media seems to have become the goal in itself, rather than what it should be: a way to inform and engage the public with clarity and objectivity, without bias or prejudice. Our obsession with impact is not one-sided. The craving of scientists for publicity is fuelled by a hurried and unquestioning media, an academic community that disproportionately rewards publication in "high impact" journals such as Nature, and by research councils that emphasise the importance of achieving "impact" while at the same time delivering funding cuts. Academics are now pushed to attend media training courses, instructed about "pathways to impact", required to include detailed "impact summaries" when applying for grant funding, and constantly reminded about the importance of media engagement to further their careers. Yet where in all of this strategising and careerism is it made clear why public engagement is important? Where is it emphasised that the most crucial consideration in our interactions with the media is that we are accurate, honest and open about the limitations of our research?
Weiye Loh

Roger Pielke Jr.'s Blog: Richard Muller on NPR: Don't Play With the Peer Review System - 0 views

  • CONAN: Do you find that, though, there is a lot of ideology in this business? Prof. MULLER: Well, I think what's happened is that many scientists have gotten so concerned about global warming, correctly concerned I mean they look at it and they draw a conclusion, and then they're worried that the public has not been concerned, and so they become advocates. And at that point, it's unfortunate, I feel that they're not trusting the public. They're not presenting the science to the public. They're presenting only that aspect to the science that will convince the public. That's not the way science works. And because they don't trust the public, in the end the public doesn't trust them. And the saddest thing from this, I think, is a loss of credibility of scientists because so many of them have become advocates.
  • CONAN: And that's, you would say, would be at the heart of the so-called Climategate story, where emails from some scientists seemed to be working to prevent the work of other scientists from appearing in peer-reviewed journals. Prof. MULLER: That really shook me up when I learned about that. I think that Climategate is a very unfortunate thing that happened, that the scientists who were involved in that, from what I've read, didn't trust the public, didn't even trust the scientific public. They were not showing the discordant data. That's something that - as a scientist I was trained you always have to show the negative data, the data that disagrees with you, and then make the case that your case is stronger. And they were hiding the data, and a whole discussion of suppressing publications, I thought, was really unfortunate. It was not at a high point for science.
  • And I really get even more upset when some other people say, oh, science is just a human activity. This is the way it happens. You have to recognize, these are people. No, no, no, no. These are not scientific standards. You don't hide the data. You don't play with the peer review system.
Weiye Loh

When Insurers Put Profits Before People - NYTimes.com - 0 views

  • Late in 2007
  • A 17-year-old girl named Nataline Sarkisyan was in desperate need of a transplant after receiving aggressive treatment that cured her recurrent leukemia but caused her liver to fail. Without a new organ, she would die in a matter of days; with one, she had a 65 percent chance of surviving. Her doctors placed her on the liver transplant waiting list.
  • She was critically ill, as close to death as one could possibly be while technically still alive, and her fate was inextricably linked to another’s. Somewhere, someone with a compatible organ had to die in time for Nataline to live.
  • ...9 more annotations...
  • But even when the perfect liver became available a few days after she was put on the list, doctors could not operate. What made Nataline different from most transplant patients, and what eventually brought her case to the attention of much of the country, was that her survival did not depend on the availability of an organ or her clinicians or even the quality of care she received. It rested on her health insurance company.
  • Cigna had denied the initial request to cover the costs of the liver transplant. And the insurer persisted in its refusal, claiming that the treatment was “experimental” and unproven, and despite numerous pleas from Nataline’s physicians to the contrary.
  • But as relatives and friends organized campaigns to draw public attention to Nataline’s plight, the insurance conglomerate found itself embroiled in a public relations nightmare, one that could jeopardize its very existence. The company reversed its decision. But the change came too late. Nataline died just a few hours after Cigna authorized the transplant.
  • Mr. Potter was the head of corporate communications at two major insurers, first at Humana and then at Cigna. Now Mr. Potter has written a fascinating book that details the methods he and his colleagues used to manipulate public opinion
  • Mr. Potter goes on to describe the myth-making he did, interspersing descriptions of front groups, paid spies and jiggered studies with a deft retelling of the convoluted (and usually eye-glazing) history of health care insurance policies.
  • We learn that executives at Cigna worried that Nataline’s situation would only add fire to the growing public discontent with a health care system anchored by private insurance. As the case drew more national attention, the threat of a legislative overhaul that would ban for-profit insurers became real, and Mr. Potter found himself working on the biggest P.R. campaign of his career.
  • Cigna hired a large international law firm and a P.R. firm already well known to them from previous work aimed at discrediting Michael Moore and his film “Sicko.” Together with Cigna, these outside firms waged a campaign that would eventually include the aggressive placement of articles with friendly “third party” reporters, editors and producers who would “disabuse the media, politicians and the public of the notion that Nataline would have gotten the transplant if she had lived in Canada or France or England or any other developed country.” A “spy” was dispatched to Nataline’s funeral; and when the Sarkisyan family filed a lawsuit against the insurer, a team of lawyers was assigned to keep track of actions and comments by the family’s lawyer.
  • In the end, however, Nataline’s death proved to be the final straw for Mr. Potter. “It became clearer to me than ever that I was part of an industry that would do whatever it took to perpetuate its extraordinarily profitable existence,” he writes. “I had sold my soul.” He left corporate public relations for good less than six months after her death.
  • “I don’t mean to imply that all people who work for health insurance companies are greedier or more evil than other Americans,” he writes. “In fact, many of them feel — and justifiably so — that they are helping millions of people get they care they need.” The real problem, he says, lies in the fact that the United States “has entrusted one of the most important societal functions, providing health care, to private health insurance companies.” Therefore, the top executives of these companies become beholden not to the patients they have pledged to cover, but to the shareholders who hold them responsible for the bottom line.
Weiye Loh

Free Speech under Siege - Robert Skidelsky - Project Syndicate - 0 views

  • Breaking the cultural code damages a person’s reputation, and perhaps one’s career. Britain’s Justice Secretary Kenneth Clarke recently had to apologize for saying that some rapes were less serious than others, implying the need for legal discrimination. The parade of gaffes and subsequent groveling apologies has become a regular feature of public life. In his classic essay On Liberty, John Stuart Mill defended free speech on the ground that free inquiry was necessary to advance knowledge. Restrictions on certain areas of historical inquiry are based on the opposite premise: the truth is known, and it is impious to question it. This is absurd; every historian knows that there is no such thing as final historical truth.
  • It is not the task of history to defend public order or morals, but to establish what happened. Legally protected history ensures that historians will play safe. To be sure, living by Mill’s principle often requires protecting the rights of unsavory characters. David Irving writes mendacious history, but his prosecution and imprisonment in Austria for “Holocaust denial” would have horrified Mill.
  • the pressure for “political correctness” rests on the argument that the truth is unknowable. Statements about the human condition are essentially matters of opinion.  Because a statement of opinion by some individuals is almost certain to offend others, and since such statements make no contribution to the discovery of truth, their degree of offensiveness becomes the sole criterion for judging their admissibility. Hence the taboo on certain words, phrases, and arguments that imply that certain individuals, groups, or practices are superior or inferior, normal or abnormal; hence the search for ever more neutral ways to label social phenomena, thereby draining language of its vigor and interest.
  • ...3 more annotations...
  • A classic example is the way that “family” has replaced “marriage” in public discourse, with the implication that all “lifestyles” are equally valuable, despite the fact that most people persist in wanting to get married. It has become taboo to describe homosexuality as a “perversion,” though this was precisely the word used in the 1960’s by the radical philosopher Herbert Marcuse (who was praising homosexuality as an expression of dissent). In today’s atmosphere of what Marcuse would call “repressive tolerance,” such language would be considered “stigmatizing.”
  • The sociological imperative behind the spread of “political correctness” is the fact that we no longer live in patriarchal, hierarchical, mono-cultural societies, which exhibit general, if unreflective, agreement on basic values. The pathetic efforts to inculcate a common sense of “Britishness” or “Dutchness” in multi-cultural societies, however well-intentioned, attest to the breakdown of a common identity.
  • The defense of free speech is made no easier by the abuses of the popular press. We need free media to expose abuses of power. But investigative journalism becomes discredited when it is suborned to “expose” the private lives of the famous when no issue of public interest is involved. Entertaining gossip has mutated into an assault on privacy, with newspapers claiming that any attempt to keep them out of people’s bedrooms is an assault on free speech. You know that a doctrine is in trouble when not even those claiming to defend it understand what it means. By that standard, the classic doctrine of free speech is in crisis. We had better sort it out quickly – legally, morally, and culturally – if we are to retain a proper sense of what it means to live in a free society.
  •  
    Yet freedom of speech in the West is under strain. Traditionally, British law imposed two main limitations on the "right to free speech." The first prohibited the use of words or expressions likely to disrupt public order; the second was the law against libel. There are good grounds for both - to preserve the peace, and to protect individuals' reputations from lies. Most free societies accept such limits as reasonable. But the law has recently become more restrictive. "Incitement to religious and racial hatred" and "incitement to hatred on the basis of sexual orientation" are now illegal in most European countries, independent of any threat to public order. The law has shifted from proscribing language likely to cause violence to prohibiting language intended to give offense. A blatant example of this is the law against Holocaust denial. To deny or minimize the Holocaust is a crime in 15 European countries and Israel. It may be argued that the Holocaust was a crime so uniquely abhorrent as to qualify as a special case. But special cases have a habit of multiplying.
Jody Poh

Australia's porn-blocking plan unveiled - 10 views

Elaine said: What are the standards put in place to determine whether something is of adult content? Who set those standards? Based on 'general' beliefs and what the government/"web police" think ...

Weiye Loh

Censorship of War News Undermines Public Trust - 20 views

I posted a bookmark on something related to this issue. http://www.todayonline.com/World/EDC090907-0000047/The-photo-thats-caused-a-stir AP decided to publish a photo of a fatally wounded young ...

censorship PR

Weiye Loh

Open data, democracy and public sector reform - 0 views

  •  
    Governments are increasingly making their data available online in standard formats and under licenses that permit the free re-use of data. The justifications advanced for this include claims regarding the economic potential of open government data (OGD), the potential for OGD to promote transparency and accountability of government, and the role of OGD in supporting the reform and reshaping of public services. This paper takes a pragmatic mixed-methods approach to exploring uses of data from the UK national open government data portal, data.gov.uk, and identifies how the emerging practices of OGD use are developing. It sets out five 'processes' of data use and describes a series of embedded case studies of education OGD use and of public-spending OGD use. Drawing upon quantitative and qualitative data, it presents an outline account of the motivations driving different individuals to engage with open government data, and it identifies a range of connections between open government data use and processes of civic change. It argues that a "data for developers" narrative that assumes OGD use will primarily be mediated by technology developers is misplaced, and that whilst innovation-based routes to OGD-driven public sector reform are evident, the relationship between OGD and democracy is less clear. As strategic research it highlights a number of emerging policy issues for developing OGD provision and use, and makes a contribution towards theoretical understandings of OGD use in practice.
Jody Poh

BBC NEWS | Technology | Defamation lawsuit for US tweeter - 0 views

  •  
    This news story is about Horizon Realty suing a woman called Amanda Bonnen for defamation on Twitter. Amanda Bonnen had microblogged her feelings about her apartment on Twitter: she was unhappy with the mould she found in it. This stirred a response from Horizon Realty, which sees the comment she made as false. Also, as Twitter is such a widespread network, the company feels it has to protect its reputation online. Thus, it has decided to sue Amanda Bonnen. Ms Bonnen has recently moved out of the apartment and has been unavailable to comment on the lawsuit. Her Twitter account has also been deleted. Ethical question: I think many consider posting complaints and comments on Twitter similar to complaining to or having a conversation with friends over coffee. If this is the case, is it ethical or 'right' for companies to be allowed to sue people like Amanda Bonnen? Ethical problem: This case brings up the issue of freedom of speech in public and private spaces. What are the boundaries and definitions of public and private space with the rise of new technologies such as Twitter? In which space (public or private) is Twitter then operating, and how much freedom of speech is allowed?
Weiye Loh

nanopolitan: From the latest issue of Current Science: Scientometric Analysis of Indian... - 0 views

  • We have carried out a three-part study comparing the research performance of Indian institutions with that of other international institutions. In the first part, the publication profiles of various Indian institutions were examined and ranked based on the h-index and p-index. We found that the institutions of national importance contributed the most in terms of publications and citations per institution. In the second part of the study, we looked at the publication profiles of various Indian institutions in the high-impact journals and compared these profiles against that of the top Asian and US universities. We found that the number of papers in these journals from India was minuscule compared to the US universities. Recognizing that the publication profiles of various institutions depend on the field/departments, we studied [in Part III] the publication profiles of many science and engineering departments at the Indian Institute of Science (IISc), Bangalore, the Indian Institutes of Technology, as well as top Indian universities. Because the number of faculty in each department varies widely, we have computed the publications and citations per faculty per year for each department. We have also compared this with other departments in various Asian and US universities. We found that the top Indian institution based on various parameters in various disciplines was IISc, but overall even the top Indian institutions do not compare favourably with the top US or Asian universities.
  • The comparison groups of institutions include MIT, UMinn, Purdue, PSU, MSU, OSU, Caltech, UCB, UTexas (all from the US), National University of Singapore, Tsinghua University (China), Seoul National University (South Korea), National Taiwan University (Taiwan), Kyushu University (Japan) and Chinese Academy of Sciences.
  • ... [T]he number of papers in these [high impact] journals from India was minuscule compared to [that from] the US universities. ... [O]verall even the top Indian institutions do not compare favourably with the top US or Asian universities.
  •  
    Scientometric analysis of some disciplines: Comparison of Indian institutions with other international institutions
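The h-index used to rank institutions in the study's first part has a simple definition that can be computed in a few lines. This is a generic sketch of that standard metric only; the p-index and the per-faculty normalizations the authors also use are not reproduced here:

```python
def h_index(citations):
    """h-index: the largest h such that at least h of the papers
    have at least h citations each (Hirsch's definition)."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        # While the rank-th most-cited paper still has >= rank
        # citations, the index can be raised to that rank.
        if c >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers with these citation counts give h = 4
# (four papers have at least four citations each).
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

The same function works for an author, a department, or a whole institution, which is why it appears at every level of the comparison.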
Weiye Loh

Religion as a catalyst of rationalization « The Immanent Frame - 0 views

  • For Habermas, religion has been a continuous concern precisely because it is related to both the emergence of reason and the development of a public space of reason-giving. Religious ideas, according to Habermas, are never mere irrational speculation. Rather, they possess a form, a grammar or syntax, that unleashes rational insights, even arguments; they contain, not just specific semantic contents about God, but also a particular structure that catalyzes rational argumentation.
  • in his earliest, anthropological-philosophical stage, Habermas approaches religion from a predominantly philosophical perspective. But as he undertakes the task of “transforming historical materialism” that will culminate in his magnum opus, The Theory of Communicative Action, there is a shift from philosophy to sociology and, more generally, social theory. With this shift, religion is treated, not as a germinal for philosophical concepts, but instead as the source of the social order.
  • What is noteworthy about this juncture in Habermas’s writings is that secularization is explained as “pressure for rationalization” from “above,” which meets the force of rationalization from below, from the realm of technical and practical action oriented to instrumentalization. Additionally, secularization here is not simply the process of the profanation of the world—that is, the withdrawal of religious perspectives as worldviews and the privatization of belief—but, perhaps most importantly, religion itself becomes the means for the translation and appropriation of the rational impetus released by its secularization.
  • ...6 more annotations...
  • religion becomes its own secular catalyst, or, rather, secularization itself is the result of religion. This approach will mature in the most elaborate formulation of what Habermas calls the “linguistification of the sacred,” in volume two of The Theory of Communicative Action. There, basing himself on Durkheim and Mead, Habermas shows how ritual practices and religious worldviews release rational imperatives through the establishment of a communicative grammar that conditions how believers can and should interact with each other, and how they relate to the idea of a supreme being. Habermas writes: “worldviews function as a kind of drive belt that transforms the basic religious consensus into the energy of social solidarity and passes it on to social institutions, thus giving them a moral authority. [. . .] Whereas ritual actions take place at a pregrammatical level, religious worldviews are connected with full-fledged communicative actions.”
  • The thrust of Habermas’s argumentation in this section of The Theory of Communicative Action is to show that religion is the source of the normative binding power of ethical and moral commandments. Yet there is an ambiguity here. While the contents of worldviews may be sublimated into the normative binding of social systems, it is not entirely clear that the structure, or the grammar, of religious worldviews is itself exhausted. Indeed, in “A Genealogical Analysis of the Cognitive Content of Morality,” Habermas resolves this ambiguity by claiming that the horizontal relationship among believers and the vertical relationship between each believer and God shape the structure of our moral relationship to our neighbour, but now under two corresponding aspects: that of solidarity and that of justice. Here, the grammar of one’s religious relationship to God and the corresponding community of believers are like the exoskeleton of a magnificent species, which, once the religious worldviews contained in them have desiccated under the impact of the forces of secularization, leave behind a casing to be used as a structuring shape for other contents.
  • Metaphysical thinking, which for Habermas has become untenable by the very logic of philosophical development, is characterized by three aspects: identity thinking, or the philosophy of origins that postulates the correspondence between being and thought; the doctrine of ideas, which becomes the foundation for idealism, which in turn postulates a tension between what is perceived and what can be conceptualized; and a concomitant strong concept of theory, where the bios theoretikos takes on a quasi-sacred character, and where philosophy becomes the path to salvation through dedication to a life of contemplation. By “postmetaphysical” Habermas means the new self-understanding of reason that we are able to obtain after the collapse of the Hegelian idealist system—the historicization of reason, or the de-substantivation that turns it into a procedural rationality, and, above all, its humbling. It is noteworthy that one of the main aspects of the new postmetaphysical constellation is that in the wake of the collapse of metaphysics, philosophy is forced to recognize that it must co-exist with religious practices and language: Philosophy, even in its postmetaphysical form, will be able neither to replace nor to repress religion as long as religious language is the bearer of semantic content that is inspiring and even indispensable, for this content eludes (for the time being?) the explanatory force of philosophical language and continues to resist translation into reasoning discourses.
  • metaphysical thinking either surrendered philosophy to religion or sought to eliminate religion altogether. In contrast, postmetaphysical thinking recognizes that philosophy can neither replace nor dismissively reject religion, for religion continues to articulate a language whose syntax and content elude philosophy, but from which philosophy continues to derive insights into the universal dimensions of human existence.
  • Habermas claims that even moral discourse cannot translate religious language without something being lost: “Secular languages which only eliminate the substance once intended leave irritations. When sin was converted to culpability, and the breaking of divine commands to an offence against human laws, something was lost.” Still, Habermas’s concern with religion is no longer solely philosophical, nor merely socio-theoretical, but has taken on political urgency. Indeed, he now asks whether modern rule of law and constitutional democracies can generate the motivational resources that nourish them and make them durable. In a series of essays, now gathered in Between Naturalism and Religion, as well as in his Europe: The Faltering Project, Habermas argues that as we have become members of a world society (Weltgesellschaft), we have also been forced to adopt a societal “post-secular self-consciousness.” By this term Habermas does not mean that secularization has come to an end, and even less that it has to be reversed. Instead, he now clarifies that secularization refers very specifically to the secularization of state power and to the general dissolution of metaphysical, overarching worldviews (among which religious views are to be counted). Additionally, as members of a world society that has, if not a fully operational, at least an incipient global public sphere, we have been forced to witness the endurance and vitality of religion. As members of this emergent global public sphere, we are also forced to recognize the plurality of forms of secularization. Secularization did not occur in one form, but in a variety of forms and according to different chronologies.
  • through a critical reading of Rawls, Habermas has begun to translate the postmetaphysical orientation of modern philosophy into a postsecular self-understanding of modern rule of law societies in such a way that religious citizens as well as secular citizens can co-exist, not just by force of a modus vivendi, but out of a sincere mutual respect. “Mutual recognition implies, among other things, that religious and secular citizens are willing to listen and to learn from each other in public debates. The political virtue of treating each other civilly is an expression of distinctive cognitive attitudes.” The cognitive attitudes Habermas is referring to here are the very cognitive competencies that are distinctive of modern, postconventional social agents. Habermas’s recent work on religion, then, is primarily concerned with rescuing for the modern liberal state those motivational and moral resources that it cannot generate or provide itself. At the same time, his recent work is concerned with foregrounding the kind of ethical and moral concerns, preoccupations, and values that can guide us between the Scylla of a society administered from above by the system imperatives of a global economy and political power and the Charybdis of a technological frenzy that places us on the slippery slope of a liberally sanctioned eugenics.
  •  
    Religion in the public sphere: Religion as a catalyst of rationalization, posted by Eduardo Mendieta
Weiye Loh

takchek (读书 ): How Nature selects manuscripts for publication - 0 views

  • The explanation's pretty weak on the statistics, given that it is a scientific journal. Drug Monkey and writedit have more commentary on this particular editorial.
  • Good science, bad science, and whether it will lead to publication or not all rests on the decision of the editor. The gatekeeper.
  • Did you know that Watson and Crick's landmark 1953 paper on the structure of DNA in the journal was not sent out for peer review at all? The reasons, as stated by Nature's Emeritus Editor John Maddox, were: First, the Crick and Watson paper could not have been refereed: its correctness is self-evident. No referee working in the field (Linus Pauling?) could have kept his mouth shut once he saw the structure. Second, it would have been entirely consistent with my predecessor L. J. F. Brimble's way of working that Bragg's commendation should have counted as a referee's approval.
  • ...1 more annotation...
  • The whole business of scientific publishing is murky, and sometimes who you know counts more than what you know in order to get your foot into the 'club'. Even Maddox alluded to the existence of such an 'exclusive' club: Brimble, who used to "take luncheon" at the Athenaeum in London most days, preferred to carry a bundle of manuscripts with him in the pocket of his greatcoat and pass them round among his chums "taking coffee" in the drawing-room after lunch. I set up a more systematic way of doing the job when I became editor in April 1966.
  •  
    How Nature selects manuscripts for publication: Nature devoted an editorial (doi:10.1038/463850a) to explaining its publication process.
Weiye Loh

The Decline Effect and the Scientific Method : The New Yorker - 0 views

  • On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties.
  • the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.
  • Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard against the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • ...30 more annotations...
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
  • In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”
  • At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”
  • the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!”
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.
  • the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results.
  • Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
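Why "significance chasing" works so reliably can be shown with a small simulation (a sketch with invented parameters): slice one pure-noise data set into enough arbitrary subgroups and some comparison will cross the p < .05 boundary by chance alone.

```python
import random
import statistics

random.seed(1)

# One outcome measured on 200 subjects -- pure noise, no real effects anywhere.
n = 200
outcome = [random.gauss(0, 1) for _ in range(n)]

significant = []
for split in range(200):
    # Each "subgroup analysis" is just a random partition of the same subjects,
    # standing in for post-hoc slices (by age, sex, dosage, and so on).
    labels = [random.random() < 0.5 for _ in range(n)]
    a = [x for x, lab in zip(outcome, labels) if lab]
    b = [x for x, lab in zip(outcome, labels) if not lab]
    diff = statistics.mean(a) - statistics.mean(b)
    se = (statistics.pvariance(a) / len(a) + statistics.pvariance(b) / len(b)) ** 0.5
    if abs(diff / se) > 1.96:  # nominally "significant" at p < .05
        significant.append(split)

print(f"nominally significant subgroup effects found in pure noise: {len(significant)}")
```

By construction about one split in twenty passes the test, so a researcher who keeps slicing is nearly guaranteed something "worthy" to report.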
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
  • Scientists, he argues, need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.)
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
  • Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.
  • The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
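The mechanism behind the Crabbe-style outlier getting published, and then shrinking, can be sketched in a few lines of Python (all parameters hypothetical): many labs measure the same small effect with noise, the most dramatic estimate becomes the headline result, and an unbiased follow-up regresses back toward the truth.

```python
import random
import statistics

random.seed(2)

TRUE_EFFECT = 0.2  # assumed small true effect (invented for illustration)
NOISE = 1.0        # per-subject measurement noise
N_LABS = 50

def measure(n=25):
    # One lab's estimate of the effect: the mean of n noisy observations.
    return statistics.mean(random.gauss(TRUE_EFFECT, NOISE) for _ in range(n))

first_round = [measure() for _ in range(N_LABS)]

# The most extreme estimate is the one most likely to be published --
# it is statistically significant precisely because it overshoots.
headline = max(first_round)
replication = measure()  # an unbiased follow-up study

print(f"headline estimate: {headline:.2f}")
print(f"replication:       {replication:.2f}")
print(f"true effect:       {TRUE_EFFECT}")
```

Selecting the maximum of fifty noisy estimates guarantees the published number overstates the truth, so later replications look like a mysterious "decline" when they are simply unbiased.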
  • This suggests that the decline effect is actually a decline of illusion.
  • While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.