
New Media Ethics 2009 course: Group items tagged lies


Weiye Loh

Book Review: "Merchants of Doubt: How a Handful of Scientists Obscured the Tr... - 0 views

  • Merchants of Doubt is exactly what its subtitle says: a historical view of how a handful of scientists have obscured the truth on matters of scientific fact.
  • it was a very small group who were responsible for creating a great deal of doubt on a variety of issues. The book opens in 1953, when the tobacco industry began taking action to obscure the truth about smoking’s harmful effects, just as smoking’s relationship to cancer first received widespread media attention.
  • The tobacco industry exploited the scientific tendency to be conservative in drawing conclusions, to throw up a handful of cherry-picked data and misleading statistics, and to “spin unreasonable doubt.” This tactic, combined with the media’s adherence to the “fairness doctrine”, which was interpreted as giving equal time “to both sides [of an issue], rather than giving accurate weight to both sides”, allowed the tobacco industry to delay regulation for decades.
  • ...8 more annotations...
  • The natural scientific doubt was this: scientists could not say with absolute certainty that smoking caused cancer, because there wasn’t an invariable effect. “Smoking does not kill everyone who smokes, it only kills about half of them.” All scientists could say was that there was an extremely strong association between smoking and serious health risks
  • the “Tobacco Strategy” was created, and had two tactics: To “use normal scientific doubt to undermine the status of actual scientific knowledge” and To exploit the media’s adherence to the fairness doctrine, which would give equal weight to each side of a debate, regardless of any disparity in the supporting scientific evidence
  • Fred Seitz was a scientist who learned the Tobacco Strategy first-hand. He had an impressive resume. An actual rocket scientist, he helped build the atomic bomb in the 1940s, worked for NATO in the 1950s, was president of the U.S. National Academy of Sciences in the 1960s, and of Rockefeller University in the 1970s.
  • After his retirement in 1979, Seitz took on a job for the tobacco industry. Over the next 6 years, he doled out $45 million of R.J. Reynolds’ money to fund biomedical research to create “an extensive body of scientifically well-grounded data useful in defending the industry against attacks” by such means as focussing on alternative “causes or development mechanisms of chronic degenerative diseases imputed to cigarettes.” He was joined by, most notably, two other physicists: William Nierenberg, who also worked on the atom bomb in the 1940s, submarine warfare, and NATO, and was appointed director of the Scripps Institution of Oceanography in 1965; and Robert Jastrow, who founded NASA’s Goddard Institute for Space Studies, which he directed until he retired in 1981 to teach at Dartmouth College.
  • In 1984, these three founded the think tank, the George C. Marshall Institute
  • None of these men were experts in environmental and health issues, but they all “used their scientific credentials to present themselves as authorities, and they used their authority to discredit any science they didn’t like.” They turned out to be wrong, in terms of the science, on every issue they weighed in on. But they turned out to be highly successful in preventing or limiting regulation that the scientific evidence would warrant.
  • The bulk of the book details how these men and others applied the Tobacco Strategy to create doubt on the following issues: the unfeasibility of the Strategic Defense Initiative (Ronald Reagan’s “Star Wars”) and the resultant threat of nuclear winter that Carl Sagan, among others, pointed out; acid rain; depletion of the ozone layer; second-hand smoke; and, most recently and significantly, global warming.
  • Having pointed out the dangers the doubt-mongers pose, Oreskes and Conway propose a remedy: an emphasis on scientific literacy, not in the sense of memorizing scientific facts, but in being able to assess which scientists to trust.
Weiye Loh

Royal Society launches study on openness in science | Royal Society - 0 views

  • Science as a public enterprise: opening up scientific information will look at how scientific information should best be managed to improve the quality of research and build public trust.
  • “Science has always been about open debate. But incidents such as the UEA email leaks have prompted the Royal Society to look at how open science really is.  With the advent of the Internet, the public now expect a greater degree of transparency. The impact of science on people’s lives, and the implications of scientific assessments for society and the economy are now so great that  people won’t just believe scientists when they say “trust me, I’m an expert.” It is not just scientists who want to be able to see inside scientific datasets, to see how robust they are and ask difficult questions about their implications. Science has to adapt.”
  • The study will look at questions such as: What are the benefits and risks of openly sharing scientific data? How does the rise of the blogosphere change scientific research? What responsibility should scientists, their institutions and the funders of research have for open data? How do we make information more accessible and who will pay to do it? Should privately funded scientists be held to the same standards as those who are publicly funded? How do we balance openness against intellectual property rights and, in the case of medical information, how do we protect patient confidentiality? Will the same rules apply to scientists across the world?
  • ...1 more annotation...
  • “Different scientific disciplines share their information very differently.  The human genome project was incredibly open in how data were shared. But in biomedical science you also have drug trials conducted where no results are made public.” 
Weiye Loh

When big pharma pays a publisher to publish a fake journal... : Respectful Insolence - 0 views

  • pharmaceutical company Merck, Sharp & Dohme paid Elsevier to produce a fake medical journal that, to any superficial examination, looked like a real medical journal but was in reality nothing more than advertising for Merck
  • As reported by The Scientist: Merck paid an undisclosed sum to Elsevier to produce several volumes of a publication that had the look of a peer-reviewed medical journal, but contained only reprinted or summarized articles--most of which presented data favorable to Merck products--that appeared to act solely as marketing tools with no disclosure of company sponsorship. "I've seen no shortage of creativity emanating from the marketing departments of drug companies," Peter Lurie, deputy director of the public health research group at the consumer advocacy nonprofit Public Citizen, said, after reviewing two issues of the publication obtained by The Scientist. "But even for someone as jaded as me, this is a new wrinkle." The Australasian Journal of Bone and Joint Medicine, which was published by Excerpta Medica, a division of scientific publishing juggernaut Elsevier, is not indexed in the MEDLINE database, and has no website (not even a defunct one). The Scientist obtained two issues of the journal: Volume 2, Issues 1 and 2, both dated 2003. The issues contained little in the way of advertisements apart from ads for Fosamax, a Merck drug for osteoporosis, and Vioxx.
  • there are numerous "throwaway" journals out there. "Throwaway" journals tend to be defined as journals that are provided free of charge, have a lot of advertising (a high "advertising-to-text" ratio, as it is often described), and contain no original investigations. Other relevant characteristics include: Supported virtually entirely by advertising revenue. Ads tend to be placed within article pages interrupting the articles, rather than between articles, as is the case with most medical journals that accept ads. Virtually the entire content is reviews of existing content of variable (and often dubious) quality. Parasitic: throwaways often summarize peer-reviewed research from real journals. Questionable (at best) peer review. Throwaways tend to cater to an uninvolved and uncritical readership. No original work.
Weiye Loh

After Wakefield: Undoing a decade of damaging debate « Skepticism « Critical ... - 0 views

  • Mass vaccination completely eradicated smallpox, which had been killing one in seven children. Public health campaigns have also eliminated diphtheria, and reduced the incidence of pertussis, tetanus, measles, rubella and mumps to near zero.
  • when vaccination rates drop, diseases can re-emerge in the population. Measles is currently endemic in the United Kingdom, after vaccination rates dropped below 80%. When diphtheria immunization dropped in Russia and Ukraine in the early 1990s, there were over 100,000 cases with 1,200 deaths. In Nigeria in 2001, unfounded fears of the polio vaccine led to a drop in vaccinations, a re-emergence of infection, and the spread of polio to ten other countries.
  • one fear that has experienced a dramatic upsurge over the past decade or so is that vaccines cause autism. The connection between autism and vaccines, in particular the measles, mumps, rubella (MMR) vaccine, has its roots in a paper published by Andrew Wakefield in 1998 in the medical journal The Lancet. This link has already been completely and thoroughly debunked – there is no evidence to substantiate this connection. But over the past two weeks, the full extent of the deception propagated by Wakefield was revealed. The British Medical Journal has a series of articles from journalist Brian Deer (part 1, part 2), who spent years digging into the facts behind Wakefield, his research, and the Lancet paper.
  • ...3 more annotations...
  • Wakefield’s original paper (now retracted) attempted to link gastrointestinal symptoms and regressive autism in 12 children to the administration of the MMR vaccine. Last year Wakefield was stripped of his medical license for unethical behaviour, including undeclared conflicts of interest.  The most recent revelations demonstrate that it wasn’t just sloppy research – it was fraud.
  • Unbelievably, some groups still hold Wakefield up as some sort of martyr, but now we have the facts: Three of the 9 children said to have autism didn’t have autism at all. The paper claimed all 12 children were normal, before administration of the vaccine. In fact, 5 had developmental delays that were detected prior to the administration of the vaccine. Behavioural symptoms in some children were claimed in the paper as being closely related to the vaccine administration, but documentation showed otherwise. What were initially determined to be “unremarkable” colon pathology reports were changed to “non-specific colitis” after a secondary review. Parents were recruited for the “study” by anti-vaccinationists. The study was designed and funded to support future litigation.
  • As Dr. Paul Offit has been quoted as saying, you can’t unring a bell. So what’s going to stop this bell from ringing? Perhaps an awareness of its fraudulent basis will do more to change perceptions than a decade of scientific investigation has been able to achieve. For the sake of population health, we hope so.
Weiye Loh

nanopolitan: "Lies, Damned Lies, and Medical Science" - 0 views

  • That's the title of The Atlantic profile of Dr. John Ioannidis who "has spent his career challenging his peers by exposing their bad science." His 2005 paper in PLoS Medicine was on why most published research findings are false.
  • Ioannidis anticipated that the community might shrug off his findings: sure, a lot of dubious research makes it into journals, but we researchers and physicians know to ignore it and focus on the good stuff, so what’s the big deal? The other paper headed off that claim.
  • He zoomed in on 49 of the most highly regarded research findings in medicine over the previous 13 years, as judged by the science community’s two standard measures: the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals.
  • ...7 more annotations...
  • Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable. That article was published in the Journal of the American Medical Association. [here's the link.]
  • The profile (by David Freedman) has quite a bit on the sociology of research in medical science. Here are a few quotes:
  • “Even when the evidence shows that a particular research idea is wrong, if you have thousands of scientists who have invested their careers in it, they’ll continue to publish papers on it,” he says. “It’s like an epidemic, in the sense that they’re infected with these wrong ideas, and they’re spreading it to other researchers through journals.”
  • the peer-review process often pressures researchers to shy away from striking out in genuinely new directions, and instead to build on the findings of their colleagues (that is, their potential reviewers) in ways that only seem like breakthroughs—as with the exciting-sounding gene linkages (autism genes identified!) and nutritional findings (olive oil lowers blood pressure!) that are really just dubious and conflicting variations on a theme.
  • The ultimate protection against research error and bias is supposed to come from the way scientists constantly retest each other’s results—except they don’t. Only the most prominent findings are likely to be put to the test, because there’s likely to be publication payoff in firming up the proof, or contradicting it.
  • Doctors may notice that their patients don’t seem to fare as well with certain treatments as the literature would lead them to expect, but the field is appropriately conditioned to subjugate such anecdotal evidence to study findings.
  • [B]eing wrong in science is fine, and even necessary—as long as scientists recognize that they blew it, report their mistake openly instead of disguising it as a success, and then move on to the next thing, until they come up with the very occasional genuine breakthrough. But as long as careers remain contingent on producing a stream of research that’s dressed up to seem more right than it is, scientists will keep delivering exactly that.
Weiye Loh

The Dawn of Paid Search Without Keywords - Search Engine Watch (SEW) - 0 views

  • This year will fundamentally change how we think about and buy access to prospects, namely keywords. It is the dawn of paid search without keywords.
  • Google's search results were dominated by the "10 blue links" -- simple headlines, descriptions, and URLs to entice and satisfy searchers. Until they weren't. Universal search wove in images, video, and real-time updates.
  • For most of its history, too, AdWords has been presented in a text format even as the search results morphed into a multimedia experience. The result is that attention was pulled towards organic results at the expense of ads.
  • ...8 more annotations...
  • Google countered that trend with their big push for universal paid search in 2010. It was, perhaps, the most radical evolution to the paid search results since the introduction of Quality Score. Consider the changes:
  • New ad formats: Text is no longer the exclusive medium for advertising on Google. No format exemplifies that more than Product List Ads (and their cousin, Product Extensions). There is no headline, copy or display URL. Instead, it's just a product image, name, price and vendor slotted in the highest positions on the right side. What's more, you don't choose keywords. We also saw display creep into image search results with Image Search Ads and traditional display ads.
  • New calls-to-action: The way you satisfy your search with advertising on Google has evolved as well. Most notably, through the introduction of click-to-call as an option for mobile search ads (as well as the limited release AdWords call metrics). Similarly, more of the site experience is being pulled into the search results. The beta Comparison Ads creates a marketplace for loan and credit card comparison all on Google. The call to action is comparison and filtering, not just clicking on an ad.
  • New buying/monetization models: Cost-per-click (CPC) and cost-per-thousand-impressions (CPM) are no longer the only ways you can buy. Comparison Ads are sold on a cost-per-lead basis. Product listing ads are sold on a cost-per-acquisition (CPA) basis for some advertisers (CPC for most). (A rough cost comparison of these buying models appears after this list.)
  • New display targeting options: Remarketing (a.k.a. retargeting) brought highly focused display buys to the AdWords interface. Specifically, the ability to only show display ads to segments of people who visit your site, in many cases after clicking on a text ad.
  • New advertising automation: In a move that radically simplifies advertising for small businesses, Google began testing Google Boost. It involves no keyword research and no bidding. If you have a Google Places page, you can even do it without a website. It's virtually hands-off advertising for SMBs.
  • Of those changes, Google Product Listing Ads and Google Boost offer the best glimpse into the future of paid search without keywords. They're notable for dramatic departures in every step of how you advertise on Google: Targeting: Automated targeting toward certain audiences as determined by Google vs. keywords chosen by the advertiser. Ads: Product listing ads bring a product search like result in the top position in the right column and Boost promotes a map-like result in a preferred position above organic results. Pricing: CPA and monthly budget caps replace daily budgets and CPC bids.
  • For Google to continue their pace of growth, they need two things. First, another line of business to complement AdWords, and display advertising is it: they've pushed even more aggressively into the channel, most notably with the acquisition of Invite Media, a demand-side platform. Second, the removal of obstacles to profit and incremental growth within AdWords; these barriers are primarily how widely advertisers target and how much they pay for the people they reach (see: "Why Google Wants to Eliminate Bidding In Exchange for Your Profits").
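
The buying models mentioned in the annotations above differ mainly in what the advertiser pays for. Below is a minimal sketch, with invented rates and volumes (none of the numbers come from Google or the article), of how total spend and effective cost per conversion compare under CPC, CPM, and CPA pricing.

```python
# Illustrative comparison of paid-search buying models (all numbers hypothetical).
impressions = 100_000   # times the ad is shown
ctr = 0.02              # assumed click-through rate
conv_rate = 0.05        # assumed conversions per click

clicks = impressions * ctr
conversions = clicks * conv_rate

cpc_bid = 0.50          # dollars per click
cpm_bid = 8.00          # dollars per thousand impressions
cpa_bid = 12.00         # dollars per acquisition/lead

costs = {
    "CPC": clicks * cpc_bid,
    "CPM": impressions / 1000 * cpm_bid,
    "CPA": conversions * cpa_bid,
}

for model, cost in costs.items():
    print(f"{model}: total ${cost:,.2f}, cost per conversion ${cost / conversions:,.2f}")
```

The point is only that under CPA the advertiser pays for outcomes rather than exposure, which is why, as noted above, it has so far been offered only to some advertisers.
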
Weiye Loh

Information technology and economic change: The impact of the printing press | vox - Re... - 0 views

  • Despite the revolutionary technological advance of the printing press in the 15th century, there is precious little economic evidence of its benefits. Using data on 200 European cities between 1450 and 1600, this column finds that economic growth was higher by as much as 60 percentage points in cities that adopted the technology.
  • Historians argue that the printing press was among the most revolutionary inventions in human history, responsible for a diffusion of knowledge and ideas, “dwarfing in scale anything which had occurred since the invention of writing” (Roberts 1996, p. 220). Yet economists have struggled to find any evidence of this information technology revolution in measures of aggregate productivity or per capita income (Clark 2001, Mokyr 2005). The historical data thus present us with a puzzle analogous to the famous Solow productivity paradox – that, until the mid-1990s, the data on macroeconomic productivity showed no effect of innovations in computer-based information technology.
  • In recent work (Dittmar 2010a), I examine the revolution in Renaissance information technology from a new perspective by assembling city-level data on the diffusion of the printing press in 15th-century Europe. The data record each city in which a printing press was established 1450-1500 – some 200 out of over 1,000 historic cities (see also an interview on this site, Dittmar 2010b). The research emphasises cities for three principal reasons. First, the printing press was an urban technology, producing for urban consumers. Second, cities were seedbeds for economic ideas and social groups that drove the emergence of modern growth. Third, city sizes were historically important indicators of economic prosperity, and broad-based city growth was associated with macroeconomic growth (Bairoch 1988, Acemoglu et al. 2005).
  • ...8 more annotations...
  • Figure 1 summarises the data and shows how printing diffused from Mainz 1450-1500. (Figure 1: The diffusion of the printing press.)
  • City-level data on the adoption of the printing press can be exploited to examine two key questions: Was the new technology associated with city growth? And, if so, how large was the association? I find that cities in which printing presses were established 1450-1500 had no prior growth advantage, but subsequently grew far faster than similar cities without printing presses. My work uses a difference-in-differences estimation strategy to document the association between printing and city growth. The estimates suggest early adoption of the printing press was associated with a population growth advantage of 21 percentage points 1500-1600, when mean city growth was 30 percentage points. The difference-in-differences model shows that cities that adopted the printing press in the late 1400s had no prior growth advantage, but grew at least 35 percentage points more than similar non-adopting cities from 1500 to 1600.
  • The restrictions on diffusion meant that cities relatively close to Mainz were more likely to receive the technology, other things equal. Printing presses were established in 205 cities 1450-1500, but not in 40 of Europe’s 100 largest cities. Remarkably, regulatory barriers did not limit diffusion. Printing fell outside existing guild regulations and was not resisted by scribes, princes, or the Church (Neddermeyer 1997, Barbier 2006, Brady 2009).
  • Historians observe that printing diffused from Mainz in “concentric circles” (Barbier 2006). Distance from Mainz was significantly associated with early adoption of the printing press, but neither with city growth before the diffusion of printing nor with other observable determinants of subsequent growth. The geographic pattern of diffusion thus arguably allows us to identify exogenous variation in adoption. Exploiting distance from Mainz as an instrument for adoption, I find large and significant estimates of the relationship between the adoption of the printing press and city growth. I find a 60 percentage point growth advantage between 1500 and 1600. (A stylised sketch of this instrumental-variables setup appears after this list.)
  • The importance of distance from Mainz is supported by an exercise using “placebo” distances. When I employ distance from Venice, Amsterdam, London, or Wittenberg instead of distance from Mainz as the instrument, the estimated print effect is statistically insignificant.
  • Cities that adopted print media benefitted from positive spillovers in human capital accumulation and technological change broadly defined. These spillovers exerted an upward pressure on the returns to labour, made cities culturally dynamic, and attracted migrants. In the pre-industrial era, commerce was a more important source of urban wealth and income than tradable industrial production. Print media played a key role in the development of skills that were valuable to merchants. Following the invention of printing, European presses produced a stream of math textbooks used by students preparing for careers in business.
  • These and hundreds of similar texts worked students through problem sets concerned with calculating exchange rates, profit shares, and interest rates. Broadly, print media was also associated with the diffusion of cutting-edge business practice (such as book-keeping), literacy, and the social ascent of new professionals – merchants, lawyers, officials, doctors, and teachers.
  • The printing press was one of the greatest revolutions in information technology. The impact of the printing press is hard to identify in aggregate data. However, the diffusion of the technology was associated with extraordinary subsequent economic dynamism at the city level. European cities were seedbeds of ideas and business practices that drove the transition to modern growth. These facts suggest that the printing press had very far-reaching consequences through its impact on the development of cities.
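
Dittmar's identification strategy, as summarised above, uses distance from Mainz as an instrument for press adoption within a difference-in-differences framework. The sketch below is a toy two-stage least squares illustration on simulated city-level data; the variables and numbers are fabricated for exposition and are not Dittmar's data or code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical cities

# Fabricated data: distance from Mainz (instrument), press adoption, and 1500-1600 growth.
dist_mainz = rng.uniform(50, 1500, n)                       # km
adopt_prob = np.clip(1.1 - dist_mainz / 1500, 0.05, 0.95)   # closer cities adopt more often
adopted = rng.binomial(1, adopt_prob)
growth = 30 + 60 * adopted + rng.normal(0, 25, n)           # percentage points

# Two-stage least squares: distance instruments for adoption.
X1 = np.column_stack([np.ones(n), dist_mainz])   # first stage: adoption ~ distance
b1, *_ = np.linalg.lstsq(X1, adopted, rcond=None)
adopted_hat = X1 @ b1

X2 = np.column_stack([np.ones(n), adopted_hat])  # second stage: growth ~ fitted adoption
b2, *_ = np.linalg.lstsq(X2, growth, rcond=None)
print(f"IV estimate of the printing-press growth effect: {b2[1]:.1f} percentage points")
```

In this simulation the instrument affects growth only through adoption, which is the exclusion restriction that the "placebo distance" checks described above are probing.
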
Weiye Loh

World Bank Institute: We're also the data bank - video | Media | guardian.co.uk - 0 views

  •  
    Aleem Walji, practice manager for innovation at the World Bank Institute, which assists and advises policy makers and NGOs, tells the Guardian's Activate summit in London about the organisation's commitment to open data
Weiye Loh

Letter from China: China and the Unofficial Truth : The New Yorker - 0 views

  • Chinese citizens are busy dissecting and taunting the meeting on social media. While Premier Wen Jiabao was pledging that the government would “quickly” reverse the widening gap between rich and poor—last year he said it would do so “gradually”—Chinese Web users were scrutinizing photos of delegates arriving for the meeting, and posting photos of their nine-hundred-dollar Hermès belts and Birkin and Celine and Louis Vuitton purses that retail for the price of a car. As Danwei points out, an image that has been making the rounds with particular relish shows the C.E.O. of China Power International Development Ltd, Li Xiaolin, in a salmon-colored suit from Emilio Pucci’s spring-summer 2012 collection—price: nearly two thousand dollars. Web user Cairangduoji paired her photo with the image of dirt-covered barefoot kids in the countryside and the comment: “That amount could help two hundred children wear warm clothes, and avoid the chilly attacks of winter.” The post also appended a quote from Li, of the salmon suit, who purportedly once said, “I think we should open a morality file on all citizens to control everyone and give them a ‘sense of shame.’” (This is no ordinary delegate: Li Xiaolin happens to be the daughter of former Premier Li Peng, who oversaw the crackdown at Tiananmen Square.)
  • Another message making the rounds uses an official high-res photo of the gathering to zoom in on delegates who were captured fast asleep or typing on their smart phones.
Weiye Loh

Skepticblog » Why are textbooks so expensive? - 0 views

  • As an author, I’ve seen how the sales histories of textbooks work. Typically they have a big spike of sales for the first 1-2 years after they are introduced, and that’s when most of the new copies are sold and most of the publisher’s money is made. But by year 3 (and sometimes sooner), the sales plunge and within another year or two, the sales are minuscule. The publishers have only a few options in a situation like this. One option: they can price the book so that the first two years’ worth of sales will pay their costs back before the used copies wipe out their market, which is the major reason new copies cost so much. Another option (especially with high-volume introductory textbooks) is to revise it within 2-3 years after the previous edition, so the new edition will drive all the used copies off the shelves for another two years or so. This is also a common strategy. For my most popular books, the publisher expected me to be working on a new edition almost as soon as the previous edition came out, and 2-3 years later, the new edition (with a distinctive new cover, and sometimes with significant new content as well) starts the sales curve cycle all over again. One of my books is in its eighth edition, but there are introductory textbooks that are in the 15th or 20th edition.
  • For over 20 years now, I’ve heard all sorts of prophets saying that paper textbooks are dead, and predicting that all textbooks would be electronic within a few years. Year after year, I  hear this prediction—and paper textbooks continue to sell just fine, thank you.  Certainly, electronic editions of mass market best-sellers, novels and mysteries (usually cheaply produced with few illustrations) seem to do fine as Kindle editions or eBooks, and that market is well established. But electronic textbooks have never taken off, at least in science textbooks, despite numerous attempts to make them work. Watching students study, I have a few thoughts as to why this is: Students seem to feel that they haven’t “studied” unless they’ve covered their textbook with yellow highlighter markings. Although there are electronic equivalents of the highlighter marker pen, most of today’s students seem to prefer physically marking on a real paper book. Textbooks (especially science books) are heavy with color photographs and other images that don’t often look good on a tiny screen, don’t print out on ordinary paper well, but raise the price of the book. Even an eBook is going to be a lot more expensive with lots of images compared to a mass-market book with no art whatsoever. I’ve watched my students study, and they like the flexibility of being able to use their book just about anywhere—in bright light outdoors away from a power supply especially. Although eBooks are getting better, most still have screens that are hard to read in bright light, and eventually their battery will run out, whether you’re near a power supply or not. Finally, if  you drop your eBook or get it wet, you have a disaster. A textbook won’t even be dented by hard usage, and unless it’s totally soaked and cannot be dried, it does a lot better when wet than any electronic book.
  • A recent study found that digital textbooks were no panacea after all. Only one-third of the students said they were comfortable reading e-textbooks, and three-fourths preferred a paper textbook to an e-textbook if the costs were equal. And the costs have hidden jokers in the deck: e-textbooks may seem cheaper, but they tend to have built-in expiration dates and cannot be resold, so they may be priced below paper textbooks but end up costing about the same. E-textbooks are not that much cheaper for publishers, either, since the writing, editing, art manuscript, promotion, etc., all cost the publisher the same whether the final book is in paper or electronic. The only cost difference is printing and binding and shipping and storage vs. creating the electronic version.
  •  
    But in the 1980s and 1990s, the market changed drastically with the expansion of used book recyclers. They set up shop at the bookstore door near the end of the semester and bought students' new copies for pennies on the dollar. They would show up in my office uninvited and ask if I wanted to sell any of the free adopter's copies that I get from publishers trying to entice me. If you walk through any campus bookstore, nearly all the new copies have been replaced by used copies, usually very tattered and with broken spines. The students naturally gravitate to the cheaper used books (and some prefer them because they like it if a previous owner has highlighted the important stuff). In many bookstores, there are no new copies at all, or just a few that go unsold. What these bargain hunters don't realize is that every used copy purchased means a new copy unsold. Used copies pay nothing to the publisher (or the author, either), so to recoup their costs, publishers must price their new copies to offset the loss of sales to used copies. And so the vicious circle begins: the publisher raises the price of the book again, more students buy used copies, and new copies keep climbing in price.
Weiye Loh

Skepticblog » Further Thoughts on the Ethics of Skepticism - 0 views

  • My recent post “The War Over ‘Nice’” (describing the blogosphere’s reaction to Phil Plait’s “Don’t Be a Dick” speech) has topped out at more than 200 comments.
  • Many readers appear to object (some strenuously) to the very ideas of discussing best practices, seeking evidence of efficacy for skeptical outreach, matching strategies to goals, or encouraging some methods over others. Some seem to express anger that a discussion of best practices would be attempted at all. 
  • No Right or Wrong Way? The milder forms of these objections run along these lines: “Everyone should do their own thing.” “Skepticism needs all kinds of approaches.” “There’s no right or wrong way to do skepticism.” “Why are we wasting time on these abstract meta-conversations?”
  • ...12 more annotations...
  • More critical, in my opinion, is the implication that skeptical research and communication happens in an ethical vacuum. That just isn’t true. Indeed, it is dangerous for a field which promotes and attacks medical treatments, accuses people of crimes, opines about law enforcement practices, offers consumer advice, and undertakes educational projects to pretend that it is free from ethical implications — or obligations.
  • there is no monolithic “one true way to do skepticism.” No, the skeptical world does not break down to nice skeptics who get everything right, and mean skeptics who get everything wrong. (I’m reminded of a quote: “If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being.”) No one has all the answers. Certainly I don’t, and neither does Phil Plait. Nor has anyone actually proposed a uniform, lockstep approach to skepticism. (No one has any ability to enforce such a thing, in any event.)
  • However, none of that implies that all approaches to skepticism are equally valid, useful, or good. As in other fields, various skeptical practices do more or less good, cause greater or lesser harm, or generate various combinations of both at the same time. For that reason, skeptics should strive to find ways to talk seriously about the practices and the ethics of our field. Skepticism has blossomed into something that touches a lot of lives — and yet it is an emerging field, only starting to come into its potential. We need to be able to talk about that potential, and about the pitfalls too.
  • All of the fields from which skepticism borrows (such as medicine, education, psychology, journalism, history, and even arts like stage magic and graphic design) have their own standards of professional ethics. In some cases those ethics are well-explored professional fields in their own right (consider medical ethics, a field with its own academic journals and doctoral programs). In other cases those ethical guidelines are contested, informal, vague, or honored more in the breach. But in every case, there are serious conversations about the ethical implications of professional practice, because those practices impact people’s lives. Why would skepticism be any different?
  • Skeptrack speaker Barbara Drescher (a cognitive psychologist who teaches research methodology) described the complexity of research ethics in her own field. Imagine, she said, that a psychologist were to ask research subjects a question like, “Do your parents like the color red?” Asking this may seem trivial and harmless, but it is nonetheless an ethical trade-off with associated risks (however small) that psychological researchers are ethically obliged to confront. What harm might that question cause if a research subject suffers from erythrophobia, or has a sick parent — or saw their parents stabbed to death?
  • When skeptics undertake scientific, historical, or journalistic research, we should (I argue) consider ourselves bound by some sort of research ethics. For now, we’ll ignore the deeper, detailed question of what exactly that looks like in practical terms (when can skeptics go undercover or lie to get information? how much research does due diligence require? and so on). I’d ask only that we agree on the principle that skeptical research is not an ethical free-for-all.
  • when skeptics communicate with the public, we take on further ethical responsibilities — as do doctors, journalists, and teachers. We all accept that doctors are obliged to follow some sort of ethical code, not only of due diligence and standard of care, but also in their confidentiality, manner, and the factual information they disclose to patients. A sentence that communicates a diagnosis, prescription, or piece of medical advice (“you have cancer” or “undertake this treatment”) is not a contextless statement, but a weighty, risky, ethically serious undertaking that affects people’s lives. It matters what doctors say, and it matters how they say it.
  • Grassroots Ethics It happens that skepticism is my professional field. It’s natural that I should feel bound by the central concerns of that field. How can we gain reliable knowledge about weird things? How can we communicate that knowledge effectively? And, how can we pursue that practice ethically?
  • At the same time, most active skeptics are not professionals. To what extent should grassroots skeptics feel obligated to consider the ethics of skeptical activism? Consider my own status as a medical amateur. I almost need super-caps-lock to explain how much I am not a doctor. My medical training began and ended with a couple First Aid courses (and those way back in the day). But during those short courses, the instructors drummed into us the ethical considerations of our minimal training. When are we obligated to perform first aid? When are we ethically barred from giving aid? What if the injured party is unconscious or delirious? What if we accidentally kill or injure someone in our effort to give aid? Should we risk exposure to blood-borne illnesses? And so on. In a medical context, ethics are determined less by professional status, and more by the harm we can cause or prevent by our actions.
  • police officers are barred from perjury, and journalists from libel — and so are the lay public. We expect schoolteachers not to discuss age-inappropriate topics with our young children, or to persuade our children to adopt their religion; when we babysit for a neighbor, we consider ourselves bound by similar rules. I would argue that grassroots skeptics take on an ethical burden as soon as they speak out on medical matters, legal matters, or other matters of fact, whether from platforms as large as network television, or as small as a dinner party. The size of that burden must depend somewhat on the scale of the risks: the number of people reached, the certainty expressed, the topics tackled.
  • tu-quoque argument.
  • How much time are skeptics going to waste, arguing in a circular firing squad about each other’s free speech? Like it or not, there will always be confrontational people. You aren’t going to get a group of people as varied as skeptics are, and make them all agree to “be nice”. It’s a pipe dream, and a waste of time.
Weiye Loh

Lies, damned lies, and impact factors - The Dayside - 0 views

  • a journal's impact factor for a given year is the average number of citations received by papers published in the journal during the two preceding years. Letters to the editor, editorials, book reviews, and other non-papers are excluded from the impact factor calculation.
  • Review papers that don't necessarily contain new scientific knowledge yet provide useful overviews garner lots of citations. Five of the top 10 perennially highest-impact-factor journals, including the top four, are review journals.
  • Now suppose you're a journal editor or publisher. In these tough financial times, cash-strapped libraries use impact factors to determine which subscriptions to keep and which to cancel. How would you raise your journal's impact factor? Publishing fewer and better papers is one method. Or you could run more review articles. But, as a paper posted recently on arXiv describes, there's another option: you can manipulate the impact factor by publishing your own papers that cite your own journal. (A toy calculation after this list illustrates the effect.)
  • ...1 more annotation...
  • The arXiv paper is by Douglas Arnold and Kristine Fowler; "Nefarious Numbers" is the title they chose for it. Its abstract reads as follows: We investigate the journal impact factor, focusing on the applied mathematics category. We demonstrate that significant manipulation of the impact factor is being carried out by the editors of some journals and that the impact factor gives a very inaccurate view of journal quality, which is poorly correlated with expert opinion.
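
To make the mechanics concrete, here is a toy calculation of a two-year impact factor and of how journal self-citation shifts it. All the numbers are invented for illustration; they do not come from the post or from Arnold and Fowler.

```python
# Toy two-year impact factor calculation (all numbers hypothetical).
citable_items = 120          # papers published in years Y-1 and Y-2
external_citations = 180     # citations received in year Y from other journals

def impact_factor(citations, items):
    return citations / items

baseline = impact_factor(external_citations, citable_items)

# Suppose the journal now runs a few of its own papers or editorials that
# together cite 120 of its recent articles; those citations count too.
self_citations = 120
inflated = impact_factor(external_citations + self_citations, citable_items)

print(f"Impact factor without self-citation: {baseline:.2f}")  # 1.50
print(f"Impact factor with self-citation:    {inflated:.2f}")  # 2.50
```
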
Weiye Loh

Lies, Damned Lies, and Medical Science - Magazine - The Atlantic - 0 views

  • In 2001, rumors were circulating in Greek hospitals that surgery residents, eager to rack up scalpel time, were falsely diagnosing hapless Albanian immigrants with appendicitis. At the University of Ioannina medical school’s teaching hospital, a newly minted doctor named Athina Tatsioni was discussing the rumors with colleagues when a professor who had overheard asked her if she’d like to try to prove whether they were true—he seemed to be almost daring her. She accepted the challenge and, with the professor’s and other colleagues’ help, eventually produced a formal study showing that, for whatever reason, the appendices removed from patients with Albanian names in six Greek hospitals were more than three times as likely to be perfectly healthy as those removed from patients with Greek names. “It was hard to find a journal willing to publish it, but we did,” recalls Tatsioni. “I also discovered that I really liked research.” Good thing, because the study had actually been a sort of audition. The professor, it turned out, had been putting together a team of exceptionally brash and curious young clinicians and Ph.D.s to join him in tackling an unusual and controversial agenda.
  • were drug companies manipulating published research to make their drugs look good? Salanti ticked off data that seemed to indicate they were, but the other team members almost immediately started interrupting. One noted that Salanti’s study didn’t address the fact that drug-company research wasn’t measuring critically important “hard” outcomes for patients, such as survival versus death, and instead tended to measure “softer” outcomes, such as self-reported symptoms (“my chest doesn’t hurt as much today”). Another pointed out that Salanti’s study ignored the fact that when drug-company data seemed to show patients’ health improving, the data often failed to show that the drug was responsible, or that the improvement was more than marginal.
  • but a single study can’t prove everything, she said. Just as I was getting the sense that the data in drug studies were endlessly malleable, Ioannidis, who had mostly been listening, delivered what felt like a coup de grâce: wasn’t it possible, he asked, that drug companies were carefully selecting the topics of their studies—for example, comparing their new drugs against those already known to be inferior to others on the market—so that they were ahead of the game even before the data juggling began? “Maybe sometimes it’s the questions that are biased, not the answers,” he said, flashing a friendly smile. Everyone nodded. Though the results of drug studies often make newspaper headlines, you have to wonder whether they prove anything at all. Indeed, given the breadth of the potential problems raised at the meeting, can any medical-research studies be trusted?
Weiye Loh

Referees' quotes - 2010 - 2010 - Environmental Microbiology - Wiley Online Library - 0 views

  • This paper is desperate. Please reject it completely and then block the author's email ID so they can't use the online system in future.
  • The type of lava vs. diversity has no meaning if only one of each sample is analyzed; multiple samples are required for generality. This controls provenance (e.g. maybe some beetle took a pee on one or the other of the samples, seriously skewing relevance to lava composition).
  • Merry X-mas! First, my recommendation was reject with new submission, because it is necessary to investigate further, but reading a well written manuscript before X-mas makes me feel like Santa Claus.
  • ...6 more annotations...
  • Season's Greetings! I apologise for my slow response but a roast goose prevented me from answering emails for a few days.
  • I started to review this but could not get much past the abstract.
  • Stating that the study is confirmative is not a good start for the Discussion. Rephrasing the first sentence of the Discussion would seem to be a good idea.
  • Reject – More holes than my grandad's string vest!
  • The writing and data presentation are so bad that I had to leave work and go home early and then spend time to wonder what life is about.
  • Sorry for the overdue, it seems to me that ‘overdue’ is my constant, persistent and chronic EMI status. Good that the reviewers are not getting red cards! The editors could create, in addition to the referees quotes, a ranking for ‘on-time’ referees. I would get the bottom place. But fast is not equal to good (I am consoling myself!)
  • It hurts me a little to have so little criticism of a manuscript.
  • Based on titles seen in journals, many authors seem to be more fascinated these days by their methods than by their science. The authors should be encouraged to abstract the main scientific (i.e., novel) finding into the title.
Weiye Loh

RealClimate: Going to extremes - 0 views

  • There are two new papers in Nature this week that go right to the heart of the conversation about extreme events and their potential relationship to climate change.
  • Let’s start with some very basic, but oft-confused points: Not all extremes are the same. Discussions of ‘changes in extremes’ in general without specifying exactly what is being discussed are meaningless. A tornado is an extreme event, but one whose causes, sensitivity to change and impacts have nothing to do with those related to an ice storm, or a heat wave or cold air outbreak or a drought. There is no theory or result that indicates that climate change increases extremes in general. This is a corollary of the previous statement – each kind of extreme needs to be looked at specifically – and often regionally as well. Some extremes will become more common in future (and some less so). We will discuss the specifics below. Attribution of extremes is hard. There are limited observational data to start with, insufficient testing of climate model simulations of extremes, and (so far) limited assessment of model projections.
  • The two new papers deal with the attribution of a single flood event (Pall et al), and the attribution of increased intensity of rainfall across the Northern Hemisphere (Min et al). While these issues are linked, they are quite distinct, and the two approaches are very different too.
  • ...4 more annotations...
  • The aim of the Pall et al paper was to examine a specific event – floods in the UK in Oct/Nov 2000. Normally, with a single event there isn’t enough information to do any attribution, but Pall et al set up a very large ensemble of runs starting from roughly the same initial conditions to see how often the flooding event occurred. Note that flooding was defined as more than just intense rainfall – the authors tracked runoff and streamflow as part of their modelled setup. Then they repeated the same experiments with pre-industrial conditions (less CO2 and cooler temperatures). If the number of times a flooding event occurred increased in the present-day setup, you can estimate how much more likely the event was because of climate change. The results gave varying numbers but in nine out of ten cases the chance increased by more than 20%, and in two out of three cases by more than 90%. This kind of fractional attribution (if an event is 50% more likely with anthropogenic effects, that implies it is 33% attributable; see the worked example after this list) has been applied also to the 2003 European heatwave, and will undoubtedly be applied more often in future. One neat and interesting feature of these experiments was that they used the climateprediction.net setup to harness the power of the public’s idle screensaver time.
  • The second paper is a more standard detection and attribution study. By looking at the signatures of climate change in precipitation intensity and comparing that to the internal variability and the observation, the researchers conclude that the probability of intense precipitation on any given day has increased by 7 percent over the last 50 years – well outside the bounds of natural variability. This is a result that has been suggested before (i.e. in the IPCC report (Groisman et al, 2005), but this was the first proper attribution study (as far as I know). The signal seen in the data though, while coherent and similar to that seen in the models, was consistently larger, perhaps indicating the models are not sensitive enough, though the El Niño of 1997/8 may have had an outsize effect.
  • Both papers were submitted in March last year, prior to the 2010 floods in Pakistan, Australia, Brazil or the Philippines, and so did not deal with any of the data or issues associated with those floods. However, while questions of attribution come up whenever something weird happens to the weather, these papers demonstrate clearly that the instant pop-attributions we are always being asked for are just not very sensible. It takes an enormous amount of work to do these kinds of tests, and they just can’t be done instantly. As they are done more often though, we will develop a better sense for the kinds of events that we can say something about, and those we can’t.
  • There is always concern that the start and end points for any trend study are not appropriate (both sides are guilty on this IMO). I have read that precipitation studies are more difficult due to sparse data, and it seems we would have seen precipitation trend graphs a lot more often by now if it were straightforward. 7% seems to be a large change not to have been noted (vocally) earlier; it seems like there is more to this story.
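
The fractional attribution arithmetic mentioned above can be written out directly: if p1 is the probability of the event under present-day (anthropogenic) conditions and p0 the probability under pre-industrial conditions, the fraction of attributable risk is FAR = 1 - p0/p1. A minimal worked example, using illustrative probabilities rather than anything from the Pall et al paper:

```python
# Fraction of attributable risk (FAR) for an extreme event.
# p1: probability of the event in present-day (anthropogenic) conditions
# p0: probability of the event in pre-industrial conditions
def far(p1: float, p0: float) -> float:
    return 1 - p0 / p1

# An event 50% more likely with anthropogenic effects (risk ratio 1.5)
# is 33% attributable -- the example given in the post.
print(far(p1=0.015, p0=0.010))  # ~0.333

# A chance increased by more than 90% (risk ratio ~1.9) gives a FAR of roughly 0.47.
print(far(p1=0.019, p0=0.010))  # ~0.474
```
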
Weiye Loh

Science, Strong Inference -- Proper Scientific Method - 0 views

  • Scientists these days tend to keep up a polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist's field and methods of study are as good as every other scientist's and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants.
  • Why should there be such rapid advances in some fields and not in others? I think the usual explanations that we tend to think of - such as the tractability of the subject, or the quality or education of the men drawn into it, or the size of research contracts - are important but inadequate. I have begun to believe that the primary factor in scientific advance is an intellectual one. These rapidly moving fields are fields where a particular method of doing scientific research is systematically used and taught, an accumulative method of inductive inference that is so effective that I think it should be given the name of "strong inference." I believe it is important to examine this method, its use and history and rationale, and to see whether other groups and individuals might learn to adopt it profitably in their own scientific and intellectual work. In its separate elements, strong inference is just the simple and old-fashioned method of inductive inference that goes back to Francis Bacon. The steps are familiar to every college student and are practiced, off and on, by every scientist. The difference comes in their systematic application. Strong inference consists of applying the following steps to every problem in science, formally and explicitly and regularly: Devising alternative hypotheses; Devising a crucial experiment (or several of them), with alternative possible outcomes, each of which will, as nearly is possible, exclude one or more of the hypotheses; Carrying out the experiment so as to get a clean result; Recycling the procedure, making subhypotheses or sequential hypotheses to refine the possibilities that remain, and so on.
  • On any new problem, of course, inductive inference is not as simple and certain as deduction, because it involves reaching out into the unknown. Steps 1 and 2 require intellectual inventions, which must be cleverly chosen so that hypothesis, experiment, outcome, and exclusion will be related in a rigorous syllogism; and the question of how to generate such inventions is one which has been extensively discussed elsewhere (2, 3). What the formal schema reminds us to do is to try to make these inventions, to take the next step, to proceed to the next fork, without dawdling or getting tied up in irrelevancies.
  • ...28 more annotations...
  • It is clear why this makes for rapid and powerful progress. For exploring the unknown, there is no faster method; this is the minimum sequence of steps. Any conclusion that is not an exclusion is insecure and must be rechecked. Any delay in recycling to the next set of hypotheses is only a delay. Strong inference, and the logical tree it generates, are to inductive reasoning what the syllogism is to deductive reasoning in that it offers a regular method for reaching firm inductive conclusions one after the other as rapidly as possible.
  • "But what is so novel about this?" someone will say. This is the method of science and always has been, why give it a special name? The reason is that many of us have almost forgotten it. Science is now an everyday business. Equipment, calculations, lectures become ends in themselves. How many of us write down our alternatives and crucial experiments every day, focusing on the exclusion of a hypothesis? We may write our scientific papers so that it looks as if we had steps 1, 2, and 3 in mind all along. But in between, we do busywork. We become "method- oriented" rather than "problem-oriented." We say we prefer to "feel our way" toward generalizations. We fail to teach our students how to sharpen up their inductive inferences. And we do not realize the added power that the regular and explicit use of alternative hypothesis and sharp exclusion could give us at every step of our research.
  • A distinguished cell biologist rose and said, "No two cells give the same properties. Biology is the science of heterogeneous systems." And he added privately. "You know there are scientists, and there are people in science who are just working with these over-simplified model systems - DNA chains and in vitro systems - who are not doing science at all. We need their auxiliary work: they build apparatus, they make minor studies, but they are not scientists." To which Cy Levinthal replied: "Well, there are two kinds of biologists, those who are looking to see if there is one thing that can be understood and those who keep saying it is very complicated and that nothing can be understood. . . . You must study the simplest system you think has the properties you are interested in."
  • At the 1958 Conference on Biophysics, at Boulder, there was a dramatic confrontation between the two points of view. Leo Szilard said: "The problems of how enzymes are induced, of how proteins are synthesized, of how antibodies are formed, are closer to solution than is generally believed. If you do stupid experiments, and finish one a year, it can take 50 years. But if you stop doing experiments for a little while and think how proteins can possibly be synthesized, there are only about 5 different ways, not 50! And it will take only a few experiments to distinguish these." One of the young men added: "It is essentially the old question: How small and elegant an experiment can you perform?" These comments upset a number of those present. An electron microscopist said. "Gentlemen, this is off the track. This is philosophy of science." Szilard retorted. "I was not quarreling with third-rate scientists: I was quarreling with first-rate scientists."
  • Any criticism or challenge to consider changing our methods strikes of course at all our ego-defenses. But in this case the analytical method offers the possibility of such great increases in effectiveness that it is unfortunate that it cannot be regarded more often as a challenge to learning rather than as challenge to combat. Many of the recent triumphs in molecular biology have in fact been achieved on just such "oversimplified model systems," very much along the analytical lines laid down in the 1958 discussion. They have not fallen to the kind of men who justify themselves by saying "No two cells are alike," regardless of how true that may ultimately be. The triumphs are in fact triumphs of a new way of thinking.
  • The emphasis on strong inference is also partly due to the nature of the fields themselves. Biology, with its vast informational detail and complexity, is a "high-information" field, where years and decades can easily be wasted on the usual type of "low-information" observations or experiments if one does not think carefully in advance about what the most important and conclusive experiments would be. And in high-energy physics, both the "information flux" of particles from the new accelerators and the million-dollar costs of operation have forced a similar analytical approach. It pays to have a top-notch group debate every experiment ahead of time; and the habit spreads throughout the field.
  • Historically, I think, there have been two main contributions to the development of a satisfactory strong-inference method. The first is that of Francis Bacon (13). He wanted a "surer method" of "finding out nature" than either the logic-chopping or all-inclusive theories of the time or the laudable but crude attempts to make inductions "by simple enumeration." He did not merely urge experiments, as some suppose; he showed the fruitfulness of interconnecting theory and experiment so that the one checked the other. Of the many inductive procedures he suggested, the most important, I think, was the conditional inductive tree, which proceeded from alternative hypotheses (possible "causes," as he calls them), through crucial experiments ("Instances of the Fingerpost"), to exclusion of some alternatives and adoption of what is left ("establishing axioms"). His Instances of the Fingerpost are explicitly at the forks in the logical tree, the term being borrowed "from the fingerposts which are set up where roads part, to indicate the several directions."
  • Here was a method that could separate off the empty theories! Bacon said the inductive method could be learned by anybody, just like learning to "draw a straighter line or more perfect circle . . . with the help of a ruler or a pair of compasses." "My way of discovering sciences goes far to level men's wit and leaves but little to individual excellence, because it performs everything by the surest rules and demonstrations." Even occasional mistakes would not be fatal. "Truth will sooner come out from error than from confusion."
  • Nevertheless there is a difficulty with this method. As Bacon emphasizes, it is necessary to make "exclusions." He says, "The induction which is to be available for the discovery and demonstration of sciences and arts, must analyze nature by proper rejections and exclusions, and then, after a sufficient number of negatives, come to a conclusion on the affirmative instances." "[To man] it is granted only to proceed at first by negatives, and at last to end in affirmatives after exclusion has been exhausted." Or, as the philosopher Karl Popper says today, there is no such thing as proof in science - because some later alternative explanation may be as good or better - so that science advances only by disproofs. There is no point in making hypotheses that are not falsifiable, because such hypotheses do not say anything; "it must be possible for an empirical scientific system to be refuted by experience" (14).
  • The difficulty is that disproof is a hard doctrine. If you have a hypothesis and I have another hypothesis, evidently one of them must be eliminated. The scientist seems to have no choice but to be either soft-headed or disputatious. Perhaps this is why so many tend to resist the strong analytical approach and why some great scientists are so disputatious.
  • Fortunately, it seems to me, this difficulty can be removed by the use of a second great intellectual invention, the "method of multiple hypotheses," which is what was needed to round out the Baconian scheme. This is a method that was put forward by T.C. Chamberlin (15), a geologist at Chicago at the turn of the century, who is best known for his contribution to the Chamberlin-Moulton hypothesis of the origin of the solar system.
  • Chamberlin says our trouble is that when we make a single hypothesis, we become attached to it. "The moment one has offered an original explanation for a phenomenon which seems satisfactory, that moment affection for his intellectual child springs into existence, and as the explanation grows into a definite theory his parental affections cluster about his offspring and it grows more and more dear to him. . . . There springs up also unwittingly a pressing of the theory to make it fit the facts and a pressing of the facts to make them fit the theory..." "To avoid this grave danger, the method of multiple working hypotheses is urged. It differs from the simple working hypothesis in that it distributes the effort and divides the affections. . . . Each hypothesis suggests its own criteria, its own method of proof, its own method of developing the truth, and if a group of hypotheses encompass the subject on all sides, the total outcome of means and of methods is full and rich."
  • The conflict and exclusion of alternatives that is necessary to sharp inductive inference has been all too often a conflict between men, each with his single Ruling Theory. But whenever each man begins to have multiple working hypotheses, it becomes purely a conflict between ideas. It becomes much easier then for each of us to aim every day at conclusive disproofs - at strong inference - without either reluctance or combativeness. In fact, when there are multiple hypotheses, which are not anyone's "personal property," and when there are crucial experiments to test them, the daily life in the laboratory takes on an interest and excitement it never had, and the students can hardly wait to get to work to see how the detective story will come out. It seems to me that this is the reason for the development of those distinctive habits of mind and the "complex thought" that Chamberlin described, the reason for the sharpness, the excitement, the zeal, the teamwork - yes, even international teamwork - in molecular biology and high-energy physics today. What else could be so effective?
  • Unfortunately, I think, there are other areas of science today that are sick by comparison, because they have forgotten the necessity for alternative hypotheses and disproof. Each man has only one branch - or none - on the logical tree, and it twists at random without ever coming to the need for a crucial decision at any point. We can see from the external symptoms that there is something scientifically wrong. The Frozen Method, The Eternal Surveyor, The Never Finished, The Great Man With a Single Hypothesis, The Little Club of Dependents, The Vendetta, The All-Encompassing Theory Which Can Never Be Falsified.
  • a "theory" of this sort is not a theory at all, because it does not exclude anything. It predicts everything, and therefore does not predict anything. It becomes simply a verbal formula which the graduate student repeats and believes because the professor has said it so often. This is not science, but faith; not theory, but theology. Whether it is hand-waving or number-waving, or equation-waving, a theory is not a theory unless it can be disproved. That is, unless it can be falsified by some possible experimental outcome.
  • the work methods of a number of scientists have been testimony to the power of strong inference. Is success not due in many cases to systematic use of Bacon's "surest rules and demonstrations" as much as to rare and unattainable intellectual power? Faraday's famous diary (16), or Fermi's notebooks (3, 17), show how these men believed in the effectiveness of daily steps in applying formal inductive methods to one problem after another.
  • Surveys, taxonomy, design of equipment, systematic measurements and tables, theoretical computations - all have their proper and honored place, provided they are parts of a chain of precise induction of how nature works. Unfortunately, all too often they become ends in themselves, mere time-serving from the point of view of real scientific advance, a hypertrophied methodology that justifies itself as a lore of respectability.
  • We speak piously of taking measurements and making small studies that will "add another brick to the temple of science." Most such bricks just lie around the brickyard (20). Tables of constants have their place and value, but the study of one spectrum after another, if not frequently re-evaluated, may become a substitute for thinking, a sad waste of intelligence in a research laboratory, and a mistraining whose crippling effects may last a lifetime.
  • Beware of the man of one method or one instrument, either experimental or theoretical. He tends to become method-oriented rather than problem-oriented. The method-oriented man is shackled; the problem-oriented man is at least reaching freely toward what is most important. Strong inference redirects a man to problem-orientation, but it requires him to be willing repeatedly to put aside his last methods and teach himself new ones.
  • anyone who asks the question about scientific effectiveness will also conclude that much of the mathematizing in physics and chemistry today is irrelevant if not misleading. The great value of mathematical formulation is that when an experiment agrees with a calculation to five decimal places, a great many alternative hypotheses are pretty well excluded (though the Bohr theory and the Schrödinger theory both predict exactly the same Rydberg constant!). But when the fit is only to two decimal places, or one, it may be a trap for the unwary; it may be no better than any rule-of-thumb extrapolation, and some other kind of qualitative exclusion might be more rigorous for testing the assumptions and more important to scientific understanding than the quantitative fit.
  • Today we preach that science is not science unless it is quantitative. We substitute correlations for causal studies, and physical equations for organic reasoning. Measurements and equations are supposed to sharpen thinking, but, in my observation, they more often tend to make the thinking noncausal and fuzzy. They tend to become the object of scientific manipulation instead of auxiliary tests of crucial inferences.
  • Many - perhaps most - of the great issues of science are qualitative, not quantitative, even in physics and chemistry. Equations and measurements are useful when and only when they are related to proof; but proof or disproof comes first and is in fact strongest when it is absolutely convincing without any quantitative measurement.
  • you can catch phenomena in a logical box or in a mathematical box. The logical box is coarse but strong. The mathematical box is fine-grained but flimsy. The mathematical box is a beautiful way of wrapping up a problem, but it will not hold the phenomena unless they have been caught in a logical box to begin with.
  • Of course it is easy - and all too common - for one scientist to call the others unscientific. My point is not that my particular conclusions here are necessarily correct, but that we have long needed some absolute standard of possible scientific effectiveness by which to measure how well we are succeeding in various areas - a standard that many could agree on and one that would be undistorted by the scientific pressures and fashions of the times and the vested interests and busywork that they develop. It is not public evaluation I am interested in so much as a private measure by which to compare one's own scientific performance with what it might be. I believe that strong inference provides this kind of standard of what the maximum possible scientific effectiveness could be - as well as a recipe for reaching it.
  • The strong-inference point of view is so resolutely critical of methods of work and values in science that any attempt to compare specific cases is likely to sound both smug and destructive. Mainly one should try to teach it by example and by exhorting to self-analysis and self-improvement only in general terms.
  • one severe but useful private test - a touchstone of strong inference - that removes the necessity for third-person criticism, because it is a test that anyone can learn to carry with him for use as needed. It is our old friend the Baconian "exclusion," but I call it "The Question." Obviously it should be applied as much to one's own thinking as to others'. It consists of asking in your own mind, on hearing any scientific explanation or theory put forward, "But sir, what experiment could disprove your hypothesis?"; or, on hearing a scientific experiment described, "But sir, what hypothesis does your experiment disprove?"
  • It is not true that all science is equal; or that we cannot justly compare the effectiveness of scientists by any method other than a mutual-recommendation system. The man to watch, the man to put your money on, is not the man who wants to make "a survey" or a "more detailed study" but the man with the notebook, the man with the alternative hypotheses and the crucial experiments, the man who knows how to answer your Question of disproof and is already working on it.
    There is so much bad science and bad statistics in media reports, publications, and everyday conversation that I think it is important to understand facts, proofs, and the pitfalls associated with them.
Weiye Loh

Rationally Speaking: The problem of replicability in science - 0 views

  • The problem of replicability in science, by Massimo Pigliucci (illustration from xkcd)
  • In recent months much has been written about the apparent fact that a surprising, indeed disturbing, number of scientific findings cannot be replicated, or when replicated the effect size turns out to be much smaller than previously thought.
  • Arguably, the recent streak of articles on this topic began with one penned by David Freedman in The Atlantic, and provocatively entitled “Lies, Damned Lies, and Medical Science.” In it, the major character was John Ioannidis, the author of some influential meta-studies about the low degree of replicability and high number of technical flaws in a significant portion of published papers in the biomedical literature.
  • ...18 more annotations...
  • As Freedman put it in The Atlantic: “80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.” Ioannidis himself was quoted uttering some sobering words for the medical community (and the public at large): “Science is a noble endeavor, but it’s also a low-yield endeavor. I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”
  • Julia and I actually addressed this topic during a Rationally Speaking podcast, featuring as guest our friend Steve Novella, of Skeptics’ Guide to the Universe and Science-Based Medicine fame. But while Steve did quibble with the tone of the Atlantic article, he agreed that Ioannidis’ results are well known and accepted by the medical research community. Steve did point out that it should not be surprising that results get better and better as one moves toward more stringent protocols like large randomized trials, but it seems to me that one should be surprised (actually, appalled) by the fact that even there the percentage of flawed studies is high — not to mention the fact that most studies are in fact neither large nor properly randomized.
  • The second big recent blow to public perception of the reliability of scientific results is an article published in The New Yorker by Jonah Lehrer, entitled “The truth wears off.” Lehrer also mentions Ioannidis, but the bulk of his essay is about findings in psychiatry, psychology and evolutionary biology (and even in research on the paranormal!).
  • In these disciplines there are now several documented cases of results that were initially spectacularly positive — for instance the effects of second generation antipsychotic drugs, or the hypothesized relationship between a male’s body symmetry and the quality of his genes — that turned out to be increasingly difficult to replicate over time, with the original effect sizes being cut down dramatically, or even disappearing altogether.
  • As Lehrer concludes at the end of his article: “Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling.”
  • None of this should actually be particularly surprising to any practicing scientist. If you have spent a significant time of your life in labs and reading the technical literature, you will appreciate the difficulties posed by empirical research, not to mention a number of issues such as the fact that few scientists ever actually bother to replicate someone else’s results, for the simple reason that there is no Nobel (or even funded grant, or tenured position) waiting for the guy who arrived second.
  • In the midst of this I was directed by a tweet by my colleague Neil deGrasse Tyson (who has also appeared on the RS podcast, though in a different context) to a recent ABC News article penned by John Allen Paulos, which was meant to explain the decline effect in science.
  • Paulos’ article is indeed concise and on the mark (though several of the explanations he proposes were already brought up in both the Atlantic and New Yorker essays), but it doesn’t really make things much better.
  • Paulos suggests that one explanation for the decline effect is the well known statistical phenomenon of the regression toward the mean. This phenomenon is responsible, among other things, for a fair number of superstitions: you’ve probably heard of some athletes’ and other celebrities’ fear of being featured on the cover of a magazine after a particularly impressive series of accomplishments, because this brings “bad luck,” meaning that the following year one will not be able to repeat the performance at the same level. This is actually true, not because of magical reasons, but simply as a result of the regression to the mean: extraordinary performances are the result of a large number of factors that have to line up just right for the spectacular result to be achieved. The statistical chances of such an alignment to repeat itself are low, so inevitably next year’s performance will likely be below par. Paulos correctly argues that this also explains some of the decline effect of scientific results: the first discovery might have been the result of a number of factors that are unlikely to repeat themselves in exactly the same way, thus reducing the effect size when the study is replicated.
  • Another major determinant of the unreliability of scientific results mentioned by Paulos is the well-known problem of publication bias: crudely put, science journals (particularly the high-profile ones, like Nature and Science) are interested only in positive, spectacular, “sexy” results. This creates a powerful filter against negative, or marginally significant results. What you see in science journals, in other words, isn’t a statistically representative sample of scientific results, but a highly biased one, in favor of positive outcomes. No wonder that when people try to repeat the feat they often come up empty handed.
  • A third cause for the problem, not mentioned by Paulos but addressed in the New Yorker article, is the selective reporting of results by scientists themselves. This is essentially the same phenomenon as the publication bias, except that this time it is scientists themselves, not editors and reviewers, who don’t bother to submit for publication results that are either negative or not strongly conclusive. Again, the outcome is that what we see in the literature isn’t all the science that we ought to see. And it’s no good to argue that it is the “best” science, because the quality of scientific research is measured by the appropriateness of the experimental protocols (including the use of large samples) and of the data analyses — not by whether the results happen to confirm the scientist’s favorite theory.
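    The mechanisms described in the three annotations above - regression toward the mean plus the publication and selective-reporting filter - can be made concrete with a toy simulation. The sketch below is not from the article; the effect size, noise level, and "publication threshold" are arbitrary assumptions chosen only to show how selecting impressive initial results inflates them, so that honest replications then appear to "decline" back toward the true value.

    import random

    # Toy simulation (illustrative assumptions only): a true effect of 0.2 is
    # measured with noise; only estimates clearing a publication threshold get
    # "published", and each published study is then replicated once without
    # any filter. Published originals look inflated; replications regress
    # toward the true effect - a decline effect with nothing mysterious in it.
    random.seed(42)
    TRUE_EFFECT = 0.2
    NOISE_SD = 0.3
    PUBLISH_THRESHOLD = 0.5

    def run_study() -> float:
        """One noisy estimate of the true effect."""
        return random.gauss(TRUE_EFFECT, NOISE_SD)

    published, replications = [], []
    for _ in range(10_000):
        original = run_study()
        if original >= PUBLISH_THRESHOLD:      # publication / reporting filter
            published.append(original)
            replications.append(run_study())   # unfiltered replication

    mean = lambda xs: sum(xs) / len(xs)
    print(f"true effect:              {TRUE_EFFECT:.2f}")
    print(f"mean published original:  {mean(published):.2f}")
    print(f"mean replication:         {mean(replications):.2f}")

    Under these assumptions the published originals average well above the true effect while the replications cluster near it, which is exactly the pattern the Lehrer and Paulos pieces describe.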
  • The conclusion of all this is not, of course, that we should throw the baby (science) out with the bath water (bad or unreliable results). But scientists should also be under no illusion that these are rare anomalies that do not affect scientific research at large. Too much emphasis is being put on the “publish or perish” culture of modern academia, with the result that graduate students are explicitly instructed to go for the SPU’s — Smallest Publishable Units — when they have to decide how much of their work to submit to a journal. That way they maximize the number of their publications, which maximizes the chances of landing a postdoc position, and then a tenure track one, and then of getting grants funded, and finally of getting tenure. The result is that, according to statistics published by Nature, it turns out that about ⅓ of published studies is never cited (not to mention replicated!).
  • “Scientists these days tend to keep up the polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist’s field and methods of study are as good as every other scientist’s, and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants. ... We speak piously of taking measurements and making small studies that will ‘add another brick to the temple of science.’ Most such bricks lie around the brickyard.”
    • Weiye Loh
       
      Written by John Platt in a "Science" article published in 1964
  • Most damning of all, however, is the potential effect that all of this may have on science’s already dubious reputation with the general public (think evolution-creation, vaccine-autism, or climate change)
  • “If we don’t tell the public about these problems, then we’re no better than non-scientists who falsely claim they can heal. If the drugs don’t work and we’re not sure how to treat something, why should we claim differently? Some fear that there may be less funding because we stop claiming we can prove we have miraculous treatments. But if we can’t really provide those miracles, how long will we be able to fool the public anyway? The scientific enterprise is probably the most fantastic achievement in human history, but that doesn’t mean we have a right to overstate what we’re accomplishing.”
  • Joseph T. Lapp said... But is any of this new for science? Perhaps science has operated this way all along, full of fits and starts, mostly duds. How do we know that this isn't the optimal way for science to operate? My issues are with the understanding of science that high school graduates have, and with the reporting of science.
    • Weiye Loh
       
      It's the media at fault again.
  • What seems to have emerged in recent decades is a change in the institutional setting that got science advancing spectacularly since the establishment of the Royal Society. Flaws in the system such as corporate funded research, pal-review instead of peer-review, publication bias, science entangled with policy advocacy, and suchlike, may be distorting the environment, making it less suitable for the production of good science, especially in some fields.
  • Remedies should exist, but they should evolve rather than being imposed on a reluctant sociological-economic science establishment driven by powerful motives such as professional advance or funding. After all, who or what would have the authority to impose those rules, other than the scientific establishment itself?
Weiye Loh

Small answers to the big questions - Chris Blattman - 0 views

  • A reporter emailed me this morning to see if I could answer a few questions about poverty. Sure, I said. The emailed questions that followed? Is it realistic to think that poverty can one day end? What, in your view, are the best global solutions? How urgent is it to act (in the context of climate change)?
  • My first reaction: thanks for asking the easy questions, lady. Was this serious? How can one possibly answer the grand questions of development in a few sentences?
  • Is it realistic to think that poverty can one day end? In America, you can be poor but own a car, a television, and have food on the table every day. In northern Uganda, that would make you a very wealthy man. Do I see a world where nearly every household has their basic needs covered, plus some of the comforts of life? Absolutely. I imagine most places on the planet will get to what we now think of as middle-income status—perhaps $8,000 to $14,000 per head in 2011 dollars and purchasing ability. The poorest nations will probably be in those places least advantageous to trade (the landlocked, for instance) and where cultures or political systems restrict innovation and freedoms. But poverty is a relative measure, and short of a Star Trek world where you can summon food and items out of a wall unit, there will always be people who struggle to keep up.
  • ...5 more annotations...
  • What, in your view, are the best global solutions?
  • There are plenty of aid programs that seem to work, from de-worming to small business grants to incentives to send children to school. But none of these programs are likely to have transformative effects.
  • The difference between a country with $1,500 and $15,000 of income a head is simple: industry. All the microfinance and microenterprise programs in the world are not going to build large firms and import technology and provide most people with what they really want: a stable job, regular wages, and a decent work environment.
  • How you get these firms is the tricky question. Only a few firms will be home grown; most will be firms that spread across borders, because they have the markets and know-how. Probably we’ll need to see wages rise in China and India before manufacturing ever spreads to the poorest places on the planet, like Central Asia and Africa. The countries that will get them first are the ones that are close to trade routes, have stable political climates, make it easy to get finance, are open to trade, have large domestic markets, have able and educated workforces (i.e. secondary education), and have leaders in charge who don’t see the industrial sector as either a threat to their power or a garden from which they get to select the sweetest fruits for themselves.
  • How urgent is it to act (in the context of climate change)? The short answer: I wouldn’t know. For the US and China and Europe and India, they must change, because if they don’t, nothing will. For the Ugandas or Uzbekistans or Bolivias of the world, I can’t see it making a difference. Let them develop as green as possible, but let’s not impede their growth because of it, and rob them of the opportunity we took ourselves.
Weiye Loh

EdgeRank: The Secret Sauce That Makes Facebook's News Feed Tick - 0 views

  • but News Feed only displays a subset of the stories generated by your friends — if it displayed everything, there’s a good chance you’d be overwhelmed. Developers are always trying to make sure their sites and apps are publishing stories that make the cut, which has led to the concept of “News Feed Optimization”, and their success is dictated by EdgeRank.
  • At a high level, the EdgeRank formula is fairly straightforward. But first, some definitions: every item that shows up in your News Feed is considered an Object. If you have an Object in the News Feed (say, a status update), whenever another user interacts with that Object they’re creating what Facebook calls an Edge, which includes actions like tags and comments. Each Edge has three components important to Facebook’s algorithm: First, there’s an affinity score between the viewing user and the item’s creator — if you send your friend a lot of Facebook messages and check their profile often, then you’ll have a higher affinity score for that user than you would, say, an old acquaintance you haven’t spoken to in years. Second, there’s a weight given to each type of Edge. A comment probably has more importance than a Like, for example. And finally there’s the most obvious factor — time. The older an Edge is, the less important it becomes.
  • Multiply these factors for each Edge then add the Edge scores up and you have an Object’s EdgeRank. And the higher that is, the more likely your Object is to appear in the user’s feed. It’s worth pointing out that the act of creating an Object is also considered an Edge, which is what allows Objects to show up in your friends’ feeds before anyone has interacted with them.
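    To make that arithmetic concrete, here is a minimal sketch of the scoring rule as the annotations above describe it: each Edge contributes affinity × interaction weight × time decay, and an Object's EdgeRank is the sum over its Edges. The field names, the exponential decay, and the specific weights are my own assumptions for illustration - Facebook has not published the actual formula or its parameters.

    from dataclasses import dataclass
    from time import time

    @dataclass
    class Edge:
        affinity: float    # viewer's affinity for the edge's creator (assumed 0..1)
        weight: float      # weight of the interaction type (e.g. comment > like)
        created_at: float  # Unix timestamp of the interaction

    def time_decay(created_at: float, now: float, half_life_hours: float = 24.0) -> float:
        """Assumed decay: an edge's contribution halves every half_life_hours."""
        age_hours = max(0.0, (now - created_at) / 3600.0)
        return 0.5 ** (age_hours / half_life_hours)

    def edgerank(edges: list[Edge], now: float) -> float:
        """Sum of affinity * weight * decay over all edges on one object."""
        return sum(e.affinity * e.weight * time_decay(e.created_at, now) for e in edges)

    # A fresh comment from a close friend outweighs an old like from an acquaintance.
    now = time()
    post_edges = [
        Edge(affinity=0.9, weight=4.0, created_at=now - 1 * 3600),   # recent comment
        Edge(affinity=0.2, weight=1.0, created_at=now - 72 * 3600),  # old like
    ]
    print(f"EdgeRank ~ {edgerank(post_edges, now):.2f}")

    With these assumed numbers, the recent comment from the high-affinity friend dominates the score, which is the behaviour the annotation above attributes to News Feed ranking.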
  • ...3 more annotations...
  • an Object is more likely to show up in your News Feed if people you know have been interacting with it recently. That really isn’t particularly surprising. Neither is the resulting message to developers: if you want your posts to show up in News Feed, make sure people will actually want to interact with them.
  • Steinberg hinted that a simpler version of News Feed may be on the way, as the current two-tabbed system is a bit complicated. That said, many people still use both tabs, with over 50% of users clicking over to the ‘most recent’ tab on a regular basis.
  • If you want to watch the video for yourself, click here, navigate to the Techniques sessions, and click on ‘Focus on Feed’. The talk about Facebook’s algorithms begins around 22 minutes in.
Weiye Loh

ST Forum Editor was right after all | The Online Citizen - 0 views

  • I refer to the article “Straits Times! Why you edit until like that?” (theonlinecitizen, Mar 24). In my view, the Straits Times Forum Editor was not wrong to edit the letter.
  • From a statistical perspective, the forum letter writer, Mr Samuel Wee, was quoting the wrong statistics.
  • For example, the Education Minister said “How children from the bottom one-third by socio-economic background fare: One in two scores in the top two-thirds at PSLE” - but Mr Samuel Wee wrote “His statement is backed up with the statistic that 50% of children from the bottom third of the socio-economic ladder score in the bottom third of the Primary School Leaving Examination”. Another example is Mr Wee’s: “it is indeed heartwarming to learn that only 90% of children from one-to-three-room flats do not make it to university”, when the Straits Times article “New chapter in the Singapore Story” (http://pdfcast.org/pdf/new-chapter-in-singapore-story) of 8 March, on the Minister’s speech in Parliament, clearly showed in the graph “Progression to Unis and Polys” (Source: MOE (Ministry of Education)), that the “percentage of P1 pupils who lived in 1- to 3-room HDB flats and subsequently progressed to tertiary education” was about 50 per cent, and not the ‘90 per cent who do not make it’ cited by Mr Samuel Wee.
  • ...7 more annotations...
  • The whole point of Samuel Wee’s letter is to present Dr Ng’s statistics from a different angle, so as to show that things are not as rosy as Dr Ng made them seem. As posters above have pointed out, if 50% of poor students score in the top 2/3s, that means the other 50% score in the bottom 1/3. In other words, poor students still score disproportionately lower grades. As for the statistic that 90% of poor students do not make it to university, this was shown in a graph provided in the ST. You can see it here: http://www.straitstimes.com/STI/STIMEDIA/pdf/20110308/a10.pdf
  • Finally, Dr Ng did say: “[Social mobility] cannot be about neglecting those with abilities, just because they come from middle-income homes or are rich. It cannot mean holding back those who are able so that others can catch up.” Samuel Wee paraphrased this as: “…good, able students from the middle-and-high income groups are not circumscribed or restricted in any way in the name of helping financially disadvantaged students.” I think it was an accurate paraphrase, because that was essentially what Dr Ng was saying. Samuel Wee’s paraphrase merely makes the callousness of Dr Ng’s remark stand out more clearly.
  • As to Mr Wee’s: “Therefore, it was greatly reassuring to read about Dr Ng’s great faith in our “unique, meritocratic Singapore system”, which ensures that good, able students from the middle-and-high income groups are not circumscribed or restricted in any way in the name of helping financially disadvantaged students”, there was nothing in the Minister’s speech, Straits Times and all other media reports, that quoted the Minister, in this context. In my opinion, the closest that I could find in all the reports, to link in context to the Minister’s faith in our meritocratic system, was what the Straits Times Forum Editor edited – “Therefore, it was reassuring to read about Dr Ng’s own experience of the ‘unique, meritocratic Singapore system’: he grew up in a three-room flat with five other siblings, and his medical studies at the National University of Singapore were heavily subsidised; later, he trained as a cancer surgeon in the United States using a government scholarship”.
  • To the credit of the Straits Times Forum Editor, in spite of the hundreds of letters that he receives in a day, he took the time and effort to: check the accuracy of the letter writer’s ‘quoted’ statistics; find the correct ‘quoted’ statistics to replace the writer’s wrongly ‘quoted’ statistics; and check for misquotes out of context (in this case, what the Education Minister actually said), and then find the correct quote to amend the writer’s statement.
  • Kind sir, the statistics state that 1 in 2 are in the top 66.6% (Which, incidentally, includes the top fifth of the bottom 50%!) Does it not stand to reason, then, that if 50% are in the top 66.6%, the remaining 50% are in the bottom 33.3%, as I stated in my letter?
  • Also, perhaps you were not aware of the existence of this resource, but here is a graph from the Straits Times illustrating the fact that only 10% of children from one-to-three room flats make it to university–which is to say, 90% of them don’t. http://www.straitstimes.com/STI/STIMEDIA/pdf/20110308/a10.pdf
  • The writer made it a point to say that only 90% did not make it to university. It has been edited to say 50% made it to university AND POLYTECHNIC. Both are right, but the edited version is the one that makes the government look good.