
New Media Ethics 2009 course / Group items tagged: Drugs


Weiye Loh

Roger Pielke Jr.'s Blog: Innovation in Drug Development: An Inverse Moore's Law?

  • Today's FT has this interesting graph and an accompanying story, showing a sort of inverse Moore's Law of drug development.  Over almost 60 years the number of new drugs developed per unit of investment has declined in a fairly constant manner, and some drug companies are now slashing their R&D budgets.
  • Why this trend has occurred is debated. The FT points to a combination of low-hanging fruit that has already been plucked and the increasing costs of drug development. To some observers, that reflects the end of the mid to late 20th century golden era for drug discovery, when first-generation medicines such as antibiotics and beta-blockers to treat high blood pressure transformed healthcare. At the same time, regulatory demands to prove safety and efficacy have grown firmer. The result is larger and more costly clinical trials, and high failure rates for experimental drugs.
  • Others point to flawed innovation policies in industry and governments: “The markets treat drug companies as though research and development spending destroys value,” says Jack Scannell, an analyst at Bernstein Research. “People have stopped distinguishing the good from the bad. All those which performed well returned cash to shareholders. Unless the industry can articulate what the problem is, I don’t expect that to change.”
  • Mr [Andrew] Baum [of Morgan Stanley] argues that the solution for drug companies is to share the risks of research with others. That means reducing in-house investment in research, and instead partnering and licensing experimental medicines from smaller companies after some of the early failures have been eliminated.
  • Chas Bountra of Oxford university calls for a more radical partnership combining industry and academic research. “What we are trying to do is just too difficult,” he says. “No one organisation can do it, so we have to pool resources and expertise.” He suggests removing intellectual property rights until a drug is in mid-stage testing in humans, which would make academics more willing to co-operate because they could publish their results freely. The sharing of data would enable companies to avoid duplicating work.
  • The challenge is for academia and biotech companies to fill the research gap. Mr Ratcliffe argues that after a lull in 2009 and 2010, private capital is returning to the sector – as demonstrated by a particular buzz at JPMorgan’s new year biotech conference in California.
  • Patrick Vallance, senior vice-president for discovery at GSK, is cautious about deferring patents until so late, arguing that drug companies need to be able to protect their intellectual property in order to fund expensive late-stage development. But he too is experimenting with ways to co-operate more closely with academics over longer periods. He is also championing the “externalisation” of the company’s pipeline, with biotech and university partners accounting for half the total. GSK has earmarked £50m to support fledgling British companies, many “wrapped around” the group’s sites. One such example is Convergence, a spin-out from a GSK lab researching pain relief.
  • Big pharmaceutical companies are scrambling to find ways to overcome the loss of tens of billions of dollars in revenue as patents on top-selling drugs run out. Many sound similar notes about encouraging entrepreneurialism in their ranks, making smart deals and capitalizing on emerging-market growth. But their actual plans are often quite different—and each carries significant risks. Novartis AG, for instance, is so convinced that diversification is the best course that the company has a considerable business selling low-priced generics. Meantime, Bristol-Myers Squibb Co. has decided to concentrate on innovative medicines, shedding so many nonpharmaceutical units that it has become midsize. GlaxoSmithKline PLC is still investing in research, but like Pfizer it has narrowed the range of disease areas in which it's seeking new treatments. Underlying the divergence is a deep-seated philosophical dispute over the merits of the heavy investment that companies must make to discover new drugs. By most estimates, bringing a new molecule to market costs drug makers more than $1 billion. Industry officials have been engaged in a vigorous debate over whether the investment is worth it, or whether they should leave it to others whose work they can acquire or license after a demonstration of strong potential.
  • To what extent can approaches to innovation influence the trend line in the graph above? I don't think that anyone really knows the answer. The different approaches being taken by Merck and Pfizer, for instance, represent a real-world policy experiment: The contrast between Merck and Pfizer reflects the very different personal approaches of their CEOs. An accountant by training, Mr. Read has held various business positions during a three-decade career at Pfizer. The 57-year-old cited torcetrapib, a cholesterol medicine that the company spent more than $800 million developing but then pulled due to safety concerns, as an example of the kind of wasteful spending Pfizer would avoid. "We're going to have metrics," Mr. Read said. He wants Pfizer to stop "always investing on hope rather than strong signals and the quality of the science, the quality of the medicine." Mr. Frazier, 56, a Harvard-educated lawyer who joined Merck in 1994 from private practice, said the company was sticking by its own troubled heart drug, vorapaxar. Mr. Frazier said he wanted to see all of the data from the trials before rushing to judgment. "We believe in the innovation approach," he said.
Weiye Loh

How drug companies' PR tactics skew the presentation of medical research | Science | gu...

  • Drug companies exert this hold on knowledge through publication planning agencies, an obscure subsection of the pharmaceutical industry that has ballooned in size in recent years, and is now a key lever in the commercial machinery that gets drugs sold. The planning companies are paid to implement high-impact publication strategies for specific drugs. They target the most influential academics to act as authors, draft the articles, and ensure that these include clearly-defined branding messages and appear in the most prestigious journals.
  • In selling their services to drug companies, the agencies explain their work in frank language. Current Medical Directions, a medical communications company based in New York, promises to create "scientific content in support of our clients' messages". A rival firm from Macclesfield, Complete HealthVizion, describes what it does as "a fusion of evidence and inspiration."
  • There are now at least 250 different companies engaged in the business of planning clinical publications for the pharmaceutical industry, according to the International Society for Medical Publication Professionals, which said it has over 1000 individual members. Many firms are based in the UK and the east coast of the United States in traditional "pharma" centres like Pennsylvania and New Jersey. Precise figures are hard to pin down because publication planning is widely dispersed and is only beginning to be recognized as something like a discrete profession.
  • The standard approach to article preparation is for planners to work hand-in-glove with drug companies to create a first draft. "Key messages" laid out by the drug companies are accommodated to the extent that they can be supported by available data. Planners combine scientific information about a drug with two kinds of message that help create a "drug narrative". "Environmental" messages are intended to forge the sense of a gap in available medicine within a specific clinical field, while "product" messages show how the new drug meets this need.
  • In a flow-chart drawn up by Eric Crown, publications manager at Merck (the company that sold the controversial painkiller Vioxx), the determination of authorship appears as the fourth stage of the article preparation procedure. That is, only after company employees have presented clinical study data, discussed the findings, finalised "tactical plans" and identified where the article should be published. Perhaps surprisingly to the casual observer, under guidelines tightened up in recent years by the International Committee of Medical Journal Editors (ICMJE), Crown's approach, typical among pharmaceutical companies, does not constitute ghostwriting.
  • What publication planners understand by the term is precise but it is also quite distinct from the popular interpretation.
  • "We may have written a paper, but the people we work with have to have some input and approve it."
  • "I feel that we're doing something good for mankind in the long-run," said Kimberly Goldin, head of the International Society for Medical Publication Professionals (ISMPP). "We want to influence healthcare in a very positive, scientifically sound way.""The profession grew out of a marketing umbrella, but has moved under the science umbrella," she said.But without the window of court documents to show how publication planning is being carried out today, the public simply cannot know if reforms the industry says it has made are genuine.
  • Dr Leemon McHenry, a medical ethicist at California State University, says nothing has changed. "They've just found more clever ways of concealing their activities. There's a whole army of hidden scribes. It's an epistemological morass where you can't trust anything." Alastair Matheson is a British medical writer who has worked extensively for medical communication agencies. He dismisses the planners' claims to having reformed as "bullshit". "The new guidelines work very nicely to permit the current system to continue as it has been", he said. "The whole thing is a big lie. They are promoting a product."
Weiye Loh

Expectations can cancel out the benefit of pain drugs

  • People who don't believe their pain medicine will work can actually reduce or even cancel out the effectiveness of the drug, and images of their brains show how they are doing it, scientists said
  • Researchers from Britain and Germany used brain scans to map how a person's feelings and past experiences can influence the effectiveness of medicines, and found that a powerful painkilling drug with a true biological effect can appear not to be working if a patient has been primed to expect it to fail.
  • By contrast, positive expectations about the treatment doubled the natural physiological or biochemical effect of an opioid drug among 22 healthy volunteers in the study.
  • "The brain imaging is telling us that patients really are switching on and off parts of their brains through the mechanisms of expectation -- positive and negative," said Irene Tracy of Britain's Oxford University, who led the research. "(The effect of expectations) is powerful enough to give real added benefits of the drug, and unfortunately it is also very capable of overriding the true analgesic effect." The placebo effect is the real benefit seen when patients are given dummy treatments but believe they will do them good. The nocebo effect is the opposite, when patients get real negative effects when they have doubts about a treatment.
  • For their study, the scientists used the drug remifentanil, a potent ultra short-acting synthetic opioid painkiller which is marketed by drugmakers GlaxoSmithKline and Abbott as Ultiva. The study was published in the Science Translational Medicine journal on Wednesday. Volunteers were put in an MRI scanner and had heat applied to one leg. They were asked to rate pain on a 1 to 100 scale. Unknown to the volunteers, the researchers started giving the drug via infusion to see what effects there would be when the volunteers had no knowledge or expectation of treatment. The average initial pain rating of 66 went down to 55. The volunteers were then told they would now start to get the drug, although no change was actually made and they just continued receiving the opioid at the same dose. The average pain ratings dropped further to 39. The volunteers were then told the drug had been stopped and warned that there may be an increase in pain. In reality, the drug was still being given at the same dose, but their pain intensity increased to 64 -- meaning the pain was almost as bad as it had been at the beginning, before they had had any drug.
  • Tracey said there may be lessons for the design of clinical trials, which often compare an experimental drug against a dummy pill to see if there is any effect beyond the placebo effect. "We should control for the effect of people's expectations on the results of any clinical trial," she said. "At the very least we should make sure we minimize any negative expectations to make sure we're not masking true efficacy in a trial drug."
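
A note on the arithmetic behind the "doubling" claim in the highlights above: using the four average pain ratings reported in the excerpt (66 at baseline, 55 with a hidden infusion, 39 with positive expectation, 64 with negative expectation), the short sketch below simply recomputes the differences. It is illustrative only and assumes nothing beyond those reported averages.

```python
# Average pain ratings (0-100 scale) reported in the remifentanil study excerpted above.
baseline = 66   # heat pain before any drug
hidden = 55     # drug infused without the volunteers' knowledge
expected = 39   # same dose, after volunteers were told the drug had started
nocebo = 64     # same dose, after volunteers were told the drug had stopped

drug_alone = baseline - hidden          # 11 points: the drug's effect with no expectation
drug_plus_belief = baseline - expected  # 27 points: more than double the drug-alone effect
belief_bonus = hidden - expected        # 16 points contributed by positive expectation alone
belief_penalty = nocebo - expected      # 25 points: negative expectation erases most of the benefit

print(drug_alone, drug_plus_belief, belief_bonus, belief_penalty)  # 11 27 16 25
```
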
yongernn teo

Eli Lilly Accused of Unethical Marketing of Zyprexa

  •  Summary of the Unethical Marketing of Zyprexa by Eli Lilly:
     Eli Lilly is a global pharmaceutical company. In 2006, it was charged with the unethical marketing of Zyprexa, its top-selling drug, which is approved only for the treatment of schizophrenia and bipolar disorder.
     Firstly, in a report Eli Lilly downplayed the risks of obesity and increased blood sugar associated with Zyprexa. Although Eli Lilly had been aware of these risks for at least a decade, it went ahead without emphasizing their significance, for fear of jeopardizing sales.
     Secondly, Eli Lilly ran a promotional campaign called Viva Zyprexa, encouraging off-label use of the drug in patients who had neither schizophrenia nor bipolar disorder. The campaign was targeted at elderly people with dementia, even though the drug was not approved to treat dementia and could in fact increase the risk of death in older patients with dementia-related psychosis.
     All this was done to boost sales of Zyprexa and bring in more revenue for Eli Lilly; Zyprexa alone could bring in $4 billion worth of sales annually.
     Ethical Question: To what extent should pharmaceutical companies go to inform potential consumers of the side effects of their drugs?
     Ethical Problem: The information disseminated through marketing campaigns has to be true and transparent. There should not be any hidden agenda behind the amount of information being released. In this case, to prevent sales from plummeting, Eli Lilly downplayed the side effects of Zyprexa and encouraged off-label use. It is very important that pharmaceutical companies practice good ethics, as this concerns the health of their consumers. While one drug may act as a remedy for one health problem, its side effects could lead to other health problems. All this has to be conveyed to the consumer, who exchanges his money for the product. Not being transparent and honest with the information of the pr
Weiye Loh

Meet the Ethical Placebo: A Story that Heals | NeuroTribes

  • In modern medicine, placebos are associated with another form of deception — a kind that has long been thought essential for conducting randomized clinical trials of new drugs, the statistical rock upon which the global pharmaceutical industry was built. One group of volunteers in an RCT gets the novel medication; another group (the “control” group) gets pills or capsules that look identical to the allegedly active drug, but contain only an inert substance like milk sugar. These faux drugs are called placebos.
  • Inevitably, the health of some people in both groups improves, while the health of others grows worse. Symptoms of illness fluctuate for all sorts of reasons, including regression to the mean.
  • Since the goal of an RCT, from Big Pharma’s perspective, is to demonstrate the effectiveness of a new drug, the return to robust health of a volunteer in the control group is considered a statistical distraction. If too many people in the trial get better after downing sugar pills, the real drug will look worse by comparison — sometimes fatally so for the purpose of earning approval from the Food and Drug Administration.
  • For a complex and somewhat mysterious set of reasons, it is becoming increasingly difficult for experimental drugs to prove their superiority to sugar pills in RCTs
  • Only in recent years, however, has it become obvious that the abatement of symptoms in control-group volunteers — the so-called placebo effect — is worthy of study outside the context of drug trials, and is in fact profoundly good news to anyone but investors in Pfizer, Roche, and GlaxoSmithKline.
  • The emerging field of placebo research has revealed that the body’s repertoire of resilience contains a powerful self-healing network that can help reduce pain and inflammation, lower the production of stress chemicals like cortisol, and even tame high blood pressure and the tremors of Parkinson’s disease.
  • More and more studies each year — by researchers like Fabrizio Benedetti at the University of Turin, author of a superb new book called The Patient’s Brain, and neuroscientist Tor Wager at the University of Colorado — demonstrate that the placebo effect might be useful in treating a wide range of ills. Then why aren’t doctors supposed to use it?
  • The medical establishment’s ethical problem with placebo treatment boils down to the notion that for fake drugs to be effective, doctors must lie to their patients. It has been widely assumed that if a patient discovers that he or she is taking a placebo, the mind/body password will no longer unlock the network, and the magic pills will cease to do their job.
  • For “Placebos Without Deception,” the researchers tracked the health of 80 volunteers with irritable bowel syndrome for three weeks as half of them took placebos and the other half didn’t.
  • In a previous study published in the British Medical Journal in 2008, Kaptchuk and Kirsch demonstrated that placebo treatment can be highly effective for alleviating the symptoms of IBS. This time, however, instead of the trial being “blinded,” it was “open.” That is, the volunteers in the placebo group knew that they were getting only inert pills — which they were instructed to take religiously, twice a day. They were also informed that, just as Ivan Pavlov trained his dogs to drool at the sound of a bell, the body could be trained to activate its own built-in healing network by the act of swallowing a pill.
  • In other words, in addition to the bogus medication, the volunteers were given a true story — the story of the placebo effect. They also received the care and attention of clinicians, which have been found in many other studies to be crucial for eliciting placebo effects. The combination of the story and a supportive clinical environment were enough to prevail over the knowledge that there was really nothing in the pills. People in the placebo arm of the trial got better — clinically, measurably, significantly better — on standard scales of symptom severity and overall quality of life. In fact, the volunteers in the placebo group experienced improvement comparable to patients taking a drug called alosetron, the standard of care for IBS. Meet the ethical placebo: a powerfully effective faux medication that meets all the standards of informed consent.
  • The study is hardly the last word on the subject, but more like one of the first. Its modest sample size and brief duration leave plenty of room for followup research. (What if “ethical” placebos wear off more quickly than deceptive ones? Does the fact that most of the volunteers in this study were women have any bearing on the outcome? Were any of the volunteers skeptical that the placebo effect is real, and did that affect their response to treatment?) Before some eager editor out there composes a tweet-baiting headline suggesting that placebos are about to drive Big Pharma out of business, he or she should appreciate the fact that the advent of AMA-approved placebo treatments would open numerous cans of fascinatingly tangled worms. For example, since the precise nature of placebo effects is shaped largely by patients’ expectations, would the advertised potency and side effects of theoretical products like Placebex and Therastim be subject to change by Internet rumors, requiring perpetual updating?
  • It’s common to use the word “placebo” as a synonym for “scam.” Economists talk about placebo solutions to our economic catastrophe (tax cuts for the rich, anyone?). Online skeptics mock the billion-dollar herbal-medicine industry by calling it Big Placebo. The fact that our brains and bodies respond vigorously to placebos given in warm and supportive clinical environments, however, turns out to be very real.
  • We’re also discovering that the power of narrative is embedded deeply in our physiology.
  • in the real world of doctoring, many physicians prescribe medications at dosages too low to have an effect on their own, hoping to tap into the body’s own healing resources — though this is mostly acknowledged only in whispers, as a kind of trade secret.
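
One way to make concrete the point in the highlights above about placebo responders being a "statistical distraction": a trial measures the drug arm against the placebo arm, so a strong placebo response shrinks the apparent drug benefit even if the drug itself is unchanged. The response rates below are invented purely for illustration and are not from the IBS study.

```python
# Hypothetical response rates; the figures are made up to illustrate the comparison logic.
drug_arm_response = 0.60

for placebo_arm_response in (0.30, 0.45, 0.55):
    apparent_benefit = drug_arm_response - placebo_arm_response
    print(f"placebo response {placebo_arm_response:.0%} -> apparent drug benefit {apparent_benefit:+.0%}")
```
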
Weiye Loh

Lies, Damned Lies, and Medical Science - Magazine - The Atlantic

  • In 2001, rumors were circulating in Greek hospitals that surgery residents, eager to rack up scalpel time, were falsely diagnosing hapless Albanian immigrants with appendicitis. At the University of Ioannina medical school’s teaching hospital, a newly minted doctor named Athina Tatsioni was discussing the rumors with colleagues when a professor who had overheard asked her if she’d like to try to prove whether they were true—he seemed to be almost daring her. She accepted the challenge and, with the professor’s and other colleagues’ help, eventually produced a formal study showing that, for whatever reason, the appendices removed from patients with Albanian names in six Greek hospitals were more than three times as likely to be perfectly healthy as those removed from patients with Greek names. “It was hard to find a journal willing to publish it, but we did,” recalls Tatsioni. “I also discovered that I really liked research.” Good thing, because the study had actually been a sort of audition. The professor, it turned out, had been putting together a team of exceptionally brash and curious young clinicians and Ph.D.s to join him in tackling an unusual and controversial agenda.
  • were drug companies manipulating published research to make their drugs look good? Salanti ticked off data that seemed to indicate they were, but the other team members almost immediately started interrupting. One noted that Salanti’s study didn’t address the fact that drug-company research wasn’t measuring critically important “hard” outcomes for patients, such as survival versus death, and instead tended to measure “softer” outcomes, such as self-reported symptoms (“my chest doesn’t hurt as much today”). Another pointed out that Salanti’s study ignored the fact that when drug-company data seemed to show patients’ health improving, the data often failed to show that the drug was responsible, or that the improvement was more than marginal.
  • but a single study can’t prove everything, she said. Just as I was getting the sense that the data in drug studies were endlessly malleable, Ioannidis, who had mostly been listening, delivered what felt like a coup de grâce: wasn’t it possible, he asked, that drug companies were carefully selecting the topics of their studies—for example, comparing their new drugs against those already known to be inferior to others on the market—so that they were ahead of the game even before the data juggling began? “Maybe sometimes it’s the questions that are biased, not the answers,” he said, flashing a friendly smile. Everyone nodded. Though the results of drug studies often make newspaper headlines, you have to wonder whether they prove anything at all. Indeed, given the breadth of the potential problems raised at the meeting, can any medical-research studies be trusted?
Weiye Loh

Odds Are, It's Wrong - Science News

  • science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings.
  • a mutant form of math has deflected science’s heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
  • science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.
  • Experts in the math of probability and statistics are well aware of these problems and have for decades expressed concern about them in major journals. Over the years, hundreds of published papers have warned that science’s love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn’t believe what you read in the scientific literature.
  • “There are more false claims made in the medical literature than anybody appreciates,” he says. “There’s no question about that.” Nobody contends that all of science is wrong, or that it hasn’t compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. “A lot of scientists don’t understand statistics,” says Goodman. “And they don’t understand statistics because the statistics don’t make sense.”
  • In 2007, for instance, researchers combing the medical literature found numerous studies linking a total of 85 genetic variants in 70 different genes to acute coronary syndrome, a cluster of heart problems. When the researchers compared genetic tests of 811 patients that had the syndrome with a group of 650 (matched for sex and age) that didn’t, only one of the suspect gene variants turned up substantially more often in those with the syndrome — a number to be expected by chance. “Our null results provide no support for the hypothesis that any of the 85 genetic variants tested is a susceptibility factor” for the syndrome, the researchers reported in the Journal of the American Medical Association. How could so many studies be wrong? Because their conclusions relied on “statistical significance,” a concept at the heart of the mathematical analysis of modern scientific experiments.
  • Statistical significance is a phrase that every science graduate student learns, but few comprehend. While its origins stretch back at least to the 19th century, the modern notion was pioneered by the mathematician Ronald A. Fisher in the 1920s. His original interest was agriculture. He sought a test of whether variation in crop yields was due to some specific intervention (say, fertilizer) or merely reflected random factors beyond experimental control. Fisher first assumed that fertilizer caused no difference — the “no effect” or “null” hypothesis. He then calculated a number called the P value, the probability that an observed yield in a fertilized field would occur if fertilizer had no real effect. If P is less than .05 — meaning the chance of a fluke is less than 5 percent — the result should be declared “statistically significant,” Fisher arbitrarily declared, and the no effect hypothesis should be rejected, supposedly confirming that fertilizer works. Fisher’s P value eventually became the ultimate arbiter of credibility for science results of all sorts.
  • But in fact, there’s no logical basis for using a P value from a single study to draw any conclusion. If the chance of a fluke is less than 5 percent, two possible conclusions remain: There is a real effect, or the result is an improbable fluke. Fisher’s method offers no way to know which is which. On the other hand, if a study finds no statistically significant effect, that doesn’t prove anything, either. Perhaps the effect doesn’t exist, or maybe the statistical test wasn’t powerful enough to detect a small but real effect.
  • Soon after Fisher established his system of statistical significance, it was attacked by other mathematicians, notably Egon Pearson and Jerzy Neyman. Rather than testing a null hypothesis, they argued, it made more sense to test competing hypotheses against one another. That approach also produces a P value, which is used to gauge the likelihood of a “false positive” — concluding an effect is real when it actually isn’t. What eventually emerged was a hybrid mix of the mutually inconsistent Fisher and Neyman-Pearson approaches, which has rendered interpretations of standard statistics muddled at best and simply erroneous at worst. As a result, most scientists are confused about the meaning of a P value or how to interpret it. “It’s almost never, ever, ever stated correctly, what it means,” says Goodman.
  • experimental data yielding a P value of .05 means that there is only a 5 percent chance of obtaining the observed (or more extreme) result if no real effect exists (that is, if the no-difference hypothesis is correct). But many explanations mangle the subtleties in that definition. A recent popular book on issues involving science, for example, states a commonly held misperception about the meaning of statistical significance at the .05 level: “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.”
  • That interpretation commits an egregious logical error (technical term: “transposed conditional”): confusing the odds of getting a result (if a hypothesis is true) with the odds favoring the hypothesis if you observe that result. A well-fed dog may seldom bark, but observing the rare bark does not imply that the dog is hungry. A dog may bark 5 percent of the time even if it is well-fed all of the time. (See Box 2)
    • Weiye Loh
      Does the problem, then, lie not in statistics but in the interpretation of statistics? Is the fallacy of appeal to probability at work in such interpretation?
  • Another common error equates statistical significance to “significance” in the ordinary use of the word. Because of the way statistical formulas work, a study with a very large sample can detect “statistical significance” for a small effect that is meaningless in practical terms. A new drug may be statistically better than an old drug, but for every thousand people you treat you might get just one or two additional cures — not clinically significant. Similarly, when studies claim that a chemical causes a “significantly increased risk of cancer,” they often mean that it is just statistically significant, possibly posing only a tiny absolute increase in risk.
  • Statisticians perpetually caution against mistaking statistical significance for practical importance, but scientific papers commit that error often. Ziliak studied journals from various fields — psychology, medicine and economics among others — and reported frequent disregard for the distinction.
  • “I found that eight or nine of every 10 articles published in the leading journals make the fatal substitution” of equating statistical significance to importance, he said in an interview. Ziliak’s data are documented in the 2008 book The Cult of Statistical Significance, coauthored with Deirdre McCloskey of the University of Illinois at Chicago.
  • Multiplicity of mistakes: Even when “significance” is properly defined and P values are carefully calculated, statistical inference is plagued by many other problems. Chief among them is the “multiplicity” issue — the testing of many hypotheses simultaneously. When several drugs are tested at once, or a single drug is tested on several groups, chances of getting a statistically significant but false result rise rapidly.
  • Recognizing these problems, some researchers now calculate a “false discovery rate” to warn of flukes disguised as real effects. And genetics researchers have begun using “genome-wide association studies” that attempt to ameliorate the multiplicity issue (SN: 6/21/08, p. 20).
  • Many researchers now also commonly report results with confidence intervals, similar to the margins of error reported in opinion polls. Such intervals, usually given as a range that should include the actual value with 95 percent confidence, do convey a better sense of how precise a finding is. But the 95 percent confidence calculation is based on the same math as the .05 P value and so still shares some of its problems.
  • Statistical problems also afflict the “gold standard” for medical research, the randomized, controlled clinical trials that test drugs for their ability to cure or their power to harm. Such trials assign patients at random to receive either the substance being tested or a placebo, typically a sugar pill; random selection supposedly guarantees that patients’ personal characteristics won’t bias the choice of who gets the actual treatment. But in practice, selection biases may still occur, Vance Berger and Sherri Weinstein noted in 2004 in Controlled Clinical Trials. “Some of the benefits ascribed to randomization, for example that it eliminates all selection bias, can better be described as fantasy than reality,” they wrote.
  • Randomization also should ensure that unknown differences among individuals are mixed in roughly the same proportions in the groups being tested. But statistics do not guarantee an equal distribution any more than they prohibit 10 heads in a row when flipping a penny. With thousands of clinical trials in progress, some will not be well randomized. And DNA differs at more than a million spots in the human genetic catalog, so even in a single trial differences may not be evenly mixed. In a sufficiently large trial, unrandomized factors may balance out, if some have positive effects and some are negative. (See Box 3) Still, trial results are reported as averages that may obscure individual differences, masking beneficial or harmful effects and possibly leading to approval of drugs that are deadly for some and denial of effective treatment to others.
  • Another concern is the common strategy of combining results from many trials into a single “meta-analysis,” a study of studies. In a single trial with relatively few participants, statistical tests may not detect small but real and possibly important effects. In principle, combining smaller studies to create a larger sample would allow the tests to detect such small effects. But statistical techniques for doing so are valid only if certain criteria are met. For one thing, all the studies conducted on the drug must be included — published and unpublished. And all the studies should have been performed in a similar way, using the same protocols, definitions, types of patients and doses. When combining studies with differences, it is necessary first to show that those differences would not affect the analysis, Goodman notes, but that seldom happens. “That’s not a formal part of most meta-analyses,” he says.
  • Meta-analyses have produced many controversial conclusions. Common claims that antidepressants work no better than placebos, for example, are based on meta-analyses that do not conform to the criteria that would confer validity. Similar problems afflicted a 2007 meta-analysis, published in the New England Journal of Medicine, that attributed increased heart attack risk to the diabetes drug Avandia. Raw data from the combined trials showed that only 55 people in 10,000 had heart attacks when using Avandia, compared with 59 people per 10,000 in comparison groups. But after a series of statistical manipulations, Avandia appeared to confer an increased risk.
  • combining small studies in a meta-analysis is not a good substitute for a single trial sufficiently large to test a given question. “Meta-analyses can reduce the role of chance in the interpretation but may introduce bias and confounding,” Hennekens and DeMets write in the Dec. 2 Journal of the American Medical Association. “Such results should be considered more as hypothesis formulating than as hypothesis testing.”
  • Some studies show dramatic effects that don’t require sophisticated statistics to interpret. If the P value is 0.0001 — a hundredth of a percent chance of a fluke — that is strong evidence, Goodman points out. Besides, most well-accepted science is based not on any single study, but on studies that have been confirmed by repetition. Any one result may be likely to be wrong, but confidence rises quickly if that result is independently replicated. “Replication is vital,” says statistician Juliet Shaffer, a lecturer emeritus at the University of California, Berkeley. And in medicine, she says, the need for replication is widely recognized. “But in the social sciences and behavioral sciences, replication is not common,” she noted in San Diego in February at the annual meeting of the American Association for the Advancement of Science. “This is a sad situation.”
  • Most critics of standard statistics advocate the Bayesian approach to statistical reasoning, a methodology that derives from a theorem credited to Bayes, an 18th century English clergyman. His approach uses similar math, but requires the added twist of a “prior probability” — in essence, an informed guess about the expected probability of something in advance of the study. Often this prior probability is more than a mere guess — it could be based, for instance, on previous studies.
  • it basically just reflects the need to include previous knowledge when drawing conclusions from new observations. To infer the odds that a barking dog is hungry, for instance, it is not enough to know how often the dog barks when well-fed. You also need to know how often it eats — in order to calculate the prior probability of being hungry. Bayesian math combines a prior probability with observed data to produce an estimate of the likelihood of the hunger hypothesis. “A scientific hypothesis cannot be properly assessed solely by reference to the observational data,” but only by viewing the data in light of prior belief in the hypothesis, wrote George Diamond and Sanjay Kaul of UCLA’s School of Medicine in 2004 in the Journal of the American College of Cardiology. “Bayes’ theorem is ... a logically consistent, mathematically valid, and intuitive way to draw inferences about the hypothesis.” (See Box 4)
  • In many real-life contexts, Bayesian methods do produce the best answers to important questions. In medical diagnoses, for instance, the likelihood that a test for a disease is correct depends on the prevalence of the disease in the population, a factor that Bayesian math would take into account.
  • But Bayesian methods introduce a confusion into the actual meaning of the mathematical concept of “probability” in the real world. Standard or “frequentist” statistics treat probabilities as objective realities; Bayesians treat probabilities as “degrees of belief” based in part on a personal assessment or subjective decision about what to include in the calculation. That’s a tough placebo to swallow for scientists wedded to the “objective” ideal of standard statistics. “Subjective prior beliefs are anathema to the frequentist, who relies instead on a series of ad hoc algorithms that maintain the facade of scientific objectivity,” Diamond and Kaul wrote. Conflict between frequentists and Bayesians has been ongoing for two centuries. So science’s marriage to mathematics seems to entail some irreconcilable differences. Whether the future holds a fruitful reconciliation or an ugly separation may depend on forging a shared understanding of probability. “What does probability mean in real life?” the statistician David Salsburg asked in his 2001 book The Lady Tasting Tea. “This problem is still unsolved, and ... if it remains unsolved, the whole of the statistical approach to science may come crashing down from the weight of its own inconsistencies.”
  •  Odds Are, It's Wrong: Science fails to face the shortcomings of statistics
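
To make the "well-fed dog" point in the highlights above concrete, here is a minimal simulation (illustrative only, not from the article): it runs many two-group experiments in which the null hypothesis is true and confirms that roughly 5 percent of them come out "significant" at P < .05. That is all a P value controls; by itself it says nothing about the probability that a given significant result reflects a real effect.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_experiments, n_per_group = 5_000, 30

# Every experiment below has NO real effect: both groups come from the same distribution.
false_alarms = sum(
    ttest_ind(rng.normal(size=n_per_group), rng.normal(size=n_per_group)).pvalue < 0.05
    for _ in range(n_experiments)
)

# The well-fed dog barks about 5% of the time: P(significant | no real effect) is ~0.05.
# Fisher's test gives you that number; it does not give P(real effect | significant),
# which is what the "transposed conditional" error pretends it is.
print(false_alarms / n_experiments)
```
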
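The 85-gene-variant example above is the multiplicity problem in miniature: test enough truly null hypotheses at the .05 level and a few "significant" hits are expected by chance alone. The arithmetic below assumes, for illustration, that none of the 85 variants is really associated with the syndrome.

```python
from math import comb

alpha, n_tests = 0.05, 85  # 85 variants, each tested at the .05 level; assume all are truly null

expected_chance_hits = alpha * n_tests           # about 4 "significant" variants expected by luck
p_at_least_one_hit = 1 - (1 - alpha) ** n_tests  # ~0.99: at least one chance hit is nearly guaranteed
p_exactly_one_hit = comb(n_tests, 1) * alpha * (1 - alpha) ** (n_tests - 1)  # ~0.06

# A Bonferroni-style correction would instead demand P < alpha / n_tests (about 0.0006) per variant.
print(expected_chance_hits, round(p_at_least_one_hit, 3), round(p_exactly_one_hit, 3))
```
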
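The diagnostic-test remark above is the standard illustration of why a Bayesian prior matters. The sensitivity, false-positive rate, and prevalences below are invented for illustration; the point is only that the same positive test means very different things depending on how common the disease is.

```python
def p_disease_given_positive(prevalence, sensitivity=0.95, false_positive_rate=0.05):
    """Bayes' theorem: P(disease | positive test), using hypothetical test characteristics."""
    p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
    return sensitivity * prevalence / p_positive

for prevalence in (0.001, 0.01, 0.1):
    print(f"prevalence {prevalence:.1%} -> P(disease | positive test) = {p_disease_given_positive(prevalence):.2f}")
```
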
Weiye Loh

Mike Adams Remains True to Form « Alternative Medicine « Health « Skeptic North

  • The 10:23 demonstrations and the CBC Marketplace coverage have elicited fascinating case studies in CAM professionalism. Rather than offering any new information or evidence about homeopathy itself, some homeopaths have spuriously accused skeptical groups of being malicious Big Pharma shills.
  • Mike Adams of the Natural News website has decided to provide his own coverage of the 10:23 campaign.
  • Mike’s thesis is essentially: Silly skeptics, it’s impossible to OD on homeopathy!
  • 1. “Notice that they never consume their own medicines in large doses? Chemotherapy? Statin drugs? Blood thinners? They wouldn’t dare drink those.
  • Of course we wouldn’t. Steven Novella rightly points out that, though Mike thinks he’s being clever here, he’s actually demonstrating a lack of understanding for what the 10:23 campaign is about by using a straw man. Mike later issues a challenge for skeptics to drink their favourite medicines while he drinks homeopathy. Since no one will agree to that for the reasons explained above, he can claim some sort of victory — hence his smugness. But no one is saying that drugs aren’t harmful.
  • The difference between medicine and poison is in the dose. The vitamins and herbs promoted by the CAM industry are just as potentially harmful as any pharmaceutical drug, given enough of it. Would Adams be willing to OD on the vitamins or herbal remedies that he sells?
  • Even Adams’ favorite panacea, vitamin D, is toxic if you take enough of it (just ask Gary Null). Notice how skeptics don’t consume those either, because that is not the point they’re making.
  • The point of these demonstrations is that homeopathy has nothing in it, has no measurable physiological effects, and does not do what is advertised on the package.
  • 2. “Homeopathy, you see, isn’t a drug. It’s not a chemical.” Well, he’s got that right. “You know the drugs are kicking in when you start getting worse. Toxicity and conventional medicine go hand in hand.” [emphasis his]
  • Here I have to wonder if Adams knows any people with diabetes, AIDS, or any other illness that used to mean a death sentence before the significant medical advances of the 20th century that we now take for granted. So far he seems to be a firm believer in the false dichotomy that drugs are bad and natural products are good, regardless of what’s in them or how they’re used (as we know, natural products can have biologically active substances and effectively act as impure drugs – but leave it to Adams not to get bogged down with details). There is nothing to support the assertion that conventional medicine is nothing but toxic symptom-inducers.
  • 3-11. “But homeopathy isn’t a chemical. It’s a resonance. A vibration, or a harmony. It’s the restructuring of water to resonate with the particular energy of a plant or substance. We can get into the physics of it in a subsequent article, but for now it’s easy to recognize that even from a conventional physics point of view, liquid water has tremendous energy, and it’s constantly in motion, not just at the molecular level but also at the level of its subatomic particles and so-called “orbiting electrons” which aren’t even orbiting in the first place. Electrons are vibrations and not physical objects.” [emphasis his]
  • This is Star Trek-like technobabble – lots of sciency words
  • if something — anything — has an effect, then that effect is measurable by definition. Either something works or it doesn’t, regardless of mechanism. In any case, I’d like to see the well-documented series of research that conclusively proves this supposed mechanism. Actually, I’d like to see any credible research at all. I know what the answer will be to that: science can’t detect this yet. Well if you agree with that statement, reader, ask yourself this: then how does Adams know? Where did he get this information? Without evidence, he is guessing, and what is that really worth?
  • 13. “But getting back to water and vibrations, which isn’t magic but rather vibrational physics, you can’t overdose on a harmony. If you have one violin playing a note in your room, and you add ten more violins — or a hundred more — it’s all still the same harmony (with all its complex higher frequencies, too). There’s no toxicity to it.” [emphasis his]
  • Homeopathy has standard dosing regimes (they’re all the same), but there is no “dose” to speak of: the ingredients have usually been diluted out to nothing. But Adams is also saying that homeopathy doesn’t work by dose at all, it works by the properties of “resonance” and “vibration”. Then why any dosing regimen? To maintain the resonance? How is this resonance measured? How long does the “resonance” last? Why does it wear off? Why does he think televisions can inactivate homeopathy? (I think I might know the answer to that last one, as electronic interference is a handy excuse for inefficacy.)
  • “These skeptics just want to kill themselves… and they wouldn’t mind taking a few of you along with them, too. Hence their promotion of vaccines, pharmaceuticals, chemotherapy and water fluoridation. We’ll title the video, “SKEPTICS COMMIT MASS SUICIDE BY DRINKING PHARMACEUTICALS AS IF THEY WERE KOOL-AID.” Jonestown, anyone?”
  • “Do you notice the irony here? The only medicines they’re willing to consume in large doses in public are homeopathic remedies! They won’t dare consume large quantities of the medicines they all say YOU should be taking! (The pharma drugs.)” [emphasis his]
  • what Adams seems to have missed is that the skeptics have no intention of killing themselves, so his bizarre claims that the 10:23 participants are psychopathic, self-loathing, and suicidal makes not even a little bit of sense. Skeptics know they aren’t going to die with these demonstrations, because homeopathy has no active ingredients and no evidence of efficacy.
  • The inventor of homeopathy himself, Samuel Hahnemann, believed that excessive doses of homeopathy could be harmful (see sections 275 and 276 of his Organon). Homeopaths are pros at retconning their own field to fit in with Hahnemann's original ideas (inventing new mechanisms, such as water memory and resonance, in the face of germ theory). So how does Adams reconcile this claim?
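
A quick arithmetic check on "diluted out to nothing" from the highlights above: a common homeopathic potency such as 30C means thirty successive 1:100 dilutions, a factor of 10^60. The sketch below assumes, generously, that the preparation starts from a full mole of the active substance.

```python
avogadro = 6.022e23          # molecules in one mole of starting substance (generous assumption)
dilution_factor = 100 ** 30  # a 30C potency: thirty serial 1:100 dilutions = 1e60

molecules_remaining = avogadro / dilution_factor
print(molecules_remaining)   # ~6e-37: effectively zero chance that even one molecule survives
```
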
Weiye Loh

When Science Trumps Policy: The Triumph of Insite « British Columbia « Canada...

  • As skeptics we obviously want to see science-based medicine and effective methods to improve public health. What this means is that we skeptics want to see medicine like vaccines promoted instead of homeopathy, but we also want to see science-based policy as well. What Insite has proven is that the harm reduction policy is working, in fact, working better than the “war on drugs” policy that the Conservative government has been supporting. Since the evidence is pointing to harm reduction being a more effective method of controlling the harmful effects of drug addiction in society, it follows that harm reduction as a policy should gain the support of our government and health care providers.
  • what was really distressing was that the Harper Government wasn’t just arguing against the evidence (saying for instance that it was either wrong or misguided) but actually arguing in spite of the evidence. What they were saying was that, yes, harm reduction appears to be working…but that’s irrelevant because that isn’t the policy we want to use.
Weiye Loh

Manipulating morals: scientists target drugs that improve behaviour | Science | The Gua...

  • Drugs that affect our moral thinking and behaviour already exist, but we tend not to think of them in that way. "[Prozac] lowers aggression and bitterness against the environment and so could be said to make people more agreeable. Or Oxytocin, the so-called love hormone ... increases feelings of social bonding and empathy while reducing anxiety," he said.
  • But would pharmacologically-induced altruism, for example, amount to genuine moral behaviour? Guy Kahane, deputy director of the Oxford Centre for Neuroethics and a Wellcome Trust biomedical ethics award winner, said: "We can change people's emotional responses but quite whether that improves their moral behaviour is not something science can answer."
  • it was unlikely people would "rush to take a pill that would make them morally better". "Becoming more trusting, nicer, less aggressive and less violent can make you more vulnerable to exploitation," he said. "On the other hand, it could improve your relationships or help your career." Kahane does not advocate putting morality drugs in the water supply, but he suggests that if administered widely they might help humanity to tackle global issues.
  • Ruud ter Meulen, chair in ethics in medicine and director of the centre for ethics in medicine at the University of Bristol, warned that while some drugs can improve moral behaviour, other drugs - and sometimes the same ones - can have the opposite effect. "While Oxytocin makes you more likely to trust and co-operate with others in your social group, it reduces empathy for those outside the group," Meulen said.
  •  Researchers have become very interested in developing biomedical technologies capable of intervening in the biological processes that affect moral behaviour and moral thinking, according to Dr Tom Douglas, a Wellcome Trust research fellow at Oxford University's Uehiro Centre. "It is a very hot area of scientific study right now."
Weiye Loh

The Decline Effect and the Scientific Method : The New Yorker

  • On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties.
  • the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.
  • Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard against the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
  • In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”
  • At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”
  • the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.
  • the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, in which no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology. (A toy simulation at the end of this list of excerpts shows how such a publication filter can inflate early results.)
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results.
  • Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
  • scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.)
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
  • Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.
  • The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
  • This suggests that the decline effect is actually a decline of illusion.
  • While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
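  • A minimal simulation sketch of the mechanism described above: a modest true effect, small early studies, and a journal filter that accepts only positive, statistically significant results. Every number here (the true effect size, the sample sizes, the study counts) is an invented illustration, not a reconstruction of any study discussed in these excerpts.

      # Toy model: publication bias plus regression to the mean produces a "decline effect".
      # Assumes a small true effect and journals that publish only positive, significant
      # findings from the initial wave of small studies. All parameters are illustrative.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      TRUE_EFFECT = 0.15          # true group difference, in standard-deviation units
      SMALL_N, LARGE_N = 20, 200  # per-group sample sizes: early studies vs. replications

      def run_study(n):
          """Simulate one two-group study; return the estimated effect and its p-value."""
          treated = rng.normal(TRUE_EFFECT, 1.0, n)
          control = rng.normal(0.0, 1.0, n)
          _, p = stats.ttest_ind(treated, control)
          return treated.mean() - control.mean(), p

      # Early literature: many small studies, but only positive, significant ones get published.
      early = [eff for eff, p in (run_study(SMALL_N) for _ in range(2000)) if p < 0.05 and eff > 0]

      # Later replications: large studies, reported regardless of outcome.
      later = [run_study(LARGE_N)[0] for _ in range(2000)]

      print(f"true effect:                {TRUE_EFFECT:.2f}")
      print(f"mean published early study: {np.mean(early):.2f}")  # inflated well above the truth
      print(f"mean later replication:     {np.mean(later):.2f}")  # close to the true value

    Plotting each simulated study's estimated effect against its sample size would also reproduce the asymmetric funnel Palmer describes: the large studies cluster near the true value, while the published small studies pile up on the positive side.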
Weiye Loh

Rationally Speaking: The sorry state of higher education - 0 views

  • two disconcerting articles crossed my computer screen, both highlighting the increasingly sorry state of higher education, though from very different perspectives. The first is “Ed Dante’s” (actually a pseudonym) piece in the Chronicle of Higher Education, entitled The Shadow Scholar. The second is Gregory Petsko’s A Faustian Bargain, published of all places in Genome Biology.
  • There is much to be learned by educators in the Shadow Scholar piece, except the moral that “Dante” would like us to take from it. The anonymous author writes: “Pointing the finger at me is too easy. Why does my business thrive? Why do so many students prefer to cheat rather than do their own work? Say what you want about me, but I am not the reason your students cheat.”
  • The point is that plagiarism and cheating happen for a variety of reasons, one of which is the existence of people like Mr. Dante and his company, who set up a business that is clearly unethical and should be illegal. So, pointing fingers at him and his ilk is perfectly reasonable. Yes, there obviously is a “market” for cheating in higher education, and there are complex reasons for it, but he is in a position similar to that of the drug dealer who insists that he is simply providing the commodity to satisfy society’s demand. Much too easy a way out, and one that doesn’t fly in the case of drug dealers and shouldn’t fly in the case of ghost cheaters.
  • ...16 more annotations...
  • As a teacher at the City University of New York, I am constantly aware of the possibility that my students might cheat on their tests. I do take some elementary precautionary steps.
  • Still, my job is not that of the policeman. My students are adults who theoretically are there to learn. If they don’t value that learning and prefer to pay someone else to fake it, so be it, ultimately it is they who lose in the most fundamental sense of the term. Just like drug addicts, to return to my earlier metaphor. And just as in that other case, it is enablers like Mr. Dante who simply can’t duck the moral blame.
  • An open letter to the president of SUNY-Albany, penned by molecular biologist Gregory Petsko. The SUNY-Albany president has recently announced the closing — for budgetary reasons — of the departments of French, Italian, Classics, Russian and Theater Arts at his university.
  • Petsko begins by taking on one of the alleged reasons why SUNY-Albany is slashing the humanities: low enrollment. He correctly points out that the problem can be solved overnight at the stroke of a pen: stop abdicating your responsibilities as educators and actually put constraints on what your students have to take in order to graduate. Make courses in English literature, foreign languages, philosophy and critical thinking, the arts and so on, mandatory or one of a small number of options that the students must consider in order to graduate.
  • But, you might say, that’s cheating the market! Students clearly don’t want to take those courses, and a business should cater to its customers. That type of reasoning is among the most pernicious and idiotic I’ve ever heard. Students are not clients (if anything, their parents, who usually pay the tuition, are), they are not shopping for a new bag or pair of shoes. They do not know what is best for them educationally, that’s why they go to college to begin with. If you are not convinced about how absurd the students-as-clients argument is, consider an analogy: does anyone with functioning brain cells argue that since patients in a hospital pay a bill, they should be dictating how the brain surgeon operates? I didn’t think so.
  • Petsko then tackles the second lame excuse given by the president of SUNY-Albany (and common among the upper administration of plenty of public universities): I can’t do otherwise because of the legislature’s draconian cuts. Except that university budgets are simply too complicated for there not to be any other option. I know this first hand: I’m on a special committee at my own college looking at how to creatively deal with budget cuts handed down to us from the very same (admittedly small-minded and dysfunctional) New York state legislature that has prompted SUNY-Albany’s action. As Petsko points out, the president there didn’t even think of involving the faculty and staff in a broad discussion of how to deal with the crisis; he simply announced the cuts on a Friday afternoon and then ran for cover. An example of very poor leadership to say the least, and downright hypocrisy considering all the talk that the same administrator has been dishing out about the university “community.”
  • Finally, there is the argument that the humanities don’t pay their own way, unlike (some of) the sciences (some of the time). That is indubitably true, but irrelevant. Universities are not businesses; they are places of higher learning. Yes, of course they need to deal with budgets, fundraising and all the rest. But the financial and administrative side has one goal and one goal only: to provide the best education to the students who attend that university.
  • That education simply must include the sciences, philosophy, literature, and the arts, as well as more technical or pragmatic offerings such as medicine, business and law. Why? Because that’s the kind of liberal education that makes for an informed and intelligent citizenry, without which our democracy is but empty talk, and our lives nothing but slavery to the marketplace.
  • Maybe this is not how education works in the US. I thought that general (or compulsory) education (i.e., up to high school) is designed to make sure that citizens in a democratic country can perform their civic duties. A balanced and well-rounded education, which includes a healthy mixture of science and humanities, is indeed very important for this purpose. However, college-level education is for personal growth and therefore the person must have a large say about what kind of classes he or she chooses to take. I am disturbed by Massimo's hospital analogy. Students are not ill. They don't go to college to be cured, or to be good citizens. They go to college to learn things that *they* want to learn. Patients are passive. Students are not. I agree that students typically do not know what kind of education is good for them. But who does?
  • students do have a say in their education. They pick their major, and there are electives. But I object to the idea that they can customize their major any way they want. That assumes they know what the best education for them is; they don't. That's the point of education.
  • The students are in your class to get a good grade; any learning that takes place is purely incidental. Those good grades will look good on their transcript and might convince a future employer that they are smart and thus are worth paying more.
  • I don't know what the dollar to GPA exchange rate is these days, but I don't doubt that there is one.
  • Just how many of your students do you think will remember the extensive complex jargon of philosophy more than a couple of months after they leave your classroom?
  • "and our lives nothing but slavery to the marketplace." We are there. Welcome. Where have you been all this time? In a capitalistic/plutocratic society, money is power (and free speech too, according to the Supreme Court). Money means a larger/better house/car/clothing/vacation than your neighbor and consequently better mating opportunities. You can mostly blame the women for that one, I think, just like the peacock's tail.
  • If a student of surgery fails to learn they might maim, kill or cripple someone. If an engineer of airplanes fails to learn they might design a faulty aircraft that fails and kills people. If a student of chemistry fails to learn they might design a faulty drug with unintended and unfortunate side effects, but what exactly would be the harm if a student of philosophy fails to learn what Aristotle had to say about elements or what Plato had to say about perfect forms? These things are so divorced from people's everyday activities as to be rendered all but meaningless.
  • human knowledge grows by leaps and bounds every day, but human brain capacity does not, so the portion of human knowledge you can personally hold gets smaller by the minute. Learn (and remember) as much as you can as fast as you can and you will still lose ground. You certainly have your work cut out for you emphasizing the importance of Thales in the Age of Twitter and whatever follows it next year.
Weiye Loh

Evidence: A Seductive but Slippery Concept - The Scientist - Magazine of the Life Sciences - 0 views

  • Much of what we know is wrong—or at least not definitively established to be right.
  • there were different schools of evidence-based medicine, reminding me of the feuding schools of psychoanalysis. For some it meant systematic reviews of well-conducted trials. For others it meant systematically searching for all evidence and then combining the evidence that passed a predefined quality hurdle. Quantification was essential for some but unimportant for others, and the importance of “clinical experience” was disputed.
  • There was also a backlash. Many doctors resented bitterly the implication that medicine had not always been based on evidence, while others saw unworthy people like statisticians and epidemiologists replacing the magnificence of clinicians. Many doctors thought evidence-based medicine a plot driven by insurance companies, politicians, and administrators in order to cut costs.
  • ...6 more annotations...
  • The discomfort of many clinicians comes from the fact that the data are derived mainly from clinical trials, which exclude the elderly and people with multiple problems. Yet in the “real world” of medicine, particularly general practice, most patients are elderly and most have multiple problems. So can the “evidence” be applied to these patients? Unthinking application of multiple evidence-based guidelines may cause serious problems, says Mike Rawlins, chairman of NICE.
  • There has always been anxiety that the zealots would insist evidence was all that was needed to make a decision, and in its early days NICE seemed to take this line. Critics quickly pointed out, however, that patients had things called values, as did clinicians, and that clinicians and patients needed to blend their values with the evidence in a way that was often a compromise.
  • Social scientists have tended to be wary of the reductionist approach of evidence-based medicine and have wanted a much broader range of information to be admissible.
  • Evidence-based medicine has been at its most confident when evaluating drug treatments, but many interventions in health care are far more complex than simply prescribing a drug. Insisting on randomized trials to evaluate these interventions may not only be inappropriate, but also misleading. Interventions may be stamped “ineffective” by the hardliners when they actually might offer substantial benefits. Then there is the constant confusion between “evidence of absence of effectiveness” and “absence of evidence of effectiveness”—two very different things.
  • even some of the strongest proponents of evidence-based medicine have become uneasy, as we have increasing evidence that drug companies have managed to manipulate data. In the heartland of evidence-based medicine—drug trials—the “evidence” may be unreliable and misleading.
  • All this doesn’t mean that evidence-based medicine should be abandoned. It means, rather, that we must never forget the complex relationship between evidence and truth.
Weiye Loh

Epiphenom: If God loves you, why take medicine? - 0 views

  • Sarah Finocchario-Kessler, at the University of Kansas, used data from one such drug trial to see what the effect of religious beliefs (and other psychological factors) was on medication taking.
  • One recent study looked at whether people with HIV took their medicine as they were supposed to. Most trials of new drugs monitor this, and it can be done easily, simply by using special bottles that record each time they're opened. (A small illustrative sketch of how adherence can be scored from such records follows these excerpts.)
  • people who used a passive religious deferral coping style (e.g., "I don’t try much of anything; simply expect God to take control") were less likely to take their medicine as often as they were supposed to. On the other hand, collaborative religious coping (e.g., "I work together with God as partners") or self-directing religious coping (e.g., "I make decisions about what to do without God’s help") had no effect on whether people took their medicines.
  • ...4 more annotations...
  • The biggest effect was with those people who scored high on the "God as locus of health control" measure - that means people who agreed with statements like "Whether or not my HIV disease improves is up to God." Although this had no effect on medication taking at 3 months, the halfway point of the study, by the end of the study (at 6 months) people who scored high on this measure were 42% less likely to be taking their medication regularly.
  • This study is interesting because these aren't folks who have any crazy ideas that medicine is useless. Remember, they signed up to take part in a drug study, presumably because they thought they might benefit. What's more, they stayed in the study right to the end, and did take their medicine most of the time. It's just that they were more likely than others to 'forget' it.
  • Now, this is a complicated picture in other ways. People who are at death's door (unlike the mostly healthy people in this study) seem to be more likely to ask for 'heroic' interventions to try to keep them alive if they have strong beliefs in God's will.
  • Maybe confronting your own imminent death triggers some reconsiderations about the mysterious workings of the almighty!
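  • A small illustrative sketch, referenced above, of how adherence might be scored from electronic bottle-opening records of the kind the study used. The once-daily schedule, the six-hour window, and the example dates are hypothetical assumptions, not the study's actual protocol.

      # Hypothetical adherence scoring from bottle-opening timestamps.
      # The dosing schedule and time window below are illustrative assumptions.
      from datetime import datetime, timedelta

      def adherence_rate(openings, start, days, window_hours=6):
          """Fraction of scheduled daily doses with a bottle opening within +/- window_hours."""
          window = timedelta(hours=window_hours)
          scheduled = [start + timedelta(days=i) for i in range(days)]
          taken = sum(any(abs(o - s) <= window for o in openings) for s in scheduled)
          return taken / len(scheduled)

      # Example: a bottle opened around 8 a.m. on 25 of 30 scheduled days.
      openings = [datetime(2011, 3, d + 1, 8, 5) for d in range(25)]
      print(f"adherence: {adherence_rate(openings, datetime(2011, 3, 1, 8, 0), days=30):.0%}")  # 83%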
Weiye Loh

You can't handle the truth - The Boston Globe - 0 views

  •  
    You can't handle the truth: A respected scientist set out to determine which drugs are actually the most dangerous -- and discovered that the answers are, well, awkward
Weiye Loh

nanopolitan: Medicine, Trials, Conflict of Interest, Disclosures - 0 views

  • Some 1500 documents revealed in litigation provide unprecedented insights into how pharmaceutical companies promote drugs, including the use of vendors to produce ghostwritten manuscripts and place them into medical journals.
  • Dozens of ghostwritten reviews and commentaries published in medical journals and supplements were used to promote unproven benefits and downplay harms of menopausal hormone therapy (HT), and to cast raloxifene and other competing therapies in a negative light.
  • the pharmaceutical company Wyeth used ghostwritten articles to mitigate the perceived risks of breast cancer associated with HT, to defend the unsupported cardiovascular “benefits” of HT, and to promote off-label, unproven uses of HT such as the prevention of dementia, Parkinson's disease, vision problems, and wrinkles.
  • ...7 more annotations...
  • Given the growing evidence that ghostwriting has been used to promote HT and other highly promoted drugs, the medical profession must take steps to ensure that prescribers renounce participation in ghostwriting, and to ensure that unscrupulous relationships between industry and academia are avoided rather than courted.
  • Twenty-five out of 32 highly paid consultants to medical device companies in 2007, or their publishers, failed to reveal the financial connections in journal articles the following year, according to a [recent] study.
  • The study compared major payments to consultants by orthopedic device companies with financial disclosures the consultants later made in medical journal articles, and found them lacking in public transparency. “We found a massive, dramatic system failure,” said David J. Rothman, a professor and president of the Institute on Medicine as a Profession at Columbia University, who wrote the study with two other Columbia researchers, Susan Chimonas and Zachary Frosch.
  • Carl Elliott in The Chronicle of Higher Education: The Secret Lives of Big Pharma's 'Thought Leaders':
  • See also a related NYTimes report -- Menopause, as Brought to You by Big Pharma by Natasha Singer and Duff Wilson -- from December 2009. Duff Wilson reports in the NYTimes: Medical Industry Ties Often Undisclosed in Journals:
  • Pharmaceutical companies hire KOL's [Key Opinion Leaders] to consult for them, to give lectures, to conduct clinical trials, and occasionally to make presentations on their behalf at regulatory meetings or hearings.
  • KOL's do not exactly endorse drugs, at least not in ways that are too obvious, but their opinions can be used to market them—sometimes by word of mouth, but more often by quasi-academic activities, such as grand-rounds lectures, sponsored symposia, or articles in medical journals (which may be ghostwritten by hired medical writers). While pharmaceutical companies seek out high-status KOL's with impressive academic appointments, status is only one determinant of a KOL's influence. Just as important is the fact that a KOL is, at least in theory, independent. [...]
  •  
    Medicine, Trials, Conflict of Interest, Disclosures: Just a bunch of links -- mostly from the US -- that paint a troubling picture of the state of ethics in biomedical fields:
Weiye Loh

When big pharma pays a publisher to publish a fake journal... : Respectful Insolence - 0 views

  • pharmaceutical company Merck, Sharp & Dohme paid Elsevier to produce a fake medical journal that, to any superficial examination, looked like a real medical journal but was in reality nothing more than advertising for Merck.
  • As reported by The Scientist: Merck paid an undisclosed sum to Elsevier to produce several volumes of a publication that had the look of a peer-reviewed medical journal, but contained only reprinted or summarized articles--most of which presented data favorable to Merck products--that appeared to act solely as marketing tools with no disclosure of company sponsorship. "I've seen no shortage of creativity emanating from the marketing departments of drug companies," Peter Lurie, deputy director of the public health research group at the consumer advocacy nonprofit Public Citizen, said, after reviewing two issues of the publication obtained by The Scientist. "But even for someone as jaded as me, this is a new wrinkle." The Australasian Journal of Bone and Joint Medicine, which was published by Excerpta Medica, a division of scientific publishing juggernaut Elsevier, is not indexed in the MEDLINE database, and has no website (not even a defunct one). The Scientist obtained two issues of the journal: Volume 2, Issues 1 and 2, both dated 2003. The issues contained little in the way of advertisements apart from ads for Fosamax, a Merck drug for osteoporosis, and Vioxx.
  • there are numerous "throwaway" journals out there. "Throwaway" journals tend to be defined as journals that are provided free of charge, have a lot of advertising (a high "advertising-to-text" ratio, as it is often described), and contain no original investigations. Other relevant characteristics include: support that comes virtually entirely from advertising revenue; ads placed within article pages, interrupting the articles, rather than between articles, as is the case with most medical journals that accept ads; content consisting almost entirely of reviews of existing material, of variable (and often dubious) quality; a parasitic character, with throwaways summarizing peer-reviewed research from real journals; questionable (at best) peer review, catering to an uninvolved and uncritical readership; and no original work.
Weiye Loh

Titans of science: David Attenborough meets Richard Dawkins | Science | The Guardian - 0 views

  • What is the one bit of science from your field that you think everyone should know?David Attenborough: The unity of life.Richard Dawkins: The unity of life that comes about through evolution, since we're all descended from a single common ancestor. It's almost too good to be true, that on one planet this extraordinary complexity of life should have come about by what is pretty much an intelligible process. And we're the only species capable of understanding it.
  • RD: I know you're working on a programme about Cambrian and pre-Cambrian fossils, David. A lot of people might think, "These are very old animals, at the beginning of evolution; they weren't very good at what they did." I suspect that isn't the case?DA: They were just as good, but as generalists, most were ousted from the competition.RD: So it probably is true there's a progressive element to evolution in the short term but not in the long term – that when a lineage branches out, it gets better for about five million years but not 500 million years. You wouldn't see progressive improvement over that kind of time scale.DA: No, things get more and more specialised. Not necessarily better.RD: The "camera" eyes of any modern animal would be better than what had come before.DA: Certainly... but they don't elaborate beyond function. When I listen to a soprano sing a Handel aria with an astonishing coloratura from that particular larynx, I say to myself, there has to be a biological reason that was useful at some stage. The larynx of a human being did not evolve without having some function. And the only function I can see is sexual attraction.RD: Sexual selection is important and probably underrated.DA: What I like to think is that if I think the male bird of paradise is beautiful, my appreciation of it is precisely the same as a female bird of paradise.
    • Weiye Loh
       
      Is survivability really all about sex and the reproduction of future generations? 
  • People say Richard Feynman had one of these extraordinary minds that could grapple with ideas of which I have no concept. And you hear all the ancillary bits – like he was a good bongo player – that make him human. So I admire this man who could not only deal with string theory but also play the bongos. But he is beyond me. I have no idea what he was talking of.
  • ...6 more annotations...
  • RD: There does seem to be a sense in which physics has gone beyond what human intuition can understand. We shouldn't be too surprised about that because we're evolved to understand things that move at a medium pace at a medium scale. We can't cope with the very tiny scale of quantum physics or the very large scale of relativity.
  • DA: A physicist will tell me that this armchair is made of vibrations and that it's not really here at all. But when Samuel Johnson was asked to prove the material existence of reality, he just went up to a big stone and kicked it. I'm with him.
  • RD: It's intriguing that the chair is mostly empty space and the thing that stops you going through it is vibrations or energy fields. But it's also fascinating that, because we're animals that evolved to survive, what solidity is to most of us is something you can't walk through.
  • the science of the future may be vastly different from the science of today, and you have to have the humility to admit when you don't know. But instead of filling that vacuum with goblins or spirits, I think you should say, "Science is working on it."
  • DA: Yes, there was a letter in the paper [about Stephen Hawking's comments on the nonexistence of God] saying, "It's absolutely clear that the function of the world is to declare the glory of God." I thought, what does that sentence mean?!
  • What is the most difficult ethical dilemma facing science today?DA: How far do you go to preserve individual human life?RD: That's a good one, yes.DA: I mean, what are we to do with the NHS? How can you put a value in pounds, shillings and pence on an individual's life? There was a case with a bowel cancer drug – if you gave that drug, which costs several thousand pounds, it continued life for six weeks on. How can you make that decision?
  •  
    Of mind and matter: David Attenborough meets Richard Dawkins. We paired up Britain's most celebrated scientists to chat about the big issues: the unity of life, ethics, energy, Handel - and the joy of riding a snowmobile
Weiye Loh

nanopolitan: Plagiarizing from Wikipedia? - 0 views

  • This retraction notice made me go, "WTF were you folks thinking?"
  • Here's the text of the retraction notice: This article has been retracted. Please see Elsevier Policy on Article Withdrawal (http://www.elsevier.com/locate/withdrawalpolicy). Reason: This article has been retracted at the request of the editor as the authors have plagiarised part of several papers that had already appeared in several journals. One of the conditions of submission of a paper for publication is that authors declare explicitly that their work is original and has not appeared in a publication elsewhere. Re-use of any data should be appropriately cited. As such this article represents a severe abuse of the scientific publishing system. The scientific community takes a very strong view on this matter and we apologise to readers of the journal that this was not detected during the submission process. From a limited, non-exhaustive check of the text, several elements of the text had been plagiarised from the following list of sources: "Dihydroxyacetone" (Wikipedia, the free encyclopedia); "Encyclopedia: Dihydroxyacetone" (StateMaster).
  • From a quick scan, I found this section in the paper ... In the 1950s, Eva Wittgenstein at the University of Cincinnati did further research with dihydroxyacetone. Her studies involved using dihydroxyacetone as an oral drug for treating children with glycogen storage disease (Wittgenstein and Berry, 1960). The children received large oral doses of dihydroxyacetone, and sometimes spit or spilled the substance onto their skin. Healthcare workers noticed that the skin turned brown after a few hours of dihydroxyacetone exposure. Eva Wittgenstein continued to experiment with this unique substance, painting dihydroxyacetone liquid solutions onto her own skin. She was able to consistently reproduce the pigmentation effect, and noted that dihydroxyacetone did not penetrate beyond the stratum corneum, or dead skin surface layer. which is very similar to this section in the Wikipedia entry: In the 1950s Eva Wittgenstein at the University of Cincinnati did further research with dihydroxyacetone.[4][5][6][7] Her studies involved using DHA as an oral drug for assisting children with glycogen storage disease. The children received large doses of DHA by mouth, and sometimes spat or spilled the substance onto their skin. Healthcare workers noticed that the skin turned brown after a few hours of DHA exposure. Eva Wittgenstein continued to experiment with this unique substance, painting DHA liquid solutions onto her own skin. She was able to consistently reproduce the pigmentation effect, and noted that DHA did not penetrate beyond the stratum corneum, or dead skin surface layer.
  • ...1 more annotation...
  • I wonder if they can use the "Ananda Kumar gambit": claim that it was they who wrote the Wikipedia sentence and so they were justified in re-using their own material. (A toy sketch at the end of these excerpts quantifies the kind of textual overlap quoted above.)
  •  
    Plagiarizing from Wikipedia?
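  • A toy text-comparison sketch, referenced above. A character-level similarity ratio is only a crude stand-in for the overlap-detection tools journals actually use (such as CrossCheck/iThenticate), and the two snippets are abridged from the passages quoted in these excerpts.

      # Crude quantification of the overlap between the paper and the Wikipedia entry.
      import difflib

      paper = ("In the 1950s, Eva Wittgenstein at the University of Cincinnati did further "
               "research with dihydroxyacetone. Her studies involved using dihydroxyacetone "
               "as an oral drug for treating children with glycogen storage disease.")
      wikipedia = ("In the 1950s Eva Wittgenstein at the University of Cincinnati did further "
                   "research with dihydroxyacetone. Her studies involved using DHA as an oral "
                   "drug for assisting children with glycogen storage disease.")

      ratio = difflib.SequenceMatcher(None, paper, wikipedia).ratio()
      print(f"character-level similarity: {ratio:.2f}")  # far higher than independent wording would produce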
Weiye Loh

To Die of Having Lived: an article by Richard Rapport | The American Scholar - 0 views

  • Although it may be a form of arrogance to attempt the management of one’s own death, is it better to surrender that management to the arrogance of someone else? We know we can’t avoid dying, but perhaps we can avoid dying badly.
  • Dodging a bad death has become more complicated over the past 30 or 40 years. Before the advent of technological creations that permit vital functions to be sustained so well artificially, medical ethics were less obstructed by abstract definitions of death.
  • generally agreed upon criteria for brain death have simplified some of these confusions, but they have not solved them. The broad middle ground between our usual health and consciousness as the expected norm on the one hand, and clear death of the brain on the other, lacks certainty.
    • Weiye Loh
       
      Isn't it always the case? Dichotomous relationships aren't clearly and equally demarcated, but somehow we attempt to split them up... through polemical discourses and rhetoric...
  • ...13 more annotations...
  • Doctors and other health-care workers can provide patients and families with probabilities for improvement or recovery, but statistics are hardly what is wanted. Even after profound injury or the diagnosis of an illness that statistically is nearly certain to be fatal, what people hear is the word nearly. How do we not allow the death of someone who might be saved? How do we avoid the equally intolerable salvation of a clinically dead person?
    • Weiye Loh
       
      In what situations do we hear the word "nearly" and in what situations do we hear the word "certain"? When we're dealing with a person's life, we hear "nearly", but when we're dealing with climate science we hear "certain"? 
  • Injecting political agendas into these end-of-life complexities only confuses the problem without providing a solution.
  • The questions are how, when, and on whose terms we depart. It is curious that people might be convinced to avoid confronting death while they are healthy, and that society tolerates ad hominem arguments that obstruct rational debate over an authentic problem of ethics in an uncertain world.
  • Any seriously ill older person who winds up in a modern CCU immediately yields his autonomy. Even if the doctors, nurses, and staff caring for him are intelligent, properly educated, humanistically motivated, and correct in the diagnosis, they are manipulated not only by the tyranny of technology but also by the rules established in their hospital. In addition, regulations of local and state licensing agencies and the federal government dictate the parameters of what the hospital workers do and how they do it, and every action taken is heavily influenced by legal experts committed to their client’s best interest—values frequently different from the patient’s. Once an acutely ill patient finds himself in this situation, everything possible will be done to save him; he is in no position to offer an opinion.
  • Eventually, after hours or days (depending on the illness and who is involved in the care), the wisdom of continuing treatment may come into question. But by then the patient will likely have been intubated and placed on a ventilator, a feeding tube may have been inserted, a catheter placed in the bladder, IVs started in peripheral veins or threaded through a major blood vessel near the heart, and monitors attached to record an EKG, arterial blood pressure, temperature, respirations, oxygen saturation, even pressure inside the skull. Sequential pressure devices will have been wrapped around the legs. All the digital marvels have alarms, so if one isn’t working properly, an annoying beep, like the sound of a backing truck, will fill the patient’s room. Vigilant nurses will add drugs by the dozens to the IV or push them into ports. Families will hover uncertainly. Meanwhile, tens and perhaps hundreds of thousands of dollars will have been transferred from one large corporation—an insurer of some kind—to another large corporation—a health care delivery system of some kind.
    • Weiye Loh
       
      Perhaps, then, the value of life is not so much life in itself, but rather the transactions it generates. 
  • While the expense of the drugs, manpower, and technology required to make a diagnosis and deliver therapy does sop up resources and thereby deny treatment that might be more fruitful for others, including the 46.3 million Americans who, according to the Census Bureau, have no health insurance, that isn’t the real dilemma of the critical care unit.
  • the problem isn’t getting into or out of a CCU; the predicament is in knowing who should be there in the first place.
  • Before we become ill, we tend to assume that everything can be treated and treated successfully. The prelate in Willa Cather’s Death Comes for the Archbishop was wiser. Approaching the end, he said to a younger priest, “I shall not die of a cold, my son. I shall die of having lived.”
  • The best way to avoid unwanted admission to a critical care unit at or near the end of life is to write an advance directive (a living will or durable power of attorney for health care) when healthy.
  • not many people do this and, more regrettably, often the document is not included in the patient’s chart or it goes unnoticed.
  • Since we are sure to die of having lived, we should prepare for death before the last minute. Entire corporations are dedicated to teaching people how to retire well. All of their written materials, Web sites, and seminars begin with the same advice: start planning early. Shouldn’t we at least occasionally think about how we want to leave our lives?
  • Flannery O’Connor, who died young of systemic lupus, wrote, “Sickness before death is a very appropriate thing and I think those who don’t have it miss one of God’s mercies.”
  • Because we understand the metaphor of conflict so well, we are easily sold on the idea that we must resolutely fight against our afflictions (although there was once an article in The Onion titled “Man Loses Cowardly Battle With Cancer”). And there is a place to contest an abnormal metabolism, a mutation, a trauma, or an infection. But there is also a place to surrender. When the organs have failed, when the mind has dissolved, when the body that has faithfully housed us for our lifetime has abandoned us, what’s wrong with giving up?
  •  
    To Die of Having Lived (Spring 2010): A neurological surgeon reflects on what patients and their families should and should not do when the end draws near