
New Media Ethics 2009 course: Group items tagged Placebo


Weiye Loh

Meet the Ethical Placebo: A Story that Heals | NeuroTribes

  • In modern medicine, placebos are associated with another form of deception — a kind that has long been thought essential for conducting randomized clinical trials of new drugs, the statistical rock upon which the global pharmaceutical industry was built. One group of volunteers in an RCT gets the novel medication; another group (the “control” group) gets pills or capsules that look identical to the allegedly active drug, but contain only an inert substance like milk sugar. These faux drugs are called placebos.
  • Inevitably, the health of some people in both groups improves, while the health of others grows worse. Symptoms of illness fluctuate for all sorts of reasons, including regression to the mean.
  • Since the goal of an RCT, from Big Pharma’s perspective, is to demonstrate the effectiveness of a new drug, the return to robust health of a volunteer in the control group is considered a statistical distraction. If too many people in the trial get better after downing sugar pills, the real drug will look worse by comparison — sometimes fatally so for the purpose of earning approval from the Food and Drug Administration. (A toy two-arm comparison illustrating this appears after this list.)
  • ...12 more annotations...
  • For a complex and somewhat mysterious set of reasons, it is becoming increasingly difficult for experimental drugs to prove their superiority to sugar pills in RCTs
  • Only in recent years, however, has it become obvious that the abatement of symptoms in control-group volunteers — the so-called placebo effect — is worthy of study outside the context of drug trials, and is in fact profoundly good news to anyone but investors in Pfizer, Roche, and GlaxoSmithKline.
  • The emerging field of placebo research has revealed that the body’s repertoire of resilience contains a powerful self-healing network that can help reduce pain and inflammation, lower the production of stress chemicals like cortisol, and even tame high blood pressure and the tremors of Parkinson’s disease.
  • more and more studies each year — by researchers like Fabrizio Benedetti at the University of Turin, author of a superb new book called The Patient’s Brain, and neuroscientist Tor Wager at the University of Colorado — demonstrate that the placebo effect might be useful in treating a wide range of ills. Then why aren’t doctors supposed to use it?
  • The medical establishment’s ethical problem with placebo treatment boils down to the notion that for fake drugs to be effective, doctors must lie to their patients. It has been widely assumed that if a patient discovers that he or she is taking a placebo, the mind/body password will no longer unlock the network, and the magic pills will cease to do their job.
  • For “Placebos Without Deception,” the researchers tracked the health of 80 volunteers with irritable bowel syndrome for three weeks as half of them took placebos and the other half didn’t.
  • In a previous study published in the British Medical Journal in 2008, Kaptchuk and Kirsch demonstrated that placebo treatment can be highly effective for alleviating the symptoms of IBS. This time, however, instead of the trial being “blinded,” it was “open.” That is, the volunteers in the placebo group knew that they were getting only inert pills — which they were instructed to take religiously, twice a day. They were also informed that, just as Ivan Pavlov trained his dogs to drool at the sound of a bell, the body could be trained to activate its own built-in healing network by the act of swallowing a pill.
  • In other words, in addition to the bogus medication, the volunteers were given a true story — the story of the placebo effect. They also received the care and attention of clinicians, which have been found in many other studies to be crucial for eliciting placebo effects. The combination of the story and a supportive clinical environment were enough to prevail over the knowledge that there was really nothing in the pills. People in the placebo arm of the trial got better — clinically, measurably, significantly better — on standard scales of symptom severity and overall quality of life. In fact, the volunteers in the placebo group experienced improvement comparable to patients taking a drug called alosetron, the standard of care for IBS. Meet the ethical placebo: a powerfully effective faux medication that meets all the standards of informed consent.
  • The study is hardly the last word on the subject, but more like one of the first. Its modest sample size and brief duration leave plenty of room for followup research. (What if “ethical” placebos wear off more quickly than deceptive ones? Does the fact that most of the volunteers in this study were women have any bearing on the outcome? Were any of the volunteers skeptical that the placebo effect is real, and did that affect their response to treatment?) Before some eager editor out there composes a tweet-baiting headline suggesting that placebos are about to drive Big Pharma out of business, he or she should appreciate the fact that the advent of AMA-approved placebo treatments would open numerous cans of fascinatingly tangled worms. For example, since the precise nature of placebo effects is shaped largely by patients’ expectations, would the advertised potency and side effects of theoretical products like Placebex and Therastim be subject to change by Internet rumors, requiring perpetual updating?
  • It’s common to use the word “placebo” as a synonym for “scam.” Economists talk about placebo solutions to our economic catastrophe (tax cuts for the rich, anyone?). Online skeptics mock the billion-dollar herbal-medicine industry by calling it Big Placebo. The fact that our brains and bodies respond vigorously to placebos given in warm and supportive clinical environments, however, turns out to be very real.
  • We’re also discovering that the power of narrative is embedded deeply in our physiology.
  • in the real world of doctoring, many physicians prescribe medications at dosages too low to have an effect on their own, hoping to tap into the body’s own healing resources — though this is mostly acknowledged only in whispers, as a kind of trade secret.
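A minimal sketch of the trial arithmetic described in the annotations above (see the note on control groups): it simulates a two-arm RCT in which both arms improve through a placebo response and the drug adds only a modest extra effect. The sample size, placebo response, and drug effect are invented for illustration and are not taken from any study cited here.

```python
# Toy RCT: both arms show a placebo response; the drug adds only a small extra effect.
# All effect sizes and sample sizes are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 40                      # volunteers per arm
placebo_response = 15.0     # average symptom improvement seen in BOTH arms
drug_effect = 3.0           # extra improvement attributable to the drug itself

control = rng.normal(placebo_response, 10.0, n)
treated = rng.normal(placebo_response + drug_effect, 10.0, n)

t, p = stats.ttest_ind(treated, control)
print(f"mean improvement: control={control.mean():.1f}, drug={treated.mean():.1f}")
print(f"two-sample t-test p-value: {p:.3f}")
# With a large placebo response in both arms and only a small additional drug effect,
# the two arms are often statistically indistinguishable: the "distraction" sponsors worry about.
```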
Weiye Loh

News Clips: Pinning down acupuncture: It's a placebo

  • some doctors seem to have embraced even disproven remedies. Take, for instance, a review of acupuncture research that appeared last July in the New England Journal of Medicine. This highly respected journal is one of the most widely read by doctors across specialities. In Acupuncture For Chronic Low Back Pain, the authors reviewed clinical trials done to assess if acupuncture actually helps in chronic low back pain. The most important meta-analysis available was a 2008 study involving 6,359 patients, which 'showed that real acupuncture treatments were no more effective than sham acupuncture treatments'.
  • The authors then editorialised: 'There was nevertheless evidence that both real acupuncture and sham acupuncture were more effective than no treatment and that acupuncture can be a useful supplement to other forms of conventional therapy for low back pain.'
  • First, they admit that pooled clinical trials of the best sort show that real acupuncture does no better than sham acupuncture. This should mean that acupuncture does not work - full stop. But then they say that sham and real acupuncture each work as well as the other and are thus useful. Translation: Please use acupuncture as a placebo on your patients; just don't let them know it is a placebo. (A toy three-arm comparison of this logic appears after this list.)
  • ...6 more annotations...
  • I should add that I am not criticising TCM per se. Only acupuncture, a facet of TCM, albeit its most dramatic, is being scrutinised here. Chinese herbology must be analysed on its own merits. Interestingly, although acupuncture may be TCM's poster boy today, the Chinese physician in days of yore would have looked askance at it. Instead, his practice and prestige were based upon his grasp of the Chinese pharmacopoeia.
  • Acupuncture was left to the shamans and bloodletters. After all, it was grounded, not in the knowledge of which herbs were best for what conditions, but in astrology.
  • In Giovanni Maciocia's 2005 book, The Foundations Of Chinese Medicine: A Comprehensive Text For Acupuncturists And Herbalists, there is a chart showing the astrological provenance of acupuncture. The chart shows how the 12 main acupuncture meridians and the 12 main body segments correspond to the 12 Houses of the Chinese zodiac.
  • In Chinese cosmology, all life is animated by a numinous force called qi, the flow of which mirrors the sun's apparent 'movement' during the year through the ecliptic. (The ecliptic is the imaginary plane of the earth's orbit around the sun). Moreover, everything in the Chinese zodiac is mirrored on Earth and in Man. This was taught even in the earliest systematised TCM text, the Yellow Emperor's Canon Of Medicine, thus: 'Heaven is covered with constellations, Earth with waterways, and man with channels.' This 'as above, so below' doctrine means that if there is qi flowing around in the imaginary closed loop of the zodiac, there is qi flowing correspondingly in the body's closed loop of imaginary meridians as well.
  • Note that not only is acupuncture astrological in origin but also the astrology is based on a model of the universe which has the earth at its centre. This geocentric model was an erroneous idea widely accepted before the Copernican revolution.
  • So should doctors check the daily horoscopes of their patients?
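As referenced in the annotation on the NEJM editorial, here is a toy three-arm comparison ('real' acupuncture, sham acupuncture, no treatment) on invented pain scores. It only illustrates the statistical logic being criticised; it is not the cited meta-analysis or its data.

```python
# Toy three-arm comparison: real acupuncture, sham acupuncture, no treatment.
# Scores are invented; lower = less pain.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 60
no_treatment = rng.normal(60, 15, n)   # untreated patients
sham         = rng.normal(50, 15, n)   # placebo response only
real         = rng.normal(50, 15, n)   # same distribution as sham: needling adds nothing

for label, a, b in [("real vs sham", real, sham),
                    ("real vs none", real, no_treatment),
                    ("sham vs none", sham, no_treatment)]:
    p = stats.ttest_ind(a, b).pvalue
    print(f"{label}: p = {p:.4f}")
# Typical output: real vs sham is non-significant, while both beat "no treatment",
# i.e. the benefit is the placebo response, not the needles.
```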
Weiye Loh

Odds Are, It's Wrong - Science News

  • science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings.
  • a mutant form of math has deflected science’s heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
  • science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.
  • ...24 more annotations...
  • Experts in the math of probability and statistics are well aware of these problems and have for decades expressed concern about them in major journals. Over the years, hundreds of published papers have warned that science’s love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn’t believe what you read in the scientific literature.
  • “There are more false claims made in the medical literature than anybody appreciates,” he says. “There’s no question about that.” Nobody contends that all of science is wrong, or that it hasn’t compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. “A lot of scientists don’t understand statistics,” says Goodman. “And they don’t understand statistics because the statistics don’t make sense.”
  • In 2007, for instance, researchers combing the medical literature found numerous studies linking a total of 85 genetic variants in 70 different genes to acute coronary syndrome, a cluster of heart problems. When the researchers compared genetic tests of 811 patients who had the syndrome with a group of 650 (matched for sex and age) who didn’t, only one of the suspect gene variants turned up substantially more often in those with the syndrome — a number to be expected by chance. “Our null results provide no support for the hypothesis that any of the 85 genetic variants tested is a susceptibility factor” for the syndrome, the researchers reported in the Journal of the American Medical Association. How could so many studies be wrong? Because their conclusions relied on “statistical significance,” a concept at the heart of the mathematical analysis of modern scientific experiments.
  • Statistical significance is a phrase that every science graduate student learns, but few comprehend. While its origins stretch back at least to the 19th century, the modern notion was pioneered by the mathematician Ronald A. Fisher in the 1920s. His original interest was agriculture. He sought a test of whether variation in crop yields was due to some specific intervention (say, fertilizer) or merely reflected random factors beyond experimental control. Fisher first assumed that fertilizer caused no difference — the “no effect” or “null” hypothesis. He then calculated a number called the P value, the probability that an observed yield in a fertilized field would occur if fertilizer had no real effect. If P is less than .05 — meaning the chance of a fluke is less than 5 percent — the result should be declared “statistically significant,” Fisher arbitrarily declared, and the no effect hypothesis should be rejected, supposedly confirming that fertilizer works. Fisher’s P value eventually became the ultimate arbiter of credibility for science results of all sorts. (A minimal sketch of this calculation, on invented data, appears after this list.)
  • But in fact, there’s no logical basis for using a P value from a single study to draw any conclusion. If the chance of a fluke is less than 5 percent, two possible conclusions remain: There is a real effect, or the result is an improbable fluke. Fisher’s method offers no way to know which is which. On the other hand, if a study finds no statistically significant effect, that doesn’t prove anything, either. Perhaps the effect doesn’t exist, or maybe the statistical test wasn’t powerful enough to detect a small but real effect.
  • Soon after Fisher established his system of statistical significance, it was attacked by other mathematicians, notably Egon Pearson and Jerzy Neyman. Rather than testing a null hypothesis, they argued, it made more sense to test competing hypotheses against one another. That approach also produces a P value, which is used to gauge the likelihood of a “false positive” — concluding an effect is real when it actually isn’t. What  eventually emerged was a hybrid mix of the mutually inconsistent Fisher and Neyman-Pearson approaches, which has rendered interpretations of standard statistics muddled at best and simply erroneous at worst. As a result, most scientists are confused about the meaning of a P value or how to interpret it. “It’s almost never, ever, ever stated correctly, what it means,” says Goodman.
  • experimental data yielding a P value of .05 means that there is only a 5 percent chance of obtaining the observed (or more extreme) result if no real effect exists (that is, if the no-difference hypothesis is correct). But many explanations mangle the subtleties in that definition. A recent popular book on issues involving science, for example, states a commonly held misperception about the meaning of statistical significance at the .05 level: “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.”
  • That interpretation commits an egregious logical error (technical term: “transposed conditional”): confusing the odds of getting a result (if a hypothesis is true) with the odds favoring the hypothesis if you observe that result. A well-fed dog may seldom bark, but observing the rare bark does not imply that the dog is hungry. A dog may bark 5 percent of the time even if it is well-fed all of the time. (See Box 2) (A numerical illustration of this error appears after this list.)
    • Weiye Loh
       
      Does the problem, then, lie not in statistics but in the interpretation of statistics? Is the fallacy of appeal to probability at work in such interpretations?
  • Another common error equates statistical significance to “significance” in the ordinary use of the word. Because of the way statistical formulas work, a study with a very large sample can detect “statistical significance” for a small effect that is meaningless in practical terms. A new drug may be statistically better than an old drug, but for every thousand people you treat you might get just one or two additional cures — not clinically significant. Similarly, when studies claim that a chemical causes a “significantly increased risk of cancer,” they often mean that it is just statistically significant, possibly posing only a tiny absolute increase in risk. (A small simulation of this large-sample effect appears after this list.)
  • Statisticians perpetually caution against mistaking statistical significance for practical importance, but scientific papers commit that error often. Ziliak studied journals from various fields — psychology, medicine and economics among others — and reported frequent disregard for the distinction.
  • “I found that eight or nine of every 10 articles published in the leading journals make the fatal substitution” of equating statistical significance to importance, he said in an interview. Ziliak’s data are documented in the 2008 book The Cult of Statistical Significance, coauthored with Deirdre McCloskey of the University of Illinois at Chicago.
  • Multiplicity of mistakes: Even when “significance” is properly defined and P values are carefully calculated, statistical inference is plagued by many other problems. Chief among them is the “multiplicity” issue — the testing of many hypotheses simultaneously. When several drugs are tested at once, or a single drug is tested on several groups, chances of getting a statistically significant but false result rise rapidly. (A simulation of this, echoing the 85-gene-variant example above, appears after this list.)
  • Recognizing these problems, some researchers now calculate a “false discovery rate” to warn of flukes disguised as real effects. And genetics researchers have begun using “genome-wide association studies” that attempt to ameliorate the multiplicity issue (SN: 6/21/08, p. 20).
  • Many researchers now also commonly report results with confidence intervals, similar to the margins of error reported in opinion polls. Such intervals, usually given as a range that should include the actual value with 95 percent confidence, do convey a better sense of how precise a finding is. But the 95 percent confidence calculation is based on the same math as the .05 P value and so still shares some of its problems.
  • Statistical problems also afflict the “gold standard” for medical research, the randomized, controlled clinical trials that test drugs for their ability to cure or their power to harm. Such trials assign patients at random to receive either the substance being tested or a placebo, typically a sugar pill; random selection supposedly guarantees that patients’ personal characteristics won’t bias the choice of who gets the actual treatment. But in practice, selection biases may still occur, Vance Berger and Sherri Weinstein noted in 2004 in Controlled Clinical Trials. “Some of the benefits ascribed to randomization, for example that it eliminates all selection bias, can better be described as fantasy than reality,” they wrote.
  • Randomization also should ensure that unknown differences among individuals are mixed in roughly the same proportions in the groups being tested. But statistics do not guarantee an equal distribution any more than they prohibit 10 heads in a row when flipping a penny. With thousands of clinical trials in progress, some will not be well randomized. And DNA differs at more than a million spots in the human genetic catalog, so even in a single trial differences may not be evenly mixed. In a sufficiently large trial, unrandomized factors may balance out, if some have positive effects and some are negative. (See Box 3) Still, trial results are reported as averages that may obscure individual differences, masking beneficial or harmful effects and possibly leading to approval of drugs that are deadly for some and denial of effective treatment to others.
  • Another concern is the common strategy of combining results from many trials into a single “meta-analysis,” a study of studies. In a single trial with relatively few participants, statistical tests may not detect small but real and possibly important effects. In principle, combining smaller studies to create a larger sample would allow the tests to detect such small effects. But statistical techniques for doing so are valid only if certain criteria are met. For one thing, all the studies conducted on the drug must be included — published and unpublished. And all the studies should have been performed in a similar way, using the same protocols, definitions, types of patients and doses. When combining studies with differences, it is necessary first to show that those differences would not affect the analysis, Goodman notes, but that seldom happens. “That’s not a formal part of most meta-analyses,” he says.
  • Meta-analyses have produced many controversial conclusions. Common claims that antidepressants work no better than placebos, for example, are based on meta-analyses that do not conform to the criteria that would confer validity. Similar problems afflicted a 2007 meta-analysis, published in the New England Journal of Medicine, that attributed increased heart attack risk to the diabetes drug Avandia. Raw data from the combined trials showed that only 55 people in 10,000 had heart attacks when using Avandia, compared with 59 people per 10,000 in comparison groups. But after a series of statistical manipulations, Avandia appeared to confer an increased risk.
  • combining small studies in a meta-analysis is not a good substitute for a single trial sufficiently large to test a given question. “Meta-analyses can reduce the role of chance in the interpretation but may introduce bias and confounding,” Hennekens and DeMets write in the Dec. 2 Journal of the American Medical Association. “Such results should be considered more as hypothesis formulating than as hypothesis testing.”
  • Some studies show dramatic effects that don’t require sophisticated statistics to interpret. If the P value is 0.0001 — a hundredth of a percent chance of a fluke — that is strong evidence, Goodman points out. Besides, most well-accepted science is based not on any single study, but on studies that have been confirmed by repetition. Any one result may be likely to be wrong, but confidence rises quickly if that result is independently replicated. “Replication is vital,” says statistician Juliet Shaffer, a lecturer emeritus at the University of California, Berkeley. And in medicine, she says, the need for replication is widely recognized. “But in the social sciences and behavioral sciences, replication is not common,” she noted in San Diego in February at the annual meeting of the American Association for the Advancement of Science. “This is a sad situation.”
  • Most critics of standard statistics advocate the Bayesian approach to statistical reasoning, a methodology that derives from a theorem credited to Bayes, an 18th century English clergyman. His approach uses similar math, but requires the added twist of a “prior probability” — in essence, an informed guess about the expected probability of something in advance of the study. Often this prior probability is more than a mere guess — it could be based, for instance, on previous studies.
  • it basically just reflects the need to include previous knowledge when drawing conclusions from new observations. To infer the odds that a barking dog is hungry, for instance, it is not enough to know how often the dog barks when well-fed. You also need to know how often it eats — in order to calculate the prior probability of being hungry. Bayesian math combines a prior probability with observed data to produce an estimate of the likelihood of the hunger hypothesis. “A scientific hypothesis cannot be properly assessed solely by reference to the observational data,” but only by viewing the data in light of prior belief in the hypothesis, wrote George Diamond and Sanjay Kaul of UCLA’s School of Medicine in 2004 in the Journal of the American College of Cardiology. “Bayes’ theorem is ... a logically consistent, mathematically valid, and intuitive way to draw inferences about the hypothesis.” (See Box 4)
  • In many real-life contexts, Bayesian methods do produce the best answers to important questions. In medical diagnoses, for instance, the likelihood that a test for a disease is correct depends on the prevalence of the disease in the population, a factor that Bayesian math would take into account. (A worked example with assumed prevalence and test accuracy appears after this list.)
  • But Bayesian methods introduce a confusion into the actual meaning of the mathematical concept of “probability” in the real world. Standard or “frequentist” statistics treat probabilities as objective realities; Bayesians treat probabilities as “degrees of belief” based in part on a personal assessment or subjective decision about what to include in the calculation. That’s a tough placebo to swallow for scientists wedded to the “objective” ideal of standard statistics. “Subjective prior beliefs are anathema to the frequentist, who relies instead on a series of ad hoc algorithms that maintain the facade of scientific objectivity,” Diamond and Kaul wrote. Conflict between frequentists and Bayesians has been ongoing for two centuries. So science’s marriage to mathematics seems to entail some irreconcilable differences. Whether the future holds a fruitful reconciliation or an ugly separation may depend on forging a shared understanding of probability. “What does probability mean in real life?” the statistician David Salsburg asked in his 2001 book The Lady Tasting Tea. “This problem is still unsolved, and ... if it remains unsolved, the whole of the statistical approach to science may come crashing down from the weight of its own inconsistencies.”
  • Odds Are, It's Wrong: Science fails to face the shortcomings of statistics
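As referenced from the Fisher annotation above, a minimal sketch of the standard significance test: assume the 'no effect' hypothesis, compute a P value for invented crop-yield numbers, and apply the arbitrary .05 cutoff.

```python
# Fisher-style significance test on invented crop-yield data.
import numpy as np
from scipy import stats

fertilized   = np.array([31.2, 29.8, 33.5, 30.9, 32.4, 31.7, 30.1, 33.0])
unfertilized = np.array([29.0, 28.4, 31.1, 29.6, 30.2, 28.8, 29.9, 30.5])

t, p = stats.ttest_ind(fertilized, unfertilized)
print(f"P value = {p:.3f}")
if p < 0.05:
    print("'Statistically significant' by Fisher's arbitrary 0.05 cutoff")
else:
    print("Not significant, which proves neither that the effect is absent "
          "nor that the test simply lacked power")
```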
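A numerical illustration of the 'transposed conditional' discussed above: even when every test is performed correctly, the share of 'significant' findings that are flukes depends on how often the hypotheses being tested are true. The base rate, power, and threshold below are assumptions chosen only to make the point.

```python
# How often is a p < .05 result a fluke? It depends on the base rate of real effects.
# All rates below are assumptions chosen for illustration.
alpha = 0.05          # significance threshold
power = 0.80          # chance of detecting a real effect when it exists
base_rate = 0.10      # assume only 10% of tested hypotheses are actually true

true_hits    = base_rate * power               # real effects correctly detected
false_alarms = (1 - base_rate) * alpha         # nulls that fluke past p < .05

share_false = false_alarms / (true_hits + false_alarms)
print(f"Share of 'significant' findings that are false: {share_false:.0%}")
# ~36% here, far more than the 5% a naive reading of "p < .05" suggests.
```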
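As referenced from the note on statistical versus practical significance, this sketch shows a clinically negligible difference clearing the .05 bar once the sample is large enough; all numbers are invented.

```python
# A tiny effect becomes "statistically significant" once the sample is huge.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 500_000                                 # an enormous trial arm
old_drug = rng.normal(10.00, 3.0, n)        # symptom-improvement scores
new_drug = rng.normal(10.02, 3.0, n)        # new drug adds a negligible 0.02 points

t, p = stats.ttest_ind(new_drug, old_drug)
print(f"p = {p:.2g}   (typically below .05 at this sample size)")
print(f"difference in means = {new_drug.mean() - old_drug.mean():.3f} points")
# Statistically detectable, clinically meaningless.
```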
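A simulation of the multiplicity problem, echoing the 85-gene-variant example earlier in this list: test many hypotheses that are all truly null and roughly 5 percent come up 'significant' anyway. The group sizes mirror the JAMA example, but the data are simulated, not the study's.

```python
# Test 85 gene variants that in truth do nothing; count spurious "hits" at p < .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_variants, cases, controls = 85, 811, 650   # sizes echo the example; data are simulated

false_hits = 0
for _ in range(n_variants):
    freq = rng.uniform(0.1, 0.4)             # a common variant, same frequency in both groups
    case_carriers    = rng.binomial(cases, freq)
    control_carriers = rng.binomial(controls, freq)
    table = [[case_carriers, cases - case_carriers],
             [control_carriers, controls - control_carriers]]
    chi2, p, dof, expected = stats.chi2_contingency(table)
    false_hits += p < 0.05

print(f"'Significant' variants despite no real effect: {false_hits} of {n_variants}")
# Expect about 85 * 0.05 ≈ 4, exactly the sort of number "to be expected by chance".
```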
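A worked example of the Bayesian point above: how strongly a positive diagnostic test indicates disease depends on the prevalence of the disease (the prior). The prevalence, sensitivity, and specificity are assumed values for illustration.

```python
# Bayes' theorem for a diagnostic test: how likely is disease given a positive result?
prevalence  = 0.01   # assumed: 1% of the population has the disease (the prior)
sensitivity = 0.95   # P(test positive | disease)
specificity = 0.95   # P(test negative | no disease)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(f"P(disease | positive test) = {p_disease_given_positive:.1%}")
# ~16%: the prior (prevalence) dominates, which is exactly the information
# a bare "significant result" ignores.
```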
Weiye Loh

Expectations can cancel out the benefit of pain drugs

  • People who don't believe their pain medicine will work can actually reduce or even cancel out the effectiveness of the drug, and images of their brains show how they are doing it, scientists said.
  • Researchers from Britain and Germany used brain scans to map how a person's feelings and past experiences can influence the effectiveness of medicines, and found that a powerful painkilling drug with a true biological effect can appear not to be working if a patient has been primed to expect it to fail.
  • By contrast, positive expectations about the treatment doubled the natural physiological or biochemical effect of an opioid drug among 22 healthy volunteers in the study.
  • ...3 more annotations...
  • "The brain imaging is telling us that patients really are switching on and off parts of their brains through the mechanisms of expectation -- positive and negative," said Irene Tracy of Britain's Oxford University, who led the research. "(The effect of expectations) is powerful enough to give real added benefits of the drug, and unfortunately it is also very capable of overriding the true analgesic effect." The placebo effect is the real benefit seen when patients are given dummy treatments but believe they will do them good. The nocebo effect is the opposite, when patients get real negative effects when they have doubts about a treatment.
  • For their study, the scientists used the drug remifentanil, a potent ultra short-acting synthetic opioid painkiller which is marketed by drugmakers GlaxoSmithKline and Abbott as Ultiva. The study was published in the Science Translational Medicine journal on Wednesday. Volunteers were put in an MRI scanner and had heat applied to one leg. They were asked to rate pain on a 1 to 100 scale. Unknown to the volunteers, the researchers started giving the drug via infusion to see what effects there would be when the volunteers had no knowledge or expectation of treatment. The average initial pain rating of 66 went down to 55. The volunteers were then told they would now start to get the drug, although no change was actually made and they just continued receiving the opioid at the same dose. The average pain ratings dropped further to 39. The volunteers were then told the drug had been stopped and warned that there may be an increase in pain. In reality, the drug was still being given at the same dose, but their pain intensity increased to 64 -- meaning the pain was almost as bad as it had been at the beginning, before they had had any drug.
  • Tracey said there may be lessons for the design of clinical trials, which often compare an experimental drug against a dummy pill to see if there is any effect beyond the placebo effect. "We should control for the effect of people's expectations on the results of any clinical trial," she said. "At the very least we should make sure we minimize any negative expectations to make sure we're not masking true efficacy in a trial drug."
Weiye Loh

Science-Based Medicine » Skepticism versus nihilism about cancer and science-...

  • I’m a John Ioannidis convert, and I accept that there is a lot of medical literature that is erroneous. (Just search for Dr. Ioannidis’ last name on this blog, and you’ll find copious posts praising him and discussing his work.) In fact, as I’ve pointed out, most medical researchers instinctively know that most new scientific findings will not hold up to scrutiny, which is why we rarely accept the results of a single study, except in unusual circumstances, as being enough to change practice. I also have pointed out many times that this is not necessarily a bad thing. Replication is key to verification of scientific findings, and more often than not provocative scientific findings are not replicated. Does that mean they shouldn’t be published?
  • As for pseudoscience, I’m half tempted to agree with Dr. Spector, but just not in the way he thinks. Unfortunately, over the last 20 years or so, there has been an increasing amount of pseudoscience in the medical literature in the form of “complementary and alternative medicine” (CAM) studies of highly improbable remedies or even virtually impossible ones (i.e., homeopathy). However, that does not appear to be what Dr. Spector is talking about, which is why I looked up his references. The second reference is to an SI article from 2009 entitled Science and Pseudoscience in Adult Nutrition Research and Practice. There, and only there, did I find out just what it is that Dr. Spector apparently means by “pseudoscience”: By pseudoscience, I mean the use of inappropriate methods that frequently yield wrong or misleading answers for the type of question asked. In nutrition research, such methods also often misuse statistical evaluations.
  • Dr. Spector doesn’t really know the difference between inadequately rigorous science and pseudoscience! Now, don’t get me wrong. I know that it’s not always easy to distinguish science from pseudoscience, especially at the fringes, but in general bad science has to go a lot further than Dr. Spector thinks to merit the term “pseudoscience.” It is clear (to me, at least) from his articles that Dr. Spector throws the term “pseudoscience” around rather more loosely than he should, using it as a pejorative for any clinical science less rigorous than a randomized, double-blind, placebo-controlled trial that meets FDA standards for approval of a drug (his pharma background coming to the fore, no doubt). Pseudoscience, Dr. Spector. You keep using that word. I do not think it means what you think it means. Indeed, I almost get the impression from his articles that Dr. Spector views any study that doesn’t reach FDA-level standards for drug approval to be pseudoscience.
  • ...4 more annotations...
  • Medical science, when it works well, tends to progress from basic science, to small pilot studies, to larger randomized studies, and then–only then–to those big, rigorous, insanely expensive randomized, double-blind, placebo-controlled trials. Dr. Spector mentions hierarchies of evidence, but he seems to fall into a false dichotomy, namely that if it’s not Level I evidence, it’s crap. The problem is, as Mark pointed out, in medicine we often don’t have Level I evidence for many questions. Indeed, for some questions, we will never have Level I evidence. Clinical medicine involves making decisions in the midst of uncertainty, sometimes extreme uncertainty.
  • Dr. Spector then proceeds to paint a picture of reckless physicians proceeding on crappy studies to pump women full of hormones. Actually, it was more than a bit more complicated than that. That was the time when I was in my medical training, and I remember the discussions we had regarding the strength (or lack thereof) of the epidemiological data and the lack of good RCTs looking at HRT. I also remember that nothing works as well to relieve menopausal symptoms as HRT, an observation we have been reminded of again since 2003, which is the year when the first big study came out implicating HRT in increasing the risk of breast cancer (more later).
  • I found a rather fascinating editorial in the New England Journal of Medicine from more than 20 years ago that discussed the state of the evidence back then with regard to estrogen and breast cancer: Evidence that estrogen increases the risk of breast cancer has been surprisingly difficult to obtain. Clinical and epidemiologic studies and studies in animals strongly suggest that endogenous estrogen plays a part in causing breast cancer. If so, exogenous estrogen should be a potent promoter of breast cancer. Although more than 20 case–control and prospective studies of the relation of breast cancer and noncontraceptive estrogen use have failed to demonstrate the expected association, relatively few women in these studies used estrogen for extended periods. Studies of the use of diethylstilbestrol and oral contraceptives suggest that a long exposure or latency may be necessary to show any association between hormone use and breast cancer. In the Swedish study, only six years of follow-up was needed to demonstrate an increased risk of breast cancer with the postmenopausal use of estradiol. It should be noted, however, that half the women in the subgroup that provided detailed data on the duration of hormone use had taken estrogen for many years before their base-line prescription status was defined. The duration of estrogen exposure in these women before the diagnosis of breast cancer was probably seriously underestimated; a short latency cannot be attributed to estradiol on the basis of these data. Other recent studies of the use of noncontraceptive estrogen suggest a slightly increased risk of breast cancer after 15 to 20 years’ use.
  • even now, the evidence is conflicting regarding HRT and breast cancer, with the preponderance of evidence suggesting that mixed HRT (estrogen and progestin) significantly increases the risk of breast cancer, while estrogen-alone HRT very well might not increase the risk of breast cancer at all or (more likely) only very little. Indeed, I was just at a conference all day Saturday where data demonstrating this very point were discussed by one of the speakers. None of this stops Dr. Spector from categorically labeling estrogen as a “carcinogen that causes breast cancers that kill women.” Maybe. Maybe not. It’s actually not that clear. The problem, of course, is that, consistent with the first primary reports of WHI results, the preponderance of evidence finding health risks due to HRT has indicted the combined progestin/estrogen combinations as unsafe.
Weiye Loh

Skepticblog » The Decline Effect

  • The first group are those with an overly simplistic or naive sense of how science functions. This is a view of science similar to those films created in the 1950s and meant to be watched by students, with the jaunty music playing in the background. This view generally respects science, but has a significant underappreciation for the flaws and complexity of science as a human endeavor. Those with this view are easily scandalized by revelations of the messiness of science.
  • The second cluster is what I would call scientific skepticism – which combines a respect for science and empiricism as a method (really “the” method) for understanding the natural world, with a deep appreciation for all the myriad ways in which the endeavor of science can go wrong. Scientific skeptics, in fact, seek to formally understand the process of science as a human endeavor with all its flaws. It is therefore often skeptics pointing out phenomena such as publication bias, the placebo effect, the need for rigorous controls and blinding, and the many vagaries of statistical analysis. But at the end of the day, as complex and messy as the process of science is, a reliable picture of reality is slowly ground out.
  • The third group, often frustrating to scientific skeptics, are the science-deniers (for lack of a better term). They may take a postmodernist approach to science – science is just one narrative with no special relationship to the truth. Whatever you call it, what the science-deniers in essence do is describe all of the features of science that the skeptics do (sometimes annoyingly pretending that they are pointing these features out to skeptics) but then come to a different conclusion at the end – that science (essentially) does not work.
  • ...13 more annotations...
  • this third group – the science deniers – started out in the naive group, and then were so scandalized by the realization that science is a messy human endeavor that they leapt right to the nihilistic conclusion that science must therefore be bunk.
  • The article by Lehrer falls generally into this third category. He is discussing what has been called “the decline effect” – the fact that effect sizes in scientific studies tend to decrease over time, sometimes to nothing.
  • This term was first applied to the parapsychological literature, and was in fact proposed as a real phenomenon of ESP – that ESP effects literally decline over time. Skeptics have criticized this view as magical thinking and hopelessly naive – Occam’s razor favors the conclusion that it is the flawed measurement of ESP, not ESP itself, that is declining over time.
  • Lehrer, however, applies this idea to all of science, not just parapsychology. He writes: And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
  • Lehrer is ultimately referring to aspects of science that skeptics have been pointing out for years (as a way of discerning science from pseudoscience), but Lehrer takes it to the nihilistic conclusion that it is difficult to prove anything, and that ultimately “we still have to choose what to believe.” Bollocks!
  • Lehrer is describing the cutting edge or the fringe of science, and then acting as if it applies all the way down to the core. I think the problem is that there is so much scientific knowledge that we take for granted – so much so that we forget it is knowledge that derived from the scientific method, and at one point was not known.
  • It is telling that Lehrer uses as his primary examples of the decline effect studies from medicine, psychology, and ecology – areas where the signal to noise ratio is lowest in the sciences, because of the highly variable and complex human element. We don’t see as much of a decline effect in physics, for example, where phenomena are more objective and concrete.
  • If the truth itself does not “wear off”, as the headline of Lehrer’s article provocatively states, then what is responsible for this decline effect?
  • it is no surprise that effect sizes in preliminary studies tend to be positive. This can be explained on the basis of experimenter bias – scientists want to find positive results, and initial experiments are often flawed or less than rigorous. It takes time to figure out how to rigorously study a question, and so early studies will tend not to control for all the necessary variables. There is further publication bias in which positive studies tend to be published more than negative studies. (A small simulation of these effects appears after this list.)
  • Further, some preliminary research may be based upon chance observations – a false pattern based upon a quirky cluster of events. If these initial observations are used in the preliminary studies, then the statistical fluke will be carried forward. Later studies are then likely to exhibit a regression to the mean, or a return to more statistically likely results (which is exactly why you shouldn’t use initial data when replicating a result, but should use entirely fresh data – a mistake for which astrologers are infamous).
  • skeptics are frequently cautioning against new or preliminary scientific research. Don’t get excited by every new study touted in the lay press, or even by a university’s press release. Most new findings turn out to be wrong. In science, replication is king. Consensus and reliable conclusions are built upon multiple independent lines of evidence, replicated over time, all converging on one conclusion.
  • Lehrer does make some good points in his article, but they are points that skeptics are fond of making. In order to have a  mature and functional appreciation for the process and findings of science, it is necessary to understand how science works in the real world, as practiced by flawed scientists and scientific institutions. This is the skeptical message.
  • But at the same time reliable findings in science are possible, and happen frequently – when results can be replicated and when they fit into the expanding intricate weave of the picture of the natural world being generated by scientific investigation.
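As referenced from the note on experimenter and publication bias, a small simulation of the 'decline effect': when only statistically significant early studies are published, published effect sizes start out inflated, and unfiltered replications regress toward the true value. All parameters are invented.

```python
# A toy "decline effect": publication bias inflates early effect sizes,
# and unfiltered replications regress back toward the true value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
true_effect, n, sims = 0.2, 30, 5000

published_early, replications = [], []
for _ in range(sims):
    a = rng.normal(true_effect, 1.0, n)     # treated group
    b = rng.normal(0.0, 1.0, n)             # control group
    observed = a.mean() - b.mean()
    p = stats.ttest_ind(a, b).pvalue
    if p < 0.05:
        published_early.append(observed)    # only "positive" early studies get published
    replications.append(observed)           # later replications are reported regardless

print(f"true effect:                 {true_effect:.2f}")
print(f"mean published early effect: {np.mean(published_early):.2f}")   # inflated
print(f"mean replication effect:     {np.mean(replications):.2f}")      # close to the true value
```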
Weiye Loh

Information technology and economic change: The impact of the printing press | vox - Re...

  • Despite the revolutionary technological advance of the printing press in the 15th century, there is precious little economic evidence of its benefits. Using data on 200 European cities between 1450 and 1600, this column finds that economic growth was higher by as much as 60 percentage points in cities that adopted the technology.
  • Historians argue that the printing press was among the most revolutionary inventions in human history, responsible for a diffusion of knowledge and ideas, “dwarfing in scale anything which had occurred since the invention of writing” (Roberts 1996, p. 220). Yet economists have struggled to find any evidence of this information technology revolution in measures of aggregate productivity or per capita income (Clark 2001, Mokyr 2005). The historical data thus present us with a puzzle analogous to the famous Solow productivity paradox – that, until the mid-1990s, the data on macroeconomic productivity showed no effect of innovations in computer-based information technology.
  • In recent work (Dittmar 2010a), I examine the revolution in Renaissance information technology from a new perspective by assembling city-level data on the diffusion of the printing press in 15th-century Europe. The data record each city in which a printing press was established 1450-1500 – some 200 out of over 1,000 historic cities (see also an interview on this site, Dittmar 2010b). The research emphasises cities for three principal reasons. First, the printing press was an urban technology, producing for urban consumers. Second, cities were seedbeds for economic ideas and social groups that drove the emergence of modern growth. Third, city sizes were historically important indicators of economic prosperity, and broad-based city growth was associated with macroeconomic growth (Bairoch 1988, Acemoglu et al. 2005).
  • ...8 more annotations...
  • Figure 1 summarises the data and shows how printing diffused from Mainz 1450-1500. [Figure 1: The diffusion of the printing press]
  • City-level data on the adoption of the printing press can be exploited to examine two key questions: Was the new technology associated with city growth? And, if so, how large was the association? I find that cities in which printing presses were established 1450-1500 had no prior growth advantage, but subsequently grew far faster than similar cities without printing presses. My work uses a difference-in-differences estimation strategy to document the association between printing and city growth. The estimates suggest early adoption of the printing press was associated with a population growth advantage of 21 percentage points 1500-1600, when mean city growth was 30 percentage points. The difference-in-differences model shows that cities that adopted the printing press in the late 1400s had no prior growth advantage, but grew at least 35 percentage points more than similar non-adopting cities from 1500 to 1600. (A minimal difference-in-differences sketch on invented data appears after this list.)
  • The restrictions on diffusion meant that cities relatively close to Mainz were more likely to receive the technology, other things equal. Printing presses were established in 205 cities 1450-1500, but not in 40 of Europe’s 100 largest cities. Remarkably, regulatory barriers did not limit diffusion. Printing fell outside existing guild regulations and was not resisted by scribes, princes, or the Church (Neddermeyer 1997, Barbier 2006, Brady 2009).
  • Historians observe that printing diffused from Mainz in “concentric circles” (Barbier 2006). Distance from Mainz was significantly associated with early adoption of the printing press, but neither with city growth before the diffusion of printing nor with other observable determinants of subsequent growth. The geographic pattern of diffusion thus arguably allows us to identify exogenous variation in adoption. Exploiting distance from Mainz as an instrument for adoption, I find large and significant estimates of the relationship between the adoption of the printing press and city growth. I find a 60 percentage point growth advantage between 1500-1600.
  • The importance of distance from Mainz is supported by an exercise using “placebo” distances. When I employ distance from Venice, Amsterdam, London, or Wittenberg instead of distance from Mainz as the instrument, the estimated print effect is statistically insignificant.
  • Cities that adopted print media benefitted from positive spillovers in human capital accumulation and technological change broadly defined. These spillovers exerted an upward pressure on the returns to labour, made cities culturally dynamic, and attracted migrants. In the pre-industrial era, commerce was a more important source of urban wealth and income than tradable industrial production. Print media played a key role in the development of skills that were valuable to merchants. Following the invention of printing, European presses produced a stream of math textbooks used by students preparing for careers in business.
  • These and hundreds of similar texts worked students through problem sets concerned with calculating exchange rates, profit shares, and interest rates. Broadly, print media was also associated with the diffusion of cutting-edge business practice (such as book-keeping), literacy, and the social ascent of new professionals – merchants, lawyers, officials, doctors, and teachers.
  • The printing press was one of the greatest revolutions in information technology. The impact of the printing press is hard to identify in aggregate data. However, the diffusion of the technology was associated with extraordinary subsequent economic dynamism at the city level. European cities were seedbeds of ideas and business practices that drove the transition to modern growth. These facts suggest that the printing press had very far-reaching consequences through its impact on the development of cities.
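A minimal sketch of the difference-in-differences logic described in this item, using an invented four-city panel rather than Dittmar's data: compare pre-1500 and post-1500 growth for adopting and non-adopting cities, then take the difference of the differences.

```python
# Difference-in-differences on an invented mini-panel of city populations (thousands).
# Columns: city, adopted press by 1500?, population in 1450, 1500, 1600.
cities = [
    ("A", True,  20, 26, 40),
    ("B", True,  15, 19, 30),
    ("C", False, 20, 26, 32),
    ("D", False, 15, 19, 24),
]

def growth(p0, p1):
    return (p1 - p0) / p0

def mean(xs):
    return sum(xs) / len(xs)

pre_adopt  = mean([growth(c[2], c[3]) for c in cities if c[1]])
pre_other  = mean([growth(c[2], c[3]) for c in cities if not c[1]])
post_adopt = mean([growth(c[3], c[4]) for c in cities if c[1]])
post_other = mean([growth(c[3], c[4]) for c in cities if not c[1]])

did = (post_adopt - post_other) - (pre_adopt - pre_other)
print(f"pre-period growth gap:  {pre_adopt - pre_other:+.0%}")   # ~0: no prior advantage
print(f"post-period growth gap: {post_adopt - post_other:+.0%}")
print(f"difference-in-differences estimate: {did:+.0%}")
```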
Weiye Loh

SMA's confusing suggestion

  • Regardless of whether a physician is state-sanctioned, Western-trained or trained traditionally, he must be viewed with considerable caution if he does not practise methods of healing based on proper evidence by any standards.
  • SMA's suggestion that medical practitioners be allowed to refer to practitioners of traditional Chinese medicine (TCM) and acupuncturists is nothing short of wholesale endorsement of their methods. Most TCM practitioners perform acupuncture, which we now know is about as effective as a placebo. This alone should make us ask whether patients are served well by a referral to a TCM practitioner. What is especially curious is that while the SMA suggests letting doctors sign referrals to TCM practitioners, it does not encourage its members to do so and does not think doctors will do so widely. If that is SMA's view, why fiddle with the status quo and confuse doctors and the public with such mixed messages?
  • While tolerance precludes outright criticism by doctors of alternative medicine, we should not endorse non- evidence-based therapeutic modalities.