TOK Friends: Group items tagged acupuncture

Javier E

Acupuncture, Real or Fake, Eases Pain - Well Blog - NYTimes.com - 1 views

  • Fake acupuncture appears to work just as well for pain relief as the real thing, according to a new study of patients with knee arthritis. The findings, published in the September issue of the journal Arthritis Care & Research, are the latest to suggest that a powerful but little understood placebo effect may be at work when patients report benefits from acupuncture treatment.
  • The results don’t mean acupuncture doesn’t work, but they do suggest that the benefits of both real and fake acupuncture may have something to do with the way the body transmits or processes pain signals. Other studies have suggested that the prick of a needle around the area of injury or pain could create a “super-placebo” effect that alters the way the brain perceives and responds to pain.
Javier E

The decline effect and the scientific method : The New Yorker - 3 views

  • The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable.
  • This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.
  • If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe?
  • Schooler demonstrated that subjects shown a face and asked to describe it were much less likely to recognize the face when shown it later than those who had simply looked at it. Schooler called the phenomenon “verbal overshadowing.”
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out (a simulation sketch of this mechanism follows below). Schooler had deliberately re-run a version of Joseph Rhine’s extrasensory-perception experiments and watched an initially positive result fade with each repetition. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time.
  • Yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!”
  • This is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics.
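To make the regression-to-the-mean explanation concrete, here is a minimal simulation sketch (Python; the effect size, group size, and significance cutoff are all invented for illustration). Only studies that clear the significance bar get "published," so first-wave effects look inflated, and unfiltered replications fall back toward the true value:

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.1   # real but small effect, in standard-deviation units (assumed)
N = 30              # subjects per group in each study (assumed)
STUDIES = 10_000

def run_study():
    """Simulate one two-group study; return the observed effect and its z-score."""
    control = rng.normal(0.0, 1.0, N)
    treated = rng.normal(TRUE_EFFECT, 1.0, N)
    diff = treated.mean() - control.mean()
    se = np.sqrt(control.var(ddof=1) / N + treated.var(ddof=1) / N)
    return diff, diff / se

# First wave: only "significant" results make it into print.
published = [d for d, z in (run_study() for _ in range(STUDIES)) if z > 1.96]

# Second wave: each published finding is replicated once, with no filter.
replications = [run_study()[0] for _ in published]

print(f"true effect:             {TRUE_EFFECT:.2f}")
print(f"mean published effect:   {np.mean(published):.2f}")    # inflated by selection
print(f"mean replication effect: {np.mean(replications):.2f}") # regresses toward truth
```

The "decline" here is pure selection artifact: nothing about the effect changed between waves, only the filter through which results were seen.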
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance.
  • Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • Jennions argues that the decline effect is largely a product of publication bias, the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for.
  • Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments.
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • One of John Ioannidis’s most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • Richard Palmer, a biologist at the University of Alberta, suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel (see the simulation sketch below).
  • After Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.”
  • Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
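Palmer's funnel-graph diagnostic is easy to reproduce in simulation. The sketch below (Python with NumPy and Matplotlib; the true effect, study counts, and reporting rule are invented for illustration) draws an honest literature next to a selectively reported one, where small null studies stay in the file drawer:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
TRUE_EFFECT = 0.2                          # assumed true value every study estimates

n = rng.integers(10, 400, size=300)        # sample size of each simulated study
se = 1 / np.sqrt(n)                        # sampling error shrinks as n grows
observed = rng.normal(TRUE_EFFECT, se)     # each study's measured effect

fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(9, 4))

# Honest literature: small studies scatter widely, large ones cluster
# around the true value -- the symmetric funnel shape.
ax1.scatter(observed, n, s=10)
ax1.axvline(TRUE_EFFECT, linestyle="--")
ax1.set(title="All studies run", xlabel="observed effect", ylabel="sample size")

# Selective reporting: small studies appear only if "significant",
# so the null half of the funnel's wide base goes missing.
reported = (observed / se > 1.96) | (n > 200)
ax2.scatter(observed[reported], n[reported], s=10)
ax2.axvline(TRUE_EFFECT, linestyle="--")
ax2.set(title="Studies actually reported", xlabel="observed effect")

plt.tight_layout()
plt.show()
```

The skewed right-hand panel is exactly the signature Palmer found in the fluctuating-asymmetry literature: asymmetry at the bottom of the funnel, where sampling error is largest.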
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process.
  • “A lot of scientific measurement is really hard,” Leigh Simmons, a biologist at the University of Western Australia, told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • For Simmons, the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals.
  • The data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • The most troubling fact emerged when he looked at the test of replication in a fashionable subject, the genetics of differences in disease risk between men and women: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher.
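A minimal sketch of what significance chasing does to the false-positive rate: assume (purely for illustration) a null experiment with ten outcome measures, where the researcher reports whichever outcome happens to cross Fisher's p < 0.05 line.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
TRIALS = 2_000

def chase_significance(n=40, n_outcomes=10):
    """One null experiment: no real effect, but ten outcomes to choose from."""
    a = rng.normal(size=(n, n_outcomes))   # "control" group, pure noise
    b = rng.normal(size=(n, n_outcomes))   # "treatment" group, also pure noise
    pvals = stats.ttest_ind(a, b).pvalue   # one t-test per outcome measure
    return pvals.min() < 0.05              # report the outcome that "worked"

hits = sum(chase_significance() for _ in range(TRIALS))
print("false-positive rate, one pre-registered outcome: ~0.05 by construction")
print(f"false-positive rate when chasing significance:   {hits / TRIALS:.2f}")
```

With ten shots at the five-per-cent threshold, roughly 1 − 0.95^10 ≈ 40 per cent of purely null experiments yield a publishable "effect," one mechanism behind Sterling's ninety-seven-per-cent figure.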
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials.
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong.
  • “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies
  • That’s why Schooler argues that scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says.
  • The current “obsession” with replicability distracts from the real problem, which is faulty design.
  • “Every researcher should have to spell out, in advance, how many subjects they’re going to use, and what exactly they’re testing, and what constitutes a sufficient level of proof. We have the tools to be much more transparent about our experiments.”
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” he says.
  • scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • The disturbing implication of the Crabbe study (in which the neuroscientist John Crabbe gave genetically identical mice identical treatments in three different labs and got widely divergent results) is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand.
  • The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected
  • This suggests that the decline effect is actually a decline of illusion. While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that.
  • Many scientific theories continue to be considered true even after failing numerous experimental tests.
  • Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.)
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.)
  • The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe. ♦
Javier E

The Dangers of Pseudoscience - NYTimes.com - 0 views

  • The “demarcation problem” is the issue of what separates good science from bad science and pseudoscience (and everything in between). The problem is relevant for at least three reasons.
  • The first is philosophical: Demarcation is crucial to our pursuit of knowledge; its issues go to the core of debates on epistemology and of the nature of truth and discovery.
  • The second reason is civic: our society spends billions of tax dollars on scientific research, so it is important that we also have a good grasp of what constitutes money well spent in this regard.
  • Third, as an ethical matter, pseudoscience is not — contrary to popular belief — merely a harmless pastime of the gullible; it often threatens people’s welfare.
  • It is precisely in the area of medical treatments that the science-pseudoscience divide is most critical, and where the role of philosophers in clarifying things may be most relevant.
  • Some traditional Chinese remedies (like drinking fresh turtle blood to alleviate cold symptoms) may in fact work.
  • There is no question that some folk remedies do work. The active ingredient of aspirin, for example, is derived from willow bark, which had been known to have beneficial effects since the time of Hippocrates. There is also no mystery about how this happens: people have more or less randomly tried solutions to their health problems for millennia, sometimes stumbling upon something useful
  • What makes the use of aspirin “scientific,” however, is that we have validated its effectiveness through properly controlled trials, isolated the active ingredient, and understood the biochemical pathways through which it has its effects
  • In terms of empirical results, there are strong indications that acupuncture is effective for reducing chronic pain and nausea, but sham therapy, where needles are applied at random places, or are not even pierced through the skin, turns out to be equally effective (see for instance this recent study on the effect of acupuncture on post-chemotherapy chronic fatigue), thus seriously undermining talk of meridians and Qi lines.
  • The philosopher Stephen Asma at one point compares the current inaccessibility of Qi energy to the previous (until this year) inaccessibility of the famous Higgs boson.
  • But the analogy does not hold. The existence of the Higgs had been predicted on the basis of a very successful physical theory known as the Standard Model. This theory is not only exceedingly mathematically sophisticated, but it has been verified experimentally over and over again. The notion of Qi, again, is not really a theory in any meaningful sense of the word. It is just an evocative word to label a mysterious force.
  • Philosophers of science have long recognized that there is nothing wrong with positing unobservable entities per se, it’s a question of what work such entities actually do within a given theoretical-empirical framework. Qi and meridians don’t seem to do any, and that doesn’t seem to bother supporters and practitioners of Chinese medicine. But it ought to.
  • What’s the harm in believing in Qi and related notions, if in fact the proposed remedies seem to help?
  • First, we can incorporate whatever serendipitous discoveries emerge from folk medicine into modern scientific practice, as in the case of the willow bark turned aspirin. In this sense, there is no such thing as “alternative” medicine; there’s only stuff that works and stuff that doesn’t.
  • Second, if we are positing Qi and similar concepts, we are attempting to provide explanations for why some things work and others don’t. If these explanations are wrong, or unfounded as in the case of vacuous concepts like Qi, then we ought to correct or abandon them.
  • Pseudo-medical treatments often do not work, or are even positively harmful. If you take folk herbal “remedies,” for instance, while your body is fighting a serious infection, you may suffer severe, even fatal, consequences.
  • Indulging in a bit of pseudoscience in some instances may be relatively innocuous, but the problem is that doing so lowers your defenses against more dangerous delusions that are based on similar confusions and fallacies. For instance, you may expose yourself and your loved ones to harm because your pseudoscientific proclivities lead you to accept notions that have been scientifically disproved, like the increasingly (and worryingly) popular idea that vaccines cause autism.
  • Philosophers nowadays recognize that there is no sharp line dividing sense from nonsense, and moreover that doctrines starting out in one camp may over time evolve into the other. For example, alchemy was a (somewhat) legitimate science in the times of Newton and Boyle, but it is now firmly pseudoscientific (movements in the opposite direction, from full-blown pseudoscience to genuine science, are notably rare).
  • The verdict by philosopher Larry Laudan, echoed by Asma, that the demarcation problem is dead and buried, is not shared by most contemporary philosophers who have studied the subject.
  • The criterion of falsifiability, for example, is still a useful benchmark for distinguishing science and pseudoscience, as a first approximation. Asma’s own counterexample inadvertently shows this: the “cleverness” of astrologers in cherry-picking what counts as a confirmation of their theory is hardly a problem for the criterion of falsifiability, but rather a nice illustration of Popper’s basic insight: the bad habit of creative fudging and finagling with empirical data ultimately makes a theory impervious to refutation. And all pseudoscientists do it, from parapsychologists to creationists and 9/11 Truthers.
  • The borderlines between genuine science and pseudoscience may be fuzzy, but this should be even more of a call for careful distinctions, based on systematic facts and sound reasoning. To try a modicum of turtle blood here and a little aspirin there is not the hallmark of wisdom and even-mindedness. It is a dangerous gateway to superstition and irrationality.
kushnerha

A Placebo Treatment for Pain - The New York Times - 0 views

  • This phenomenon — in which someone feels better after receiving fake treatment — was once dismissed as an illusion. People who are ill often improve regardless of the treatment they receive. But neuroscientists are discovering that in some conditions, including pain, placebos create biological effects similar to those caused by drugs.
  • Taking a placebo painkiller dampens activity in pain-related areas of the brain and spinal cord, and triggers the release of endorphins, the natural pain-relieving chemicals that opioid drugs are designed to mimic. Even when we take a real painkiller, a big chunk of its effect is delivered not by any direct chemical action, but by our expectation that the drug will work. Studies show that widely used painkillers like morphine, buprenorphine and tramadol are markedly less effective if we don’t know we’re taking them.
  • Placebo effects in pain are so large, in fact, that drug manufacturers are finding it hard to beat them. Finding ways to minimize placebo effects in trials, for example by screening out those who are most susceptible, is now a big focus for research. But what if instead we seek to harness these effects? Placebos might ruin drug trials, but they also show us a new approach to treating pain.
  • It is unethical to deceive patients by prescribing fake treatments, of course. But there is evidence that people with some conditions benefit even if they know they are taking placebos. In a 2014 study that followed 459 migraine attacks in 66 patients, honestly labeled placebos provided significantly more pain relief than no treatment, and were nearly half as effective as the painkiller Maxalt.
  • With placebo responses in pain so high — and the risks of drugs so severe — why not prescribe a course of “honest” placebos for those who wish to try it, before proceeding, if necessary, to an active drug?
  • Another option is to employ alternative therapies, which through placebo responses can benefit patients even when there is no physical mode of action.
  • A key ingredient is expectation: The greater our belief that a treatment will work, the better we’ll respond.
  • Individual attitudes and experiences are important, as are cultural factors. Placebo effects are getting stronger in the United States, for example, though not elsewhere.
  • Likely explanations include a growing cultural belief in the effectiveness of painkillers — a result of direct-to-consumer advertising (illegal in most other countries) and perhaps the fact that so many Americans have taken these drugs in the past.
  • Trials show, for example, that strengthening patients’ positive expectations and reducing their anxiety during a variety of procedures, including minimally invasive surgery, while still being honest, can reduce the dose of painkillers required and cut complications.
  • Placebo studies also reveal the value of social interaction as a treatment for pain. Harvard researchers studied patients in pain from irritable bowel syndrome and found that 44 percent of those given sham acupuncture had adequate relief from their symptoms. If the person who performed the acupuncture was extra supportive and empathetic, however, that figure jumped to 62 percent.
  • Placebos tell us that pain is a complex mix of biological, psychological and social factors. We need to develop better drugs to treat it, but let’s also take more seriously the idea of relieving pain without them.
Javier E

A Place For Placebos « The Dish - 0 views

  • There’s virtually no scientific evidence that alternative medicine (anything from chiropractic care to acupuncture) has any curative benefit beyond a placebo effect. … However, there is one area where alternative medicine often trumps traditional medicine: stress reduction. And stress reduction can, of course, make a huge impact on people’s health. …
  • Maybe each of these activities (listening to high end audio gear, drinking high end wine, having needles inserted into your chakras) is really about ritualizing a sensory experience. By putting on headphones you know are high quality, or drinking expensive wine, or entering the chiropractor’s office, you are telling yourself, “I am going to focus on this moment. I am going to savor this.” It’s the act of savoring, rather than the savoring tool, that results in both happiness and a longer life.