TOK Friends / Group items tagged: pseudoscience

Javier E

The Dangers of Pseudoscience - NYTimes.com - 0 views

  • the “demarcation problem,” the issue of what separates good science from bad science and pseudoscience (and everything in between). The problem is relevant for at least three reasons.
  • The first is philosophical: Demarcation is crucial to our pursuit of knowledge; its issues go to the core of debates on epistemology and of the nature of truth and discovery.
  • The second reason is civic: our society spends billions of tax dollars on scientific research, so it is important that we also have a good grasp of what constitutes money well spent in this regard.
  • Third, as an ethical matter, pseudoscience is not — contrary to popular belief — merely a harmless pastime of the gullible; it often threatens people’s welfare, sometimes fatally so.
  • It is precisely in the area of medical treatments that the science-pseudoscience divide is most critical, and where the role of philosophers in clarifying things may be most relevant.
  • some traditional Chinese remedies (like drinking fresh turtle blood to alleviate cold symptoms) may in fact work
  • There is no question that some folk remedies do work. The active ingredient of aspirin, for example, is derived from willow bark, which had been known to have beneficial effects since the time of Hippocrates. There is also no mystery about how this happens: people have more or less randomly tried solutions to their health problems for millennia, sometimes stumbling upon something useful
  • What makes the use of aspirin “scientific,” however, is that we have validated its effectiveness through properly controlled trials, isolated the active ingredient, and understood the biochemical pathways through which it has its effects
  • In terms of empirical results, there are strong indications that acupuncture is effective for reducing chronic pain and nausea, but sham therapy, where needles are applied at random places, or are not even pierced through the skin, turns out to be equally effective (see for instance this recent study on the effect of acupuncture on post-chemotherapy chronic fatigue), thus seriously undermining talk of meridians and Qi lines. [see the trial-comparison sketch after this list]
  • Asma at one point compares the current inaccessibility of Qi energy to the previous (until this year) inaccessibility of the famous Higgs boson, a sub-atomic particle postulated by physicists to play a crucial role in literally holding the universe together.
  • But the analogy does not hold. The existence of the Higgs had been predicted on the basis of a very successful physical theory known as the Standard Model. This theory is not only exceedingly mathematically sophisticated, but it has been verified experimentally over and over again. The notion of Qi, again, is not really a theory in any meaningful sense of the word. It is just an evocative word to label a mysterious force
  • Philosophers of science have long recognized that there is nothing wrong with positing unobservable entities per se; it’s a question of what work such entities actually do within a given theoretical-empirical framework. Qi and meridians don’t seem to do any, and that doesn’t seem to bother supporters and practitioners of Chinese medicine. But it ought to.
  • what’s the harm in believing in Qi and related notions, if in fact the proposed remedies seem to help?
  • we can incorporate whatever serendipitous discoveries from folk medicine into modern scientific practice, as in the case of the willow bark turned aspirin. In this sense, there is no such thing as “alternative” medicine, there’s only stuff that works and stuff that doesn’t.
  • Second, if we are positing Qi and similar concepts, we are attempting to provide explanations for why some things work and others don’t. If these explanations are wrong, or unfounded as in the case of vacuous concepts like Qi, then we ought to correct or abandon them.
  • pseudo-medical treatments often do not work, or are even positively harmful. If you take folk herbal “remedies,” for instance, while your body is fighting a serious infection, you may suffer severe, even fatal, consequences.
  • Indulging in a bit of pseudoscience in some instances may be relatively innocuous, but the problem is that doing so lowers your defenses against more dangerous delusions that are based on similar confusions and fallacies. For instance, you may expose yourself and your loved ones to harm because your pseudoscientific proclivities lead you to accept notions that have been scientifically disproved, like the increasingly (and worryingly) popular idea that vaccines cause autism.
  • Philosophers nowadays recognize that there is no sharp line dividing sense from nonsense, and moreover that doctrines starting out in one camp may over time evolve into the other. For example, alchemy was a (somewhat) legitimate science in the times of Newton and Boyle, but it is now firmly pseudoscientific (movements in the opposite direction, from full-blown pseudoscience to genuine science, are notably rare).
  • The verdict by philosopher Larry Laudan, echoed by Asma, that the demarcation problem is dead and buried, is not shared by most contemporary philosophers who have studied the subject.
  • the criterion of falsifiability, for example, is still a useful benchmark for distinguishing science and pseudoscience, as a first approximation. Asma’s own counterexample inadvertently shows this: the “cleverness” of astrologers in cherry-picking what counts as a confirmation of their theory is hardly a problem for the criterion of falsifiability, but rather a nice illustration of Popper’s basic insight: the bad habit of creative fudging and finagling with empirical data ultimately makes a theory impervious to refutation. And all pseudoscientists do it, from parapsychologists to creationists and 9/11 Truthers.
  • The borderlines between genuine science and pseudoscience may be fuzzy, but this should be even more of a call for careful distinctions, based on systematic facts and sound reasoning. To try a modicum of turtle blood here and a little aspirin there is not the hallmark of wisdom and even-mindedness. It is a dangerous gateway to superstition and irrationality.
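
A toy illustration of the controlled-comparison logic in the aspirin and acupuncture bullets above: simulate a "real" treatment arm against a sham arm and compare their mean improvements. Every number here is invented, and the hand-rolled Welch t statistic is only a convenience; this is a sketch of the logic of a controlled trial, not a model of any real data.

    # Hypothetical sketch: comparing a treatment arm against a sham arm.
    # All numbers are invented for illustration.
    import random
    import statistics

    random.seed(42)

    def simulate_arm(n, mean_improvement, sd=2.0):
        """Simulated pain-score improvements for n patients."""
        return [random.gauss(mean_improvement, sd) for _ in range(n)]

    real = simulate_arm(100, mean_improvement=1.5)  # needles on "meridians"
    sham = simulate_arm(100, mean_improvement=1.5)  # needles at random spots

    def welch_t(a, b):
        """Welch's t statistic, computed by hand to avoid dependencies."""
        se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
        return (statistics.mean(a) - statistics.mean(b)) / se

    print(f"real mean {statistics.mean(real):.2f}, sham mean {statistics.mean(sham):.2f}, "
          f"t = {welch_t(real, sham):.2f}")
    # A t value near zero: both arms improved about equally, so the data
    # lend no support to the posited mechanism even though patients got better.
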
summertyler

The Dangers of Pseudoscience - 0 views

  • Philosophers of science have been preoccupied for a while with what they call the “demarcation problem,” the issue of what separates good science from bad science and pseudoscience (and everything in between).
  • Demarcation is crucial to our pursuit of knowledge; its issues go to the core of debates on epistemology and of the nature of truth and discovery
  • our society spends billions of tax dollars on scientific research, so it is important that we also have a good grasp of what constitutes money well spent in this regard
  • pseudoscience is not — contrary to popular belief — merely a harmless pastime of the gullible; it often threatens people’s welfare, sometimes fatally so
  • It is precisely in the area of medical treatments that the science-pseudoscience divide is most critical, and where the role of philosophers in clarifying things may be most relevant.
  •  
    Pseudoscience is dangerous for three reasons: a philosophical, a civic, and an ethical one.
Ellie McGinnis

The Dangers of Pseudoscience - NYTimes.com - 0 views

  • “demarcation problem,” the issue of what separates good science from bad science and pseudoscience
  • Demarcation is crucial to our pursuit of knowledge; its issues go to the core of debates on epistemology and of the nature of truth and discovery
  • our society spends billions of tax dollars on scientific research, so it is important that we also have a good grasp of what constitutes money well spent in this regard
  • pseudoscience is not — contrary to popular belief — merely a harmless pastime of the gullible; it often threatens people’s welfare, sometimes fatally so
  • in the area of medical treatments that the science-pseudoscience divide is most critical, and where the role of philosophers in clarifying things may be most relevant
  • What makes the use of aspirin “scientific,” however, is that we have validated its effectiveness through properly controlled trials, isolated the active ingredient, and understood the biochemical pathways through which it has its effects
  • Popper’s basic insight: the bad habit of creative fudging and finagling with empirical data ultimately makes a theory impervious to refutation. And all pseudoscientists do it, from parapsychologists to creationists and 9/11 Truthers.
  • Philosophers of science have long recognized that there is nothing wrong with positing unobservable entities per se, it’s a question of what work such entities actually do within a given theoretical-empirical framework.
  • we are attempting to provide explanations for why some things work and others don’t. If these explanations are wrong, or unfounded as in the case of vacuous concepts like Qi, then we ought to correct or abandon them.
  • no sharp line dividing sense from nonsense, and moreover that doctrines starting out in one camp may over time evolve into the other.
  • inaccessibility of the famous Higgs boson, a sub-atomic particle postulated by physicists to play a crucial role in literally holding the universe together (it provides mass to all other particles)
  • The open-ended nature of science means that there is nothing sacrosanct in either its results or its methods.
  • The borderlines between genuine science and pseudoscience may be fuzzy, but this should be even more of a call for careful distinctions, based on systematic facts and sound reasoning
anonymous

Hearing ghost voices relies on pseudoscience and fallibility of human perception - 0 views

  • Hearing ghost voices relies on pseudoscience and fallibility of human perception
  • Nontrivial numbers of Americans believe in the paranormal.
  • Part of the attraction of the audio recorder for paranormal researchers is its apparent objectivity. How could a skeptic refute the authenticity of a spirit captured by an unbiased technical instrument? To the believers, EVP seem like incontrovertible evidence of communications from beyond.
  • But recent research in my lab suggested that people don’t agree much about what, if anything, they hear in the EVP sounds – a result readily explained by the fallibility of human perception.
  • In some instances, alleged EVP are the voices of the investigators or interference from radio transmissions – problems that indicate shoddy data collection practices. Other research, however, has suggested that EVP have been captured under acoustically controlled circumstances in recording studios.
  • Research in mainstream psychology has shown that people will readily perceive words in strings of nonsensical speech sounds.
  • People’s expectations about what they’re supposed to hear can result in the illusory perception of tones, nature sounds, machine sounds, and even voices when only acoustic white noise – like the sound of a detuned radio – exists.
  • Interpretations of speech in noise – a situation similar to EVP where the alleged voice is difficult to discern – can shift entirely based upon what the listener expects to hear.
  • In my lab, we recently conducted an experiment to examine how expectations might influence the perception of purported EVP
  • So suggesting a paranormal research topic mattered only when the audio was ambiguous.
  • when people said they heard a voice in the EVP, only 13% agreed about exactly what the voice said. To compare, 95% of people on average agreed about what the voice said when they heard actual speech. [see the agreement sketch after this list]
  • These findings suggest that paranormal researchers should not use their own subjective judgments to confirm the contents of EVP.
  • But perhaps most importantly, we showed that the mere suggestion of a paranormal research context made people more likely to hear voices in ambiguous stimuli, although they couldn’t agree on what the voices were saying.
  • pareidolia – the tendency to perceive human characteristics in meaningless perceptual patterns
  • There are many visual examples of pareidolia – things like seeing human faces in everyday objects (such as Jesus in a piece of toast).
  • Research from cognitive psychology has shown that paranormal believers may be especially prone to misperceiving chance events.
  • Another characteristic of pseudoscience is a lack of integration with related areas of inquiry. There is a rich history of using experimental methods to examine auditory perception, yet EVP enthusiasts are either unaware or willfully ignorant of this relevant work.
  • parsimony – the idea that the simplest explanation is preferred
  • we need a theory to account for how and why a human listener sometimes misperceives ambiguous stimuli.
  • In fact, this very tendency is one of many well-documented cognitive shortcuts that may have adaptive value. A voice may indicate the presence of a potential mate or foe, so it may be useful to err on the side of perceiving agency in ambiguous auditory stimuli.
  • Currently, there is only limited, tentative evidence to link exposure to pseudoscience on television to pseudoscientific beliefs. Still, one study showed that people find paranormal research to be more credible and scientific when it is shown using technological tools such as recording devices. Other evidence has suggested that popular opinion may outweigh scientific credibility when people evaluate pseudoscientific claims.
  •  
    Why do we hear voices or weird noises and think of spooky stories or ghosts? It all has to do with how we perceive the auditory information we take in and how we have been influenced to interpret it.
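
The agreement figures quoted above (13% vs. 95%) can be made concrete with one simple scoring rule: the share of listeners whose transcription matches the most common one. The responses below are invented, and the published study's actual scoring procedure may differ; this is only a sketch of the idea.

    # Hypothetical sketch: "exact agreement" about what a voice said.
    from collections import Counter

    def exact_agreement(responses):
        """Share of listeners giving the single most common transcription."""
        counts = Counter(r.strip().lower() for r in responses)
        return counts.most_common(1)[0][1] / len(responses)

    # Ambiguous EVP audio: almost everyone hears something different.
    evp = ["help me", "hello", "leave", "help", "el paso",
           "hold me", "hell no", "leave me"]
    # Clear speech: near-total agreement.
    speech = ["hello there"] * 19 + ["hello bear"]

    print(f"EVP agreement:    {exact_agreement(evp):.0%}")
    print(f"speech agreement: {exact_agreement(speech):.0%}")
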
katedriscoll

The Quest to Tell Science from Pseudoscience | Boston Review - 0 views

  • Of the answers that have been proposed, Popper’s own criterion—falsifiability—remains the most commonly invoked, despite serious criticism from both philosophers and scientists. These attacks fatally weakened Popper’s proposal, yet its persistence over a century of debates helps to illustrate the challenge of demarcation—a problem no less central today than it was when Popper broached it
  • pper’s answer emerged. Popper was born just after the turn of the twentieth century in Vienna—the birthplace of psychoanalysis—and received his doctorate in psychology in 1928. In the early 1920s Popper volunteered in the clinics of Alfred Adler, who had split with his former mentor, the creator of psychoanalysis: Sigmund Freud. Precocious interest in psychoanalysis, and his subsequent rejection of it, were crucial in Popper’s later formulation of his philosophical views on science.
  • At first, Popper was quite taken with logical empiricism, but he would diverge from the mainstream of the movement and develop his own framework for understanding scientific thought in his two influential books The Logic of Scientific Discovery (1934, revised and translated to English in 1959) and Conjectures and Refutations (1962). Popper claimed to have formulated his initial ideas about demarcation in 1919, when he was seventeen years old. He had, he writes, “wished to distinguish between science and pseudo-science; knowing very well that science often errs, and that pseudoscience may happen to stumble on the truth.”
Javier E

Anti-vaccine activists, 9/11 deniers, and Google's social search. - Slate Magazine - 1 views

  • democratization of information-gathering—when accompanied by smart institutional and technological arrangements—has been tremendously useful, giving us Wikipedia and Twitter. But it has also spawned thousands of sites that undermine scientific consensus, overturn well-established facts, and promote conspiracy theories
  • Meanwhile, the move toward social search may further insulate regular visitors to such sites; discovering even more links found by their equally paranoid friends will hardly enlighten them.
  • Initially, the Internet helped them find and recruit like-minded individuals and promote events and petitions favorable to their causes. However, as so much of our public life has shifted online, they have branched out into manipulating search engines, editing Wikipedia entries, harassing scientists who oppose whatever pet theory they happen to believe in, and amassing digitized scraps of "evidence" that they proudly present to potential recruits.
  • The Vaccine article contains a number of important insights. First, the anti-vaccination cohort likes to move the goal posts: As scientists debunked the link between autism and mercury (once present in some childhood inoculations but now found mainly in certain flu vaccines), most activists dropped their mercury theory and pointed instead to aluminum or said that kids received “too many too soon.”
  • Second, it isn't clear whether scientists can "discredit" the movement's false claims at all: Its members are skeptical of what scientists have to say—not least because they suspect hidden connections between academia and pharmaceutical companies that manufacture the vaccines.
  • mere exposure to the current state of the scientific consensus will not sway hard-core opponents of vaccination. They are too vested in upholding their contrarian theories; some have consulting and speaking gigs to lose while others simply enjoy a sense of belonging to a community, no matter how kooky
  • attempts to influence communities that embrace pseudoscience or conspiracy theories by having independent experts or, worse, government workers join them—the much-debated antidote of “cognitive infiltration” proposed by Cass Sunstein (who now heads the Office of Information and Regulatory Affairs in the White House)—are unlikely to work.
  • perhaps, it's time to accept that many of these communities aren't going to lose core members regardless of how much science or evidence is poured on them. Instead, resources should go into thwarting their growth by targeting their potential—rather than existent—members.
  • Given that censorship of search engines is not an appealing or even particularly viable option, what can be done to ensure that users are made aware that all the pseudoscientific advice they are likely to encounter may not be backed by science?
  • One is to train our browsers to flag information that may be suspicious or disputed. Thus, every time a claim like "vaccination leads to autism" appears in our browser, that sentence would be flagged as disputed. [a toy sketch of this idea follows this list]
  • The second—and not necessarily mutually exclusive—option is to nudge search engines to take more responsibility for their index and exercise a heavier curatorial control in presenting search results for issues like "global warming" or "vaccination." Google already has a list of search queries that send most traffic to sites that trade in pseudoscience and conspiracy theories; why not treat them differently than normal queries? Thus, whenever users are presented with search results that are likely to send them to sites run by pseudoscientists or conspiracy theorists, Google may simply display a huge red banner asking users to exercise caution and check a previously generated list of authoritative resources before making up their minds.
  • In more than a dozen countries Google already does something similar for users who are searching for terms like "ways to die" or "suicidal thoughts" by placing a prominent red note urging them to call the National Suicide Prevention Hotline.
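
A minimal sketch of the browser-flagging idea in the bullet above. The claim list, the example.org links, and the naive substring matching are all hypothetical; matching real claims in the wild is a far harder problem than this suggests.

    # Hypothetical sketch of flagging disputed claims in page text.
    # Claims, links, and the matching rule are invented for illustration.
    DISPUTED_CLAIMS = {
        "vaccination leads to autism": "https://example.org/vaccine-evidence",
        "vaccines cause autism": "https://example.org/vaccine-evidence",
    }

    def flag_disputed(page_text):
        """Return (claim, evidence-link) pairs found in the page text."""
        lowered = page_text.lower()
        return [(claim, url) for claim, url in DISPUTED_CLAIMS.items()
                if claim in lowered]

    page = "Some sites still insist that vaccines cause autism."
    for claim, url in flag_disputed(page):
        print(f'flagged: "{claim}" is disputed; see {url}')
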
sissij

How Inoculation Can Help Prevent Pseudoscience | Big Think - 2 views

  • It is easier to fool a person than it is to convince a person that they’ve been fooled. This is one of the great curses of humanity.
  • Given the incredible amount of information we process each day, it is difficult for any of us to critically analyze all of it.
  • The state of Minnesota is battling a measles outbreak caused by anti-vaccination propaganda. And discussion over the effects of misinformation on recent elections in Austria, Germany, and the United States is still ongoing.
  • A recent set of experiments shows us that there is a way to help reduce the effects of misinformation on people: the authors amusingly call it the “inoculation.”
  • which even then were heavily influenced by their pre-existing worldviews.
  • teaching about misconceptions leads to greater learning overall than just telling somebody the truth.
  •  
    Fake news and alternative facts are things that mess up our perception a lot. As we learned in TOK, there are a lot of fallacies in human reasoning. People tend to stick with their pre-existing worldviews or ideas. I found it very interesting that people can reduce the effect of misinformation by having an "inoculation". I think our TOK class is like an "inoculation" in that it asks us questions and challenges us with the idea that things might not be as definite or absolute as they seem. TOK class can definitely help us become immune to fake news. --Sissi (5/25/2017)
Javier E

The decline effect and the scientific method : The New Yorker - 3 views

  • The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard against the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable.
  • This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.
  • If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe?
  • Schooler demonstrated that subjects shown a face and asked to describe it were much less likely to recognize the face when shown it later than those who had simply looked at it. Schooler called the phenomenon “verbal overshadowing.”
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time.
  • yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance.
  • Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. [a publication-bias simulation follows this list]
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel. [see the funnel sketch after this list]
  • after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.”
  • Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process.
  • “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals.
  • the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher.
  • For Simmons, the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong.
  • “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies
  • That’s why Schooler argues that scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,”
  • The current “obsession” with replicability distracts from the real problem, which is faulty design.
  • “Every researcher should have to spell out, in advance, how many subjects they’re going to use, and what exactly they’re testing, and what constitutes a sufficient level of proof. We have the tools to be much more transparent about our experiments.”
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,”
  • scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand.
  • The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected
  • This suggests that the decline effect is actually a decline of illusion. While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that.
  • Many scientific theories continue to be considered true even after failing numerous experimental tests.
  • Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.)
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.)
  • The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe. ♦
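
The decline-effect mechanics described in the bullets on publication bias and regression to the mean can be reproduced in a short simulation, sketched below with invented parameters: run many small studies of a modest true effect, "publish" only the significant ones, then replicate the published findings without any filter.

    # Hypothetical sketch: publication bias plus regression to the mean
    # yields a "decline effect". All parameters are invented.
    import random
    import statistics

    random.seed(1)
    TRUE_EFFECT, SD, N = 0.2, 1.0, 30        # modest true effect, small studies
    CRIT = 1.96 * SD / (N ** 0.5)            # rough 5% cutoff on the study mean

    def run_study():
        """Observed mean effect in one small study."""
        return statistics.mean(random.gauss(TRUE_EFFECT, SD) for _ in range(N))

    originals = [run_study() for _ in range(2000)]
    published = [e for e in originals if e > CRIT]     # only "significant" results
    replications = [run_study() for _ in published]    # honest, unfiltered re-runs

    print(f"true effect:           {TRUE_EFFECT}")
    print(f"published (filtered):  {statistics.mean(published):.2f}")
    print(f"replications (honest): {statistics.mean(replications):.2f}")
    # Published effects overshoot the truth; replications fall back toward it,
    # so much of the "decline" is an artifact of what got published first.
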
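Palmer's funnel-graph reasoning admits a similar sketch: when only results that clear an "interesting" bar get reported, small studies show inflated effects while large studies rarely clear the bar and stay near the true value. Again, every number below is an assumption for illustration, not a reconstruction of Palmer's data.

    # Hypothetical sketch of the funnel-graph logic under selective reporting.
    import random
    import statistics

    random.seed(2)

    def observed_effect(n, true_effect=0.0):
        return statistics.mean(random.gauss(true_effect, 1.0) for _ in range(n))

    def published(n, bar=0.1, trials=3000):
        """Keep only studies whose observed effect clears the 'interesting' bar."""
        kept = [e for e in (observed_effect(n) for _ in range(trials)) if e > bar]
        return len(kept), statistics.mean(kept)

    for n in (10, 50, 500):
        k, mean_e = published(n)
        print(f"n={n:4d}: {k:4d} of 3000 published, mean reported effect {mean_e:.2f}")
    # Small-n studies clear the bar often, with inflated effects; large-n
    # studies rarely do, so the bottom of the plotted funnel skews positive.
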
sissij

Believe It Or Not, Most Published Research Findings Are Probably False | Big Think - 0 views

  • but this has come with the side effect of a toxic combination of confirmation bias and Google, enabling us to easily find a study to support whatever it is that we already believe, without bothering to so much as look at research that might challenge our position
  • Indeed, this is a statement oft-used by fans of pseudoscience who take the claim at face value, without applying the principles behind it to their own evidence.
  • at present, most published findings are likely to be incorrect.
  • If you use p=0.05 to suggest that you have made a discovery, you will be wrong at least 30 percent of the time. [a worked example follows this list]
  • The problem is being tackled head-on in the field of psychology, which was shaken by the Stapel affair, in which one Dutch researcher fabricated data in over 50 fraudulent papers before being detected.
  • a problem known as publication bias or the file drawer problem.
  • The smaller the effect size, the less likely the findings are to be true.
  • The greater the number and the lesser the selection of tested relationships, the less likely the findings are to be true.
  • For scientists, the discussion over how to resolve the problem is rapidly heating up with calls for big changes to how researchers register, conduct, and publish research and a growing chorus from hundreds of global scientific organizations demanding that all clinical trials are published.
  •  
    As we learned in TOK, science is full of uncertainties. In this article, the author suggests that even the publication of scientific papers is full of flaws. The general population often cites whichever scientific source supports its position, yet the probability that a published finding is false is surprisingly high. Sometimes it is not errors in experiments but the fabrication of data that leads to false scientific papers. There are also recognizable patterns behind the publication of false findings.
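
The "wrong at least 30 percent of the time" figure quoted above follows from simple arithmetic on false and true positives. The prior (10% of tested hypotheses actually true) and the power (80%) below are illustrative assumptions, not numbers taken from the article.

    # Worked example: how p = 0.05 can imply a ~1-in-3 false discovery rate.
    alpha = 0.05   # significance threshold (false-positive rate)
    power = 0.80   # chance a real effect is detected (assumed)
    prior = 0.10   # fraction of tested hypotheses that are true (assumed)

    false_pos = alpha * (1 - prior)   # 0.045 of all tests
    true_pos = power * prior          # 0.080 of all tests
    fdr = false_pos / (false_pos + true_pos)
    print(f"false discovery rate: {fdr:.0%}")   # 36%: wrong over a third of the time
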
Javier E

Double X Science: Real science vs. fake science: How can you tell them apart? - 0 views

  • Pseudosciences are usually pretty easily identified by their emphasis on confirmation over refutation, on physically impossible claims, and on terms charged with emotion or false "sciencey-ness."
  • If we could hand out cheat sheets for people of sound mind to use when considering a product, book, therapy, or remedy, the following would constitute the top-10 questions you should always ask yourself--and answer--before shelling out the benjamins for anything
ardenganse

A Late Burst of Climate Denial Extends the Era of Trump Disinformation - The New York T... - 0 views

  • Dr. Legates, a climate denialist installed last year by the Trump administration
    • ardenganse
       
      People tend to surround themselves with people who agree with them. This could relate somewhat to confirmation bias.
  • Peter Gleick, a climate scientist and member of the National Academy of Sciences who noticed the posts and drew attention to them on Twitter, called them “ridiculous” and a ham-handed effort to grant a veneer of government respectability to junk science before President-elect Joseph R. Biden Jr. assumes office Jan. 20.
  • “To climate science itself these pose very little danger because they are pseudoscience, because they are ridiculous, and because nobody serious in the scientific community will pay any attention to them.”
    • ardenganse
       
      Relates to the scientific method and shared knowledge. Any claim has to be widely accepted before it can be deemed accurate.
sanderk

Why people believe the Earth is flat and we should listen to anti-vaxxers | Elfy Scott ... - 0 views

  • I understand why scientifically minded people experience profound frustration at the nonsense, particularly when we’re forced to consider the public health implications of the anti-vaxxer movement which has been blamed as the root cause for recent outbreaks of measles in the US, a viral infection which can prove devastating for babies and young children. Misinformation can cause immense suffering and we should do our utmost to dispel the lies.
  • Too many people in scientific spheres seem to revel in dismissing flat-Earthers and anti-vaxxers as garden variety nut-jobs and losers. It may be cathartic – but it’s not productive.
  • It’s interesting that for a scientific community so perennially pleased with itself, we all seem to be making the same fundamental attribution error by ignoring the notion that belief in pseudoscience and conspiracy theories is propelled by external pressures of fear, confusion and disempowerment. Instead we seem too often satisfied with pinning the nonsense on some bizarrely flourishing individual idiocy.
  • When we feel so fundamentally disenfranchised, it’s comforting to concoct a fictional universe that systemically denies you the right cards. It gives you something to fight against and makes you self-deterministic. It provides an “us and them” narrative that allows you to conceive of yourself as a little David raging against a rather haughty, intellectual establishment Goliath. This is what worries me about journalists writing columns or tweets sneering at the supposed stupidity of the pseudoscientists and conspiracy theorists – it only serves to enforce this “us and them” worldview.
Javier E

The New York Times' trans coverage is under fire. The paper needs to listen | Arwa Mahd... - 0 views

  • I’ve got a feeling the poor alien might get the impression that every third person in the US is trans – rather than 0.5% of the population. They (I assume aliens are nonbinary) might get the impression that nobody is allowed to say the word “woman” any more and we are all being forced at gunpoint to say “uterus-havers”. They might get the impression that women’s sports have been completely taken over by trans women. They might believe that millions of children are being mutilated by doctors in the name of gender-affirming care because of the all-powerful trans lobby. They might come away thinking that JK Rowling is not a multi-multi-multi-millionaire with endless resources at her disposal but a marginalized victim who needs brave Times columnists to come to her defense.
  • “In the past eight months the Times has now published more than 15,000 words’ worth of front-page stories asking whether care and support for young trans people might be going too far or too fast”. Those, to reiterate, are newspaper front-page stories. As Popula notes, that number “doesn’t include the 11,000 or so words the New York Times Magazine devoted to a laboriously evenhanded story about disagreements over the standards of care for trans youth; or the 3,000 words of the front-page story … on whether trans women athletes are unfairly ruining the competition for other women; or the 1,200 words of the front-page story … on how trans interests are banning the word “woman” from abortion-rights discourse.”
  • This letter, addressed to the paper’s associate managing editor for standards, accused the Times of treating gender diversity “with an eerily familiar mix of pseudoscience and euphemistic, charged language, while publishing reporting on trans children that omits relevant information about its sources”. That relevant information being that some of those sources have affiliations with far-right groups. That “charged language” being phrases like “patient zero” to describe a transgender young person seeking gender-affirming care, “a phrase that vilifies transness as a disease to be feared”.
  • “It is not unusual for outside groups to critique our coverage or to rally supporters to seek to influence our journalism,” Kahn wrote in the memo. “In this case, however, members of our staff and contributors to The Times joined the effort … We do not welcome, and will not tolerate, participation by Times journalists in protests organized by advocacy groups or attacks on colleagues on social media and other public forums.”
  • Charlie Stadtlander, the Times’ director of external communication, put out a statement stating that the organization pursues “independent reporting on transgender issues that include profiling groundbreakers in the movement, challenges and prejudice faced by the community, and how society is grappling with debates about care”. While that was all very diplomatic, the executive editor, Joe Kahn, and opinion editor, Kathleen Kingsbury, sent around a rather more pointed newsroom memo condemning the letters on Thursday.
  • The second letter was signed by more than 100 LGBTQ+ and civil rights groups, including Glaad and the Human Rights Campaign. It expressed support for the contributor letter and accused the Times of platforming “fringe theories” and “dangerous inaccuracies”. It noted that while the Times has produced responsible coverage of trans people, “those articles are not getting front-page placement or sent to app users via push notification like the irresponsible pieces are”. And it observed that rightwing politicians have been using the Times’s coverage of trans issues to justify criminalizing gender-affirming care.
  • Here’s the thing: there is no clear-cut line between advocacy and journalism. All media organizations have a perspective about the world and filter their output (which will, of course, strive to be fairly reported) through that perspective. To pretend otherwise is dishonest. Like it or not, the Times is involved in advocacy. It just needs to step back for a moment and think about who it’s advocating for.
Javier E

Psychological nativism - Wikipedia - 0 views

  • In the field of psychology, nativism is the view that certain skills or abilities are "native" or hard-wired into the brain at birth. This is in contrast to the "blank slate" or tabula rasa view, which states that the brain has inborn capabilities for learning from the environment but does not contain content such as innate beliefs.
  • Some nativists believe that specific beliefs or preferences are "hard-wired". For example, one might argue that some moral intuitions are innate or that color preferences are innate. A less established argument is that nature supplies the human mind with specialized learning devices. This latter view differs from empiricism only to the extent that the algorithms that translate experience into information may be more complex and specialized in nativist theories than in empiricist theories. However, empiricists largely remain open to the nature of learning algorithms and are by no means restricted to the historical associationist mechanisms of behaviorism.
  • Nativism has a history in philosophy, particularly as a reaction to the straightforward empiricist views of John Locke and David Hume. Hume had given persuasive logical arguments that people cannot infer causality from perceptual input. The most one could hope to infer is that two events happen in succession or simultaneously. One response to this argument involves positing that concepts not supplied by experience, such as causality, must exist prior to any experience and hence must be innate.
  • The philosopher Immanuel Kant (1724–1804) argued in his Critique of Pure Reason that the human mind knows objects in innate, a priori ways. Kant claimed that humans, from birth, must experience all objects as being successive (time) and juxtaposed (space). His list of inborn categories describes predicates that the mind can attribute to any object in general. Arthur Schopenhauer (1788–1860) agreed with Kant, but reduced the number of innate categories to one—causality—which presupposes the others.
  • Modern nativism is most associated with the work of Jerry Fodor (1935–2017), Noam Chomsky (b. 1928), and Steven Pinker (b. 1954), who argue that humans from birth have certain cognitive modules (specialised genetically inherited psychological abilities) that allow them to learn and acquire certain skills, such as language.
  • For example, children demonstrate a facility for acquiring spoken language but require intensive training to learn to read and write. This poverty of the stimulus observation became a principal component of Chomsky's argument for a "language organ"—a genetically inherited neurological module that confers a somewhat universal understanding of syntax that all neurologically healthy humans are born with, which is fine-tuned by an individual's experience with their native language
  • In The Blank Slate (2002), Pinker similarly cites the linguistic capabilities of children, relative to the amount of direct instruction they receive, as evidence that humans have an inborn facility for speech acquisition (but not for literacy acquisition).
  • A number of other theorists[1][2][3] have disagreed with these claims. Instead, they have outlined alternative theories of how modularization might emerge over the course of development, as a result of a system gradually refining and fine-tuning its responses to environmental stimuli.[4]
  • Many empiricists are now also trying to apply modern learning models and techniques to the question of language acquisition, with marked success.[20] Similarity-based generalization marks another avenue of recent research, which suggests that children may be able to rapidly learn how to use new words by generalizing about the usage of similar words that they already know (see also the distributional hypothesis).[14][21][22][23] [a toy sketch follows this list]
  • The term universal grammar (or UG) is used for the purported innate biological properties of the human brain, whatever exactly they turn out to be, that are responsible for children's successful acquisition of a native language during the first few years of life. The person most strongly associated with the hypothesising of UG is Noam Chomsky, although the idea of Universal Grammar has clear historical antecedents at least as far back as the 1300s, in the form of the Speculative Grammar of Thomas of Erfurt.
  • This evidence is all the more impressive when one considers that most children do not receive reliable corrections for grammatical errors.[9] Indeed, even children who for medical reasons cannot produce speech, and therefore have no possibility of producing an error in the first place, have been found to master both the lexicon and the grammar of their community's language perfectly.[10] The fact that children succeed at language acquisition even when their linguistic input is severely impoverished, as it is when no corrective feedback is available, is related to the argument from the poverty of the stimulus, and is another claim for a central role of UG in child language acquisition.
  • Researchers at Blue Brain discovered a network of about fifty neurons which they believed were building blocks of more complex knowledge but contained basic innate knowledge that could be combined in different more complex ways to give way to acquired knowledge, like memory.[11]
  • If the neuronal circuits had been formed by experience, the tests would bring about very different characteristics for each rat. However, the rats all displayed similar characteristics, which suggests that their neuronal circuits must have been established prior to their experiences. The Blue Brain Project research suggests that some of the "building blocks" of knowledge are genetic and present at birth.[11]
  • modern nativist theory makes little in the way of specific falsifiable and testable predictions, and has been compared by some empiricists to a pseudoscience or nefarious brand of "psychological creationism". As influential psychologist Henry L. Roediger III remarked, "Chomsky was and is a rationalist; he had no uses for experimental analyses or data of any sort that pertained to language, and even experimental psycholinguistics was and is of little interest to him".[13]
  • Chomsky's poverty of the stimulus argument is controversial within linguistics.[14][15][16][17][18][19]
  • Neither the five-year-old nor the adults in the community can easily articulate the principles of the grammar they are following. Experimental evidence shows that infants come equipped with presuppositions that allow them to acquire the rules of their language.[6]
  • Paul Griffiths, in "What is Innateness?", argues that innateness is too confusing a concept to be fruitfully employed as it confuses "empirically dissociated" concepts. In a previous paper, Griffiths argued that innateness specifically confuses these three distinct biological concepts: developmental fixity, species nature, and intended outcome. Developmental fixity refers to how insensitive a trait is to environmental input, species nature reflects what it is to be an organism of a certain kind, and the intended outcome is how an organism is meant to develop.[24]
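
The similarity-based generalization mentioned in the bullet on modern learning models can be sketched with a toy distributional model: represent each word by the words that appear next to it, and let a novel word inherit behavior from its most similar known neighbors. The corpus, the context window, and the cosine scoring are all invented for illustration and make no claim about any specific model in the literature.

    # Hypothetical sketch: a novel word ("wug") is treated like known words
    # that share its contexts. Corpus and scoring are invented.
    from collections import Counter
    from math import sqrt

    corpus = ("the dog runs . the cat runs . the dog sleeps . "
              "the cat sleeps . the wug runs .").split()

    def context_vector(word, tokens):
        """Counts of words immediately to the left/right of `word`."""
        ctx = Counter()
        for i, tok in enumerate(tokens):
            if tok == word:
                if i > 0:
                    ctx[("L", tokens[i - 1])] += 1
                if i + 1 < len(tokens):
                    ctx[("R", tokens[i + 1])] += 1
        return ctx

    def cosine(a, b):
        if not a or not b:
            return 0.0
        dot = sum(a[k] * b[k] for k in a)
        norm_a = sqrt(sum(x * x for x in a.values()))
        norm_b = sqrt(sum(x * x for x in b.values()))
        return dot / (norm_a * norm_b)

    wug = context_vector("wug", corpus)
    for known in ("dog", "cat", "runs"):
        print(known, round(cosine(wug, context_vector(known, corpus)), 2))
    # "wug" patterns with "dog"/"cat", not "runs": distributional similarity
    # alone licenses generalizing noun-like usage to the new word.
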