TOK Friends / Group items tagged: falsification

katedriscoll

What is Falsification? | TOKTalk.net - 0 views

  • This is how science works, and how it should work. If the theory fails to make the correct predictions, then you have to replace the theory. This is what you call scientific progress. You replace old theories with better ones.
  • And now to a question that some of you are burning to know the answer to: “Is the falsification principle falsifiable?” I once read, though I don’t know if it’s true, that Karl Popper used to kick his students out of the classroom for asking such a question. The falsification principle is just what its name says: it is a principle and not a scientific theory. No, the falsification principle itself is not falsifiable; it is not scientific. It belongs to metascience, or philosophy. Don’t forget: the scientific method – hypothesis, experimentation, observation, conclusion – is also not falsifiable, yet it is used by scientists on a daily basis.
Grace Carey

News at Tipitaka Network - 0 views

  •  
    Finding some interesting and very TOK-relevant articles while I'm working on my religious investigation about the science behind Buddhist beliefs. I found this one particularly intriguing as it discusses why the theory of reincarnation is scientifically sound and why scientists are often narrow-minded and overly trusted. "I was once told by a Buddhist G.P. that, on his first day at a medical school in Sydney, the famous Professor, head of the Medical School, began his welcoming address by stating "Half of what we are going to teach you in the next few years is wrong. Our problem is that we do not know which half it is!" Those were the words of a real scientist." "Logic is only as reliable as the assumptions on which it is based." "Objective experience is that which is free from all bias. In Buddhism, the three types of bias are desire, ill-will and skeptical doubt. Desire makes one see only what one wants to see, it bends the truth to fit one's preferences." "Reality, according to pure science, does not consist of well ordered matter with precise masses, energies and positions in space, all just waiting to be measured. Reality is the broadest of smudges of all possibilities, only some being more probable than others." "At a recent seminar on Science and Religion, at which I was a speaker, a Catholic in the audience bravely announced that whenever she looks through a telescope at the stars, she feels uncomfortable because her religion is threatened. I commented that whenever a scientist looks the other way round through a telescope, to observe the one who is watching, then they feel uncomfortable because their science is threatened by what is doing the seeing!"
Javier E

The decline effect and the scientific method : The New Yorker - 3 views

  • The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable.
  • This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.
  • ...39 more annotations...
  • If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe?
  • Schooler demonstrated that subjects shown a face and asked to describe it were much less likely to recognize the face when shown it later than those who had simply looked at it. Schooler called the phenomenon “verbal overshadowing.”
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time.
  • yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!”
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance.
  • Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for
  • Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments.
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • One of [John Ioannidis’s] most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • [Richard Palmer, a biologist at the University of Alberta,] suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.”
  • Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process.
  • “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals.
  • the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher.
  • For Simmons, the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong.
  • “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies
  • That’s why Schooler argues that scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,”
  • The current “obsession” with replicability distracts from the real problem, which is faulty design.
  • “Every researcher should have to spell out, in advance, how many subjects they’re going to use, and what exactly they’re testing, and what constitutes a sufficient level of proof. We have the tools to be much more transparent about our experiments.”
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,”
  • scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • The disturbing implication of the Crabbe study [in which identical mouse experiments run in three different laboratories produced strikingly different results] is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand.
  • The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected
  • This suggests that the decline effect is actually a decline of illusion. While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that.
  • Many scientific theories continue to be considered true even after failing numerous experimental tests.
  • Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.)
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.)
  • The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe. ♦
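  • A minimal illustrative sketch (not from the article) of the mechanism the bullets above describe: if only statistically significant results from small, noisy studies get published, the early literature overstates the true effect, and later, larger replications appear to "decline" toward it. The effect size, sample sizes, and p < 0.05 cutoff in this toy Python simulation are assumptions chosen purely for demonstration.

import math
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.1   # assumed small real effect, in standard-deviation units
ALPHA = 0.05        # the conventional significance threshold

def run_study(n, effect=TRUE_EFFECT):
    """Simulate one two-group study; return (observed effect, approximate p-value)."""
    treatment = [random.gauss(effect, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = statistics.mean(treatment) - statistics.mean(control)
    se = math.sqrt(statistics.variance(treatment) / n + statistics.variance(control) / n)
    z = diff / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided normal approximation
    return diff, p

# Early literature: many small studies, but only positive, "significant" ones get published.
early = [run_study(n=20) for _ in range(500)]
published_early = [d for d, p in early if p < ALPHA and d > 0]

# Later replications: larger samples, published regardless of outcome.
late = [run_study(n=200)[0] for _ in range(500)]

print(f"true effect:                 {TRUE_EFFECT:.2f}")
print(f"mean published early effect: {statistics.mean(published_early):.2f}")
print(f"mean replication effect:     {statistics.mean(late):.2f}")
# Typical output: the filtered early estimates come out several times larger than the
# true effect, while the unfiltered replications sit close to it: a "decline" that is
# an artifact of what got published, not of the effect itself wearing off.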
oliviaodon

How One Psychologist Is Tackling Human Biases in Science - 0 views

  • It’s likely that some researchers are consciously cherry-picking data to get their work published. And some of the problems surely lie with journal publication policies. But the problems of false findings often begin with researchers unwittingly fooling themselves: they fall prey to cognitive biases, common modes of thinking that lure us toward wrong but convenient or attractive conclusions.
  • Peer review seems to be a more fallible instrument—especially in areas such as medicine and psychology—than is often appreciated, as the emerging “crisis of replicability” attests.
  • Psychologists have shown that “most of our reasoning is in fact rationalization,” he says. In other words, we have already made the decision about what to do or to think, and our “explanation” of our reasoning is really a justification for doing what we wanted to do—or to believe—anyway. Science is of course meant to be more objective and skeptical than everyday thought—but how much is it, really?
  • ...10 more annotations...
  • A common response to this situation is to argue that, even if individual scientists might fool themselves, others have no hesitation in critiquing their ideas or their results, and so it all comes out in the wash: Science as a communal activity is self-correcting. Sometimes this is true—but it doesn’t necessarily happen as quickly or smoothly as we might like to believe.
  • Psychologist Brian Nosek of the University of Virginia says that the most common and problematic bias in science is “motivated reasoning”: We interpret observations to fit a particular idea.
  • The idea, says Nosek, is that researchers “write down in advance what their study is for and what they think will happen.” Then when they do their experiments, they agree to be bound to analyzing the results strictly within the confines of that original plan
  • He is convinced that the process and progress of science would be smoothed by bringing these biases to light—which means making research more transparent in its methods, assumptions, and interpretations
  • Surprisingly, Nosek thinks that one of the most effective solutions to cognitive bias in science could come from the discipline that has weathered some of the heaviest criticism recently for its error-prone and self-deluding ways: pharmacology.
  • Sometimes it seems surprising that science functions at all.
  • Whereas the falsification model of the scientific method championed by philosopher Karl Popper posits that the scientist looks for ways to test and falsify her theories—to ask “How am I wrong?”—Nosek says that scientists usually ask instead “How am I right?” (or equally, to ask “How are you wrong?”).
  • Statistics may seem to offer respite from bias through strength in numbers, but they are just as fraught.
  • Given that science has uncovered a dizzying variety of cognitive biases, the relative neglect of their consequences within science itself is peculiar. “I was aware of biases in humans at large,” says Hartgerink, “but when I first ‘learned’ that they also apply to scientists, I was somewhat amazed, even though it is so obvious.”
  • Nosek thinks that peer review might sometimes actively hinder clear and swift testing of scientific claims.
Josh Schwartz

Theory of Relativity Still Lives On? - 3 views

  •  
    Double falsification!
  •  
    I really like the tone that the author takes at the end of this article; he's able to come to a pretty succinct yet critical conclusion on the subject. At the same time, I worry that this episode will cause belief in the Theory of Relativity to become even more dogmatic for many. Nothing makes a theory stronger than testing it against opposition.
charlottedonoho

Who's to blame when fake science gets published? - 1 views

  • The now-discredited study got headlines because it offered hope. It seemed to prove that our sense of empathy, our basic humanity, could overcome prejudice and bridge seemingly irreconcilable differences. It was heartwarming, and it was utter bunkum. The good news is that this particular case of scientific fraud isn't going to do much damage to anyone but the people who concocted and published the study. The bad news is that the alleged deception is a symptom of a weakness at the heart of the scientific establishment.
  • When it was published in Science magazine last December, the research attracted academic as well as media attention; it seemed to provide solid evidence that increasing contact between minority and majority groups could reduce prejudice.
  • But in May, other researchers tried to reproduce the study using the same methods, and failed. Upon closer examination, they uncovered a number of devastating "irregularities" - statistical quirks and troubling patterns - that strongly implied that the whole LaCour/Green study was based upon made-up data.
  • ...6 more annotations...
  • The data hit the fan, at which point Green distanced himself from the survey and called for the Science article to be retracted. The professor even told Retraction Watch, the website that broke the story, that all he'd really done was help LaCour write up the findings.
  • Science magazine didn't shoulder any blame, either. In a statement, editor in chief Marcia McNutt said the magazine was essentially helpless against the depredations of a clever hoaxer: "No peer review process is perfect, and in fact it is very difficult for peer reviewers to detect artful fraud."
  • This is, unfortunately, accurate. In a scientific collaboration, a smart grad student can pull the wool over his adviser's eyes - or vice versa. And if close collaborators aren't going to catch the problem, it's no surprise that outside reviewers dragooned into critiquing the research for a journal won't catch it either. A modern science article rests on a foundation of trust.
  • If the process can't catch such obvious fraud - a hoax the perpetrators probably thought wouldn't work - it's no wonder that so many scientists feel emboldened to sneak a plagiarised passage or two past the gatekeepers.
  • Major peer-review journals tend to accept big, surprising, headline-grabbing results when those are precisely the ones that are most likely to be wrong.
  • Despite the artful passing of the buck by LaCour's senior colleague and the editors of Science magazine, affairs like this are seldom truly the product of a single dishonest grad student. Scientific publishers and veteran scientists - even when they don't take an active part in deception - must recognise that they are ultimately responsible for the culture producing the steady drip-drip-drip of falsification, exaggeration and outright fabrication eroding the discipline they serve.
sandrine_h

Darwin's Influence on Modern Thought - Scientific American - 0 views

  • Great minds shape the thinking of successive historical periods. Luther and Calvin inspired the Reformation; Locke, Leibniz, Voltaire and Rousseau, the Enlightenment. Modern thought is most dependent on the influence of Charles Darwin
  • one needs schooling in the physicist’s style of thought and mathematical techniques to appreciate Einstein’s contributions in their fullness. Indeed, this limitation is true for all the extraordinary theories of modern physics, which have had little impact on the way the average person apprehends the world.
  • The situation differs dramatically with regard to concepts in biology.
  • ...10 more annotations...
  • Many biological ideas proposed during the past 150 years stood in stark conflict with what everybody assumed to be true. The acceptance of these ideas required an ideological revolution. And no biologist has been responsible for more—and for more drastic—modifications of the average person’s worldview than Charles Darwin
  • Evolutionary biology, in contrast with physics and chemistry, is a historical science—the evolutionist attempts to explain events and processes that have already taken place. Laws and experiments are inappropriate techniques for the explication of such events and processes. Instead one constructs a historical narrative, consisting of a tentative reconstruction of the particular scenario that led to the events one is trying to explain.
  • The discovery of natural selection, by Darwin and Alfred Russel Wallace, must itself be counted as an extraordinary philosophical advance
  • The concept of natural selection had remarkable power for explaining directional and adaptive changes. Its nature is simplicity itself. It is not a force like the forces described in the laws of physics; its mechanism is simply the elimination of inferior individuals
  • A diverse population is a necessity for the proper working of natural selection
  • Because of the importance of variation, natural selection should be considered a two-step process: the production of abundant variation is followed by the elimination of inferior individuals
  • By adopting natural selection, Darwin settled the several-thousand-year-old argument among philosophers over chance or necessity. Change on the earth is the result of both, the first step being dominated by randomness, the second by necessity
  • Another aspect of the new philosophy of biology concerns the role of laws. Laws give way to concepts in Darwinism. In the physical sciences, as a rule, theories are based on laws; for example, the laws of motion led to the theory of gravitation. In evolutionary biology, however, theories are largely based on concepts such as competition, female choice, selection, succession and dominance. These biological concepts, and the theories based on them, cannot be reduced to the laws and theories of the physical sciences
  • Despite the initial resistance by physicists and philosophers, the role of contingency and chance in natural processes is now almost universally acknowledged. Many biologists and philosophers deny the existence of universal laws in biology and suggest that all regularities be stated in probabilistic terms, as nearly all so-called biological laws have exceptions. Philosopher of science Karl Popper’s famous test of falsification therefore cannot be applied in these cases.
  • To borrow Darwin’s phrase, there is grandeur in this view of life. New modes of thinking have been, and are being, evolved. Almost every component in modern man’s belief system is somehow affected by Darwinian principles
katedriscoll

Natural Sciences - TOK 2022: THEORY OF KNOWLEDGE WEBSITE FOR THE IBDP - 1 views

  • Each discipline within the natural sciences aims to produce knowledge about different aspects of the natural world. In this sense, each discipline within the natural sciences will tweak its methodology somewhat to fit its particular purpose and scope. Nevertheless, all disciplines within the natural sciences will broadly have a shared underlying scope, methodology and purpose.
  • You arguably trusted your teachers and believed that what they told you in science class was true. But under which circumstances should we accept second-hand scientific knowledge? The motto of Britain's very first scientific society (The Royal Society) is "Nullius in Verba", which means "Take nobody's word for it". One of the key features of the natural sciences is the necessity of being able to prove what you claim. Good science does not only require proof. It also actively invites peer-review and even falsification. For example, if your teacher claims that starch will turn blue when mixed with iodine, you will want to test this yourself. Within the natural sciences, you should be able to repeat experiments to see if a hypothesis is correct. But what should you conclude when an experiment 'does not work'? If this happens in your science lesson, you may have made a mistake.
Javier E

George Orwell: The Prevention of Literature - The Atlantic - 0 views

  • the much more tenable and dangerous proposition that freedom is undesirable and that intellectual honesty is a form of antisocial selfishness
  • the controversy over freedom of speech and of the press is at bottom a controversy over the desirability, or otherwise, of telling lies.
  • What is really at issue is the right to report contemporary events truthfully, or as truthfully as is consistent with the ignorance, bias, and self-deception from which every observer necessarily suffers
  • ...10 more annotations...
  • it is necessary to strip away the irrelevancies in which this controversy is usually wrapped up.
  • The enemies of intellectual liberty always try to present their case as a plea for discipline versus individualism.
  • The issue truth-versus-untruth is as far as possible kept in the background.
  • the writer who refuses to sell his opinions is always branded as a mere egoist. He is accused, that is, either of wanting to shut himself up in an ivory tower, or of making an exhibitionist display of his own personality, or of resisting the inevitable current of history in an attempt to cling to unjustified privileges.
  • Each of them tacitly claims that “the truth” has already been revealed, and that the heretic, if he is not simply a fool, is secretly aware of “the truth” and merely resists it out of selfish motives.
  • Freedom of the intellect means the freedom to report what one has seen, heard, and felt, and not to be obliged to fabricate imaginary facts and feelings.
  • known facts are suppressed and distorted to such an extent as to make it doubtful whether a true history of our times can ever be written.
  • A totalitarian state is in effect a theocracy, and its ruling caste, in order to keep its position, has to be thought of as infallible. But since, in practice, no one is infallible, it is frequently necessary to rearrange past events in order to show that this or that mistake was not made, or that this or that imaginary triumph actually happened
  • Then, again, every major change in policy demands a corresponding change of doctrine and a revaluation of prominent historical figures. This kind of thing happens everywhere, but clearly it is likelier to lead to outright falsification in societies where only one opinion is permissible at any given moment.
  • The friends of totalitarianism in England usually tend to argue that since absolute truth is not attainable, a big lie is no worse than a little lie. It is pointed out that all historical records are biased and inaccurate, or, on the other hand, that modern physics has proved that what seems to us the real world is an illusion, so that to believe in the evidence of one’s senses is simply vulgar philistinism.