
Home/ TOK Friends/ Group items tagged cognitive-bias


sissij

There's a New Cognitive Bias in Town: SPOT | Big Think - 0 views

  • You may be thinking, “just what we needed (sigh),” but anything that helps us see through the ways we fool ourselves is worth knowing about. It’s called the “Spontaneous Preference For Own Theories,” or “SPOT,” effect.
  • And now Aiden Gregg and colleagues at the University of Southampton have announced the SPOT effect, which is sort of a combination of all three. It posits that we’re more likely to believe a theory because it’s ours, and will even attempt to hold onto our faith in it in spite of mounting evidence.
  • The subjects were presented a series of seven facts about Wugworld, starting with some that supported the theory that Niffites were the predators — “Niffites are at least twice as large as Luupites.” — but changing over time to evidence that it was the Luupites who were actually the aggressors: “Luupites have been observed eating the dead bodies of Niffites.” After each new fact, the subjects were asked again if they still thought the theory was true.
  •  
    This is a very interesting cognitive bias. It shows that people are very self-centered and easily offended. I think it can be explained by the logic of evolution, because people need to have a strong feeling for what belongs to them, whether it is an object or some kind of idea. However, when something belongs to others, people tend to care less. That's why it is always really hard to convince others to listen to your idea. --Sissi (3/6/2017)
katedriscoll

How to Identify Cognitive Bias: 12 Examples of Cognitive Bias - 2021 - MasterClass - 0 views

  • Cognitive biases are inherent in the way we think, and many of them are unconscious. Identifying the biases you experience and purport in your everyday interactions is the first step to understanding how our mental processes work, which can help us make better, more informed decisions
katedriscoll

What are Cognitive Biases? | Interaction Design Foundation (IxDF) - 0 views

  • Cognitive bias is an umbrella term that refers to the systematic ways in which the context and framing of information influence individuals’ judgment and decision-making. There are many kinds of cognitive biases that influence individuals differently, but their common characteristic is that—in step with human individuality—they lead to judgment and decision-making that deviates from rational objectivity.
  • In some cases, cognitive biases make our thinking and decision-making faster and more efficient. The reason is that we do not stop to consider all available information, as our thoughts proceed down some channels instead of others. In other cases, however, cognitive biases can lead to errors for exactly the same reason. An example is confirmation bias, where we tend to favor information that reinforces or confirms our pre-existing beliefs. For instance, if we believe that planes are dangerous, a handful of stories about plane crashes tend to be more memorable than millions of stories about safe, successful flights. Thus, the prospect of air travel equates to an avoidable risk of doom for a person inclined to think in this way, regardless of how much time has passed without news of an air catastrophe.
Javier E

Our Dangerous Inability to Agree on What is TRUE | Risk: Reason and Reality | Big Think - 2 views

  • Given that human cognition is never the product of pure dispassionate reason, but a subjective interpretation of the facts based on our feelings and biases and instincts, when can we ever say that we know who is right and who is wrong, about anything? When can we declare a fact so established that it’s fair to say, without being called arrogant, that those who deny this truth don’t just disagree…that they’re just plain wrong.
  • This isn’t about matters of faith, or questions of ultimately unknowable things which by definition can not be established by fact. This is a question about what is knowable, and provable by careful objective scientific inquiry, a process which includes challenging skepticism rigorously applied precisely to establish what, beyond any reasonable doubt, is in fact true.
  • With enough careful investigation and scrupulously challenged evidence, we can establish knowable truths that are not just the product of our subjective motivated reasoning.
  • ...8 more annotations...
  • This matters for social animals like us, whose safety and very survival ultimately depend on our ability to coexist. Views that have more to do with competing tribal biases than objective interpretations of the evidence create destructive and violent conflict. Denial of scientifically established ‘truth’ causes all sorts of serious direct harms. Consider a few examples:
    • The widespread faith-based rejection of evolution feeds intense polarization.
    • Continued fear of vaccines is allowing nearly eradicated diseases to return.
    • Those who deny the evidence of the safety of genetically modified food are also denying the immense potential benefits of that technology to millions.
    • Denying the powerful evidence for climate change puts us all in serious jeopardy should that evidence prove to be true.
  • To address these harms, we need to understand why we often have trouble agreeing on what is true (what some have labeled science denialism). Social science has taught us that human cognition is innately, and inescapably, a process of interpreting the hard data about our world – its sights and sounds and smells and facts and ideas – through subjective affective filters that help us turn those facts into the judgments and choices and behaviors that help us survive. The brain’s imperative, after all, is not to reason. Its job is survival, and subjective cognitive biases and instincts have developed to help us make sense of information in the pursuit of safety, not so that we might come to know ‘THE universal absolute truth.’
  • This subjective cognition is built-in, subconscious, beyond free will, and unavoidably leads to different interpretations of the same facts.
  • But here is a truth with which I hope we can all agree. Our subjective system of cognition can be dangerous.
  • It can produce perceptions that conflict with the evidence, what I call The Perception Gap, which can in turn produce profound harm
  • We need to recognize the greater threat that our subjective system of cognition can pose, and in the name of our own safety and the welfare of the society on which we depend, do our very best to rise above it or, when we can’t, account for this very real danger in the policies we adopt.
  • "Everyone engages in motivated reasoning, everyone screens out unwelcome evidence, no one is a fully rational actor. Sure. But when it comes to something with such enormous consequences to human welfare
  • I think it's fair to say we have an obligation to confront our own ideological priors. We have an obligation to challenge ourselves, to push ourselves, to be suspicious of conclusions that are too convenient, to be sure that we're getting it right.
huffem4

How to Use Critical Thinking to Separate Fact From Fiction Online | by Simon Spichak | ... - 2 views

  • Critical thinking helps us frame everyday problems, teaches us to ask the correct questions, and points us towards intelligent solutions.
  • Critical thinking is a continuing practice that involves an open mind and methods for synthesizing and evaluating the quality of knowledge and evidence, as well as an understanding of human errors.
  • Step 1. What We Believe Depends on How We Feel
  • ...33 more annotations...
  • One of the first things I ask myself when I read a headline or find a claim about a product is if the phrase is emotionally neutral. Some headlines generate outrage or fear, indicating that there is a clear bias. When we read something that exploits our emotions, we must be careful.
  • misinformation tends to play on our emotions a lot better than factual reporting or news.
  • When I’m trying to figure out whether a claim is factual, there are a few questions I always ask myself: Does the headline, article, or information evoke fear, anger, or other strong negative emotions? Where did you hear about the information? Does it cite any direct evidence? What is the expert consensus on this information?
  • Step 2. Evidence Synthesis and Evaluation. Sometimes I’m still feeling uncertain if there’s any truth to a claim. Even after taking into account the emotions it evokes, I need to find the evidence for a claim and evaluate its quality.
  • Often, the information that I want to check is either political or scientific. There are different questions I ask myself, depending on the nature of these claims.
  • Political claims
  • Looking at multiple different outlets, each with its own unique biases, helps us get a picture of the issue.
  • I use multiple websites specializing in fact-checking. They provide primary sources of evidence for different types of claims. Here is a list of websites where I do my fact-checking:
  • Snopes, Politifact, FactCheck, and Media Bias/Fact Check (a bias assessor for fact-checking websites). Simply type in some keywords from the claim to find out if it’s verified with primary sources, misleading, false, or unproven.
  • Science claims
  • Often we tout science as the process by which we uncover absolute truths about the universe. Once many scientists agree on something, it gets disseminated in the news. Confusion arises once this science changes or evolves, as happened throughout the coronavirus pandemic. In addition to fear and misinformation, we have to address a fundamental misunderstanding of the way science works when practicing critical thinking.
  • It is confusing to hear about certain drugs found to cure the coronavirus one moment, followed by many other scientists and researchers saying that they don’t. How do we collect and assess these scientific claims when there are discrepancies?
  • A big part of these scientific findings is difficult to access for the public
  • Sometimes the distinction between scientific coverage and scientific articles isn’t clear. When this difference is clear, we might still find findings in different academic journals that disagree with each other. Sometimes, research that isn’t peer-reviewed receives plenty of coverage in the media
  • Correlation and causation: Sometimes a claim might present two factors that appear correlated. Consider recent misinformation about 5G Towers and the spread of coronavirus. While there might appear to be associations, it doesn’t necessarily mean that there is a causative relationship
  • To practice critical thinking with these kinds of claims, we must ask the following questions: Does this claim emerge from a peer-reviewed scientific article? Has this paper been retracted? Does this article appear in a reputable journal? What is the expert consensus on this article?
  • The next examples I want to bring up refer to retracted articles from peer-reviewed journals. Since science is a self-correcting process, rather than a decree of absolutes, mistakes and fraud are corrected.
  • Briefly, I will show you exactly how to tell if the resource you are reading is an actual, peer-reviewed scientific article.
  • How does science go from experiments to the news?
  • researchers outline exactly how they conducted their experiments so other researchers can replicate them, build upon them, or provide quality assurance for them. This scientific report does not go straight to the nearest science journalist. Websites and news outlets like Scientific American or The Atlantic do not publish scientific articles.
  • Here is a quick checklist that will help you figure out if you’re viewing a scientific paper.
  • Once it’s written up, researchers send this manuscript to a journal. Other experts in the field then provide comments, feedback, and critiques. These peer reviewers ask researchers for clarification or even more experiments to strengthen their results. Peer review often takes months or sometimes years.
  • Some peer-reviewed scientific journals are Science and Nature; other scientific articles are searchable through the PubMed database. If you’re curious about a topic, search for scientific papers.
  • Peer-review is crucial! If you’re assessing the quality of evidence for claims, peer-reviewed research is a strong indicator
  • Finally, there are platforms for scientists to review research even after publication in a peer-reviewed journal. Although most scientists conduct experiments and interpret their data objectively, they may still make errors. Many scientists use Twitter and PubPeer to perform a post-publication review
  • Step 3. Are You Practicing Objectivity?
  • To finish off, I want to discuss common cognitive errors that we tend to make. Finally, there are some framing questions to ask at the end of our research to help us with assessing any information that we find.
  • Dunning-Kruger effect: Why do we rely on experts? In 1999, David Dunning and Justin Kruger published “Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments.” They found that the less a person understands about a topic, the more confident of their abilities or knowledge they will be
  • How does this relate to critical thinking? If you’re reading a claim sourced or written by somebody who lacks expertise in a field, they are underestimating its complexity. Whenever possible, look for an authoritative source when synthesizing and evaluating evidence for a claim.
  • Survivorship bias: Ever heard someone argue that we don’t need vaccines or seatbelts? After all, they grew up without either of them and are still alive and healthy! These arguments are appealing at first, but they don’t account for any cases of failure. They attribute a misplaced sense of optimism and safety by ignoring the deaths that resulted from a lack of vaccinations and seatbelts.
  • When you’re still unsure, follow the consensus of the experts within the field. Scientists pointed out flaws within this pre-print article leading to its retraction. The pre-print was removed from the server because it did not hold up to proper scientific standards or scrutiny.
  • Now with all the evidence we’ve gathered, we ask ourselves some final questions. There are plenty more questions you will come up with yourself, case-by-case: Who is making the original claim? Who supports these claims? What are their qualifications? What is the evidence used for these claims? Where is this evidence published? How was the evidence gathered? Why is it important?
  • “even if some data is supporting a claim, does it make sense?” Some claims are deceptively true but fall apart when accounting for this bias.
Javier E

Daniel Kahneman on 'Emergent Weirdness' in Artificial Intelligences - Alexis Madrigal - ... - 0 views

  • Human brains take shortcuts in making decisions. Finding where those shortcuts lead us to dumb places is what his life work has been all about. Artificial intelligences, say, Google, also have to take shortcuts, but they are *not* the same ones that our brains use. So, when an AI ends up in a weird place by taking a shortcut, that bias strikes us as uncannily weird. Get ready, too, because AI bias is going to start replacing human cognitive bias more and more regularly.
Javier E

The Backfire Effect « You Are Not So Smart - 0 views

  • corrections tended to increase the strength of the participants’ misconceptions if those corrections contradicted their ideologies. People on opposing sides of the political spectrum read the same articles and then the same corrections, and when new evidence was interpreted as threatening to their beliefs, they doubled down. The corrections backfired.
  • Once something is added to your collection of beliefs, you protect it from harm. You do it instinctively and unconsciously when confronted with attitude-inconsistent information. Just as confirmation bias shields you when you actively seek information, the backfire effect defends you when the information seeks you, when it blindsides you. Coming or going, you stick to your beliefs instead of questioning them. When someone tries to correct you, tries to dilute your misconceptions, it backfires and strengthens them instead. Over time, the backfire effect helps make you less skeptical of those things which allow you to continue seeing your beliefs and attitudes as true and proper.
  • Psychologists call stories like these narrative scripts, stories that tell you what you want to hear, stories which confirm your beliefs and give you permission to continue feeling as you already do. If believing in welfare queens protects your ideology, you accept it and move on.
  • ...8 more annotations...
  • Contradictory evidence strengthens the position of the believer. It is seen as part of the conspiracy, and missing evidence is dismissed as part of the coverup.
  • Most online battles follow a similar pattern, each side launching attacks and pulling evidence from deep inside the web to back up their positions until, out of frustration, one party resorts to an all-out ad hominem nuclear strike
  • you can never win an argument online. When you start to pull out facts and figures, hyperlinks and quotes, you are actually making the opponent feel as though they are even more sure of their position than before you started the debate. As they match your fervor, the same thing happens in your skull. The backfire effect pushes both of you deeper into your original beliefs.
  • you spend much more time considering information you disagree with than you do information you accept. Information which lines up with what you already believe passes through the mind like a vapor, but when you come across something which threatens your beliefs, something which conflicts with your preconceived notions of how the world works, you seize up and take notice. Some psychologists speculate there is an evolutionary explanation. Your ancestors paid more attention and spent more time thinking about negative stimuli than positive because bad things required a response
  • when your beliefs are challenged, you pore over the data, picking it apart, searching for weakness. The cognitive dissonance locks up the gears of your mind until you deal with it. In the process you form more neural connections, build new memories and put out effort – once you finally move on, your original convictions are stronger than ever.
  • The backfire effect is constantly shaping your beliefs and memory, keeping you consistently leaning one way or the other through a process psychologists call biased assimilation.
  • They then separated subjects into two groups; one group said they believed homosexuality was a mental illness and one did not. Each group then read the fake studies full of pretend facts and figures suggesting their worldview was wrong. On either side of the issue, after reading studies which did not support their beliefs, most people didn’t report an epiphany, a realization they’ve been wrong all these years. Instead, they said the issue was something science couldn’t understand. When asked about other topics later on, like spanking or astrology, these same people said they no longer trusted research to determine the truth. Rather than shed their belief and face facts, they rejected science altogether.
  • As social media and advertising progresses, confirmation bias and the backfire effect will become more and more difficult to overcome. You will have more opportunities to pick and choose the kind of information which gets into your head along with the kinds of outlets you trust to give you that information. In addition, advertisers will continue to adapt, not only generating ads based on what they know about you, but creating advertising strategies on the fly based on what has and has not worked on you so far. The media of the future may be delivered based not only on your preferences, but on how you vote, where you grew up, your mood, the time of day or year – every element of you which can be quantified. In a world where everything comes to you on demand, your beliefs may never be challenged.
Javier E

Opinion | Two visions of 'normal' collided in our abnormal pandemic year - The Washingt... - 0 views

  • The date was Sept. 17, 2001. The rubble was still smoking. As silly as this sounds, I was hoping it would make me cry.
  • That didn’t happen. The truth is, it still looked like something on television, a surreal shot from a disaster movie. I was stunned but unmoved.
  • Later, trying to understand the difference between those two moments, I told people, “The rubble still didn’t feel real.”
  • ...11 more annotations...
  • now, after a year of pandemic, I realize that wasn’t the problem. The rubble was real, all right. It just wasn’t normal.
  • it always, somehow, came back to that essential human craving for things to be normal, and our inability to believe that they are not, even when presented with compelling evidence.
  • This phenomenon is well-known to cognitive scientists, who have dubbed it “normalcy bias.”
  • the greater risk is more often the opposite: People can’t quite believe. They ignore the fire alarm, defy the order to evacuate ahead of the hurricane, or pause to grab their luggage when exiting the crashed plane. Too often, they die.
  • Calling the quest for normalcy a bias makes it sound bad, but most of the time this tendency is a good thing. The world is full of aberrations, most of them meaningless. If we aimed for maximal reaction to every anomaly we encountered, we’d break down from sheer nervous exhaustion.
  • But when things go disastrously wrong, our optimal response is at war with the part of our brain that insists things are fine. We try to reoccupy the old normal even if it’s become radioactive and salted with mines. We still resist the new normal — even when it’s staring us in the face.
  • Nine months into our current disaster, I now see that our bitter divides over pandemic response were most fundamentally a contest between two ideas of what it meant to get “back to normal.”
  • One group wanted to feel as safe as they had before a virus invaded our shores; the other wanted to feel as unfettered
  • The disputes that followed weren’t just a fight to determine whose idea of normal would prevail. They were a battle against an unthinkable reality, which was that neither kind of normalcy was fully possible anymore.
  • I suspect we all might have been less willing to make war on our opponents if only we’d believed that we were fighting people not very different from how we were — exhausted by the whole thing and frantic to feel like themselves again
  • Some catastrophes are simply too big to be understood except in the smallest way, through their most ordinary human details
Javier E

Why Baseball Is Obsessed With the Book 'Thinking, Fast and Slow' - The New York Times - 0 views

  • In Teaford’s case, the scouting evaluation was predisposed to a mental shortcut called the representativeness heuristic, which was first defined by the psychologists Daniel Kahneman and Amos Tversky. In such cases, an assessment is heavily influenced by what is believed to be the standard or the ideal.
  • Kahneman, a professor emeritus at Princeton University and a winner of the Nobel Prize in economics in 2002, later wrote “Thinking, Fast and Slow,” a book that has become essential among many of baseball’s front offices and coaching staffs.
  • “Pretty much wherever I go, I’m bothering people, ‘Have you read this?’” said Mejdal, now an assistant general manager with the Baltimore Orioles.
  • ...12 more annotations...
  • There aren’t many explicit references to baseball in “Thinking, Fast and Slow,” yet many executives swear by it
  • “From coaches to front office people, some get back to me and say this has changed their life. They never look at decisions the same way.
  • A few, though, swear by it. Andrew Friedman, the president of baseball operations for the Dodgers, recently cited the book as having “a real profound impact,” and said he reflects back on it when evaluating organizational processes. Keith Law, a former executive for the Toronto Blue Jays, wrote the book “Inside Game” — an examination of bias and decision-making in baseball — that was inspired by “Thinking, Fast and Slow.”
  • “As the decision tree in baseball has changed over time, this helps all of us better understand why it needed to change,” Mozeliak wrote in an email. He said that was especially true when “working in a business that many decisions are based on what we see, what we remember, and what is intuitive to our thinking.”
  • The central thesis of Kahneman’s book is the interplay between each mind’s System 1 and System 2, which he described as a “psychodrama with two characters.”
  • System 1 is a person’s instinctual response — one that can be enhanced by expertise but is automatic and rapid. It seeks coherence and will apply relevant memories to explain events.
  • System 2, meanwhile, is invoked for more complex, thoughtful reasoning — it is characterized by slower, more rational analysis but is prone to laziness and fatigue.
  • Kahneman wrote that when System 2 is overloaded, System 1 could make an impulse decision, often at the expense of self-control
  • No area of baseball is more susceptible to bias than scouting, in which organizations aggregate information from disparate sources:
  • “The independent opinion aspect is critical to avoid the groupthink and be aware of momentum,”
  • Matt Blood, the director of player development for the Orioles, first read “Thinking, Fast and Slow” as a Cardinals area scout nine years ago and said that he still consults it regularly. He collaborated with a Cardinals analyst to develop his own scouting algorithm as a tripwire to mitigate bias
  • Mejdal himself fell victim to the trap of the representativeness heuristic when he started with the Cardinals in 2005
oliviaodon

How One Psychologist Is Tackling Human Biases in Science - 0 views

  • It’s likely that some researchers are consciously cherry-picking data to get their work published. And some of the problems surely lie with journal publication policies. But the problems of false findings often begin with researchers unwittingly fooling themselves: they fall prey to cognitive biases, common modes of thinking that lure us toward wrong but convenient or attractive conclusions.
  • Peer review seems to be a more fallible instrument—especially in areas such as medicine and psychology—than is often appreciated, as the emerging “crisis of replicability” attests.
  • Psychologists have shown that “most of our reasoning is in fact rationalization,” he says. In other words, we have already made the decision about what to do or to think, and our “explanation” of our reasoning is really a justification for doing what we wanted to do—or to believe—anyway. Science is of course meant to be more objective and skeptical than everyday thought—but how much is it, really?
  • ...10 more annotations...
  • A common response to this situation is to argue that, even if individual scientists might fool themselves, others have no hesitation in critiquing their ideas or their results, and so it all comes out in the wash: Science as a communal activity is self-correcting. Sometimes this is true—but it doesn’t necessarily happen as quickly or smoothly as we might like to believe.
  • The idea, says Nosek, is that researchers “write down in advance what their study is for and what they think will happen.” Then when they do their experiments, they agree to be bound to analyzing the results strictly within the confines of that original plan
  • He is convinced that the process and progress of science would be smoothed by bringing these biases to light—which means making research more transparent in its methods, assumptions, and interpretations
  • Surprisingly, Nosek thinks that one of the most effective solutions to cognitive bias in science could come from the discipline that has weathered some of the heaviest criticism recently for its error-prone and self-deluding ways: pharmacology.
  • Psychologist Brian Nosek of the University of Virginia says that the most common and problematic bias in science is “motivated reasoning”: We interpret observations to fit a particular idea.
  • Sometimes it seems surprising that science functions at all.
  • Whereas the falsification model of the scientific method championed by philosopher Karl Popper posits that the scientist looks for ways to test and falsify her theories—to ask “How am I wrong?”—Nosek says that scientists usually ask instead “How am I right?” (or equally, to ask “How are you wrong?”).
  • Statistics may seem to offer respite from bias through strength in numbers, but they are just as fraught.
  • Given that science has uncovered a dizzying variety of cognitive biases, the relative neglect of their consequences within science itself is peculiar. “I was aware of biases in humans at large,” says Hartgerink, “but when I first ‘learned’ that they also apply to scientists, I was somewhat amazed, even though it is so obvious.”
  • Nosek thinks that peer review might sometimes actively hinder clear and swift testing of scientific claims.
honordearlove

6 Cognitive Biases That Make Politics Irrational - 0 views

  • In turn, these seemingly irrational flaws in judgement can lead to perpetual distortion, inaccurate judgement, and illogical interpretation -- all of which are key ingredients in the widening of cultural rifts, the deepening of global disparity gaps, and the general intensifying of political upheavals.
  • forming culturally cohesive social circles based upon similar viewpoints, and unconsciously referencing only those perspectives which reaffirm our deeply entrenched beliefs
  • innate tribal desire to be socially accepted, we tend to favour the thoughts, ideals, and sentiments of those with whom we racially, culturally, and ethnocentrically identify most.
  • ...5 more annotations...
  • The repercussions of this bias confine us to the same routines, political parties, and economic strategies
  • our cognitive selective attention processes identify negative news as inherently important or profound
  • This is the sort of groupthink that convinces religious and political radicals they have greater support
  • This lack of self-control, where most of us would rather exchange serious pains in the not-too-distant future for menial pleasures in the moment, personifies the impulsive decision-making that has led to the financial meltdown, urban saturation, political corruption, and general slighting of imminent environmental cataclysms.
  • So while the majority of us may be prone to these errors in rational judgement, we can also be more aware of them. And who knows, if we can manage to re-rationalise how we think, act, and treat one another, perhaps our politics will follow suit.
Javier E

The decline effect and the scientific method : The New Yorker - 3 views

  • The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable.
  • This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.
  • ...39 more annotations...
  • If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe?
  • Schooler demonstrated that subjects shown a face and asked to describe it were much less likely to recognize the face when shown it later than those who had simply looked at it. Schooler called the phenomenon “verbal overshadowing.”
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time.
  • yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance.
  • Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for
  • Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments.
  • One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.”
  • Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process.
  • “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • For Simmons, the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals.
  • the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher.
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials.
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong.
  • “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies
  • That’s why Schooler argues that scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,”
  • The current “obsession” with replicability distracts from the real problem, which is faulty design.
  • “Every researcher should have to spell out, in advance, how many subjects they’re going to use, and what exactly they’re testing, and what constitutes a sufficient level of proof. We have the tools to be much more transparent about our experiments.”
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,”
  • scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand.
  • The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected
  • This suggests that the decline effect is actually a decline of illusion. While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that.
  • Many scientific theories continue to be considered true even after failing numerous experimental tests.
  • Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.)
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.)
  • The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe. ♦
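  •  
    The regression-to-the-mean and publication-bias mechanisms described in the excerpts above are easy to see in a toy simulation. The sketch below is my own illustration, not from the article; the true effect size, sample size, and significance cutoff are arbitrary assumptions. "Publishing" only the studies that clear a significance threshold inflates the reported effect, and straight replications of those same studies drift back toward the true value, a miniature decline effect.

    import numpy as np

    rng = np.random.default_rng(0)
    true_effect, n, n_studies = 0.2, 20, 10_000   # assumed toy values

    # Each row is one small study: n noisy observations of a weak true effect.
    estimates = rng.normal(true_effect, 1.0, (n_studies, n)).mean(axis=1)
    se = 1.0 / np.sqrt(n)

    # Journals "accept" only estimates that cross the significance bar.
    published = estimates[estimates / se > 1.96]

    # Independent replications of the published studies, same design.
    replication = rng.normal(true_effect, 1.0, (len(published), n)).mean(axis=1)

    print(f"true effect:               {true_effect:.2f}")
    print(f"mean published estimate:   {published.mean():.2f}")    # inflated
    print(f"mean replication estimate: {replication.mean():.2f}")  # declines toward the truth

    Nothing in this toy model involves fraud or even sloppy measurement; the selection filter alone produces the inflated-then-declining pattern.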
Javier E

We've Seen This Movie Before - NYTimes.com - 0 views

  • If the bad act is committed by a member of a group you wish to demonize, attribute it to a community or a religion and not to the individual. But if the bad act is committed by someone whose profile, interests and agendas are uncomfortably close to your own, detach the malefactor from everything that is going on or is in the air (he came from nowhere) and characterize him as a one-off, non-generalizable, sui generis phenomenon.
  • The only thing more breathtaking than the effrontery of the move is the ease with which so many fall in with it. I guess it’s because both those who perform it and those who eagerly consume it save themselves the trouble of serious thought.
  •  
    Attribution bias at work.
Javier E

Why College Graduates Are Irrationally Optimistic - NYTimes.com - 0 views

  • Because of the power of optimism, enhancing graduates’ faith in the American dream by presenting them with rare examples as proof may be just what the doctor ordered. Their hopes may not be fully realized, but they will be more successful, healthier and happier if they hold on to positively biased expectations.
  • Whether you are 9 or 90, male or female, of African or European descent, you are likely to have an optimism bias. In fact, 80 percent of the world does. (Many believe optimism is unique to Americans; studies show the rest of the world is just as optimistic.)
  • In fact, the people who accurately predict the likelihood of coming events tend to be mildly depressed.
  • ...4 more annotations...
  • with the development of non-invasive brain imaging techniques, we have gathered evidence that suggests our brains are hard-wired to be unrealistically optimistic. When we learn what the future may hold, our neurons efficiently encode unexpectedly good information, but fail to incorporate information that is unexpectedly bad.
  • Underestimating risk makes us less likely to practice safe sex, save for retirement, buy insurance or undergo medical screenings.
  • Take the financial crisis of 2008. Each investor, homeowner, banker or economic regulator expected slightly better profits than were realistically warranted. On its own, each bias would not have created huge losses. Yet when combined in one market they produced a giant financial bubble that did just that.
  • The optimal solution then? Believe you will live a long healthy life, but go for frequent medical screenings. Aspire to write the next “Harry Potter” series, but have a safety net in place too.
caelengrubb

How to read the news like a scientist | - 0 views

  • “In present times, our risk of being fooled is especially high,” she says. There are two main factors at play: “Disinformation spreads like wildfire in social media,” she adds, “and when it comes to news reporting, sometimes it is more important for journalists to be fast than accurate.”
  • Scientists labor under a burden of proof. They must conduct experiments and collect data under controlled conditions to arrive at their conclusions — and be ready to defend their findings with facts, not emotions.
  • 1. Cultivate your skepticism.
  • ...15 more annotations...
  • When you learn a new piece of information through social media, think to yourself: “This may be true, but it also may be false,”
  • 2. Find out who is making the claim.
  • When you encounter a new claim, look for conflicts of interest. Ask: Do they stand to profit from what they say? Are they affiliated with an organization that could be swaying them? Two other questions to consider: What makes the writer or speaker qualified to comment on the topic? What statements have they made in the past?
  • 3. Watch out for the halo effect.
  • The halo effect, says Frans, “is a cognitive bias that makes our feeling towards someone affect how we judge their claims.
  • If we dislike someone, we are a lot more likely to disagree with them; if we like them, we are biased to agree.”
  • New scientific papers under review are read “blind,” with the authors’ names removed. That way, the experts who are deciding whether it’s worthy of publication don’t know which of their fellow scientists wrote it so they’ll be able to react free from pre-judgement or bias.
  • 4. Look at the evidence.
  • Before you act on or share a particularly surprising or enraging story, do a quick Google search — you might learn something even more interesting.
  • 5. Beware of the tendency to cherry-pick information.
  • Another human bias — confirmation bias — means we’re more likely to notice stories or facts that fit what we already believe (or want to believe).
  • When you search for information, you should not disregard the information that goes against whatever opinion you might have in advance.”
  • In your own life, look for friends and acquaintances on social media with alternative viewpoints. You don’t have to agree with them, or tolerate misinformation from them — but it’s healthy and balanced to have some variety in your information diet.
  • 6. Recognize the difference between correlation and causation.
  • However, she says, “there is no evidence supporting these claims, and it’s important to remember that just because two things increase simultaneously, this does not mean that they are causally linked to each other. Correlation does not equal causality.”
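  •  
    The correlation-versus-causation point above can be made concrete with a small sketch. This is my own illustration, not from the article, and the series names and numbers are invented. Two quantities that both simply grow over time (say, towers built and case counts) will correlate strongly even though they are unrelated, and the apparent relationship collapses once the shared trend is removed.

    import numpy as np

    rng = np.random.default_rng(1)
    months = np.arange(60)
    towers = 5 * months + rng.normal(0, 20, 60)    # hypothetical installations, trending up
    cases = 80 * months + rng.normal(0, 500, 60)   # hypothetical unrelated counts, also trending up

    raw_r = np.corrcoef(towers, cases)[0, 1]                          # high, driven purely by the trend
    detrended_r = np.corrcoef(np.diff(towers), np.diff(cases))[0, 1]  # collapses once the trend is removed

    print(f"correlation of raw series:             {raw_r:.2f}")
    print(f"correlation of month-to-month changes: {detrended_r:.2f}")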
blythewallick

People show confirmation bias even about which way dots are moving -- ScienceDaily - 0 views

  • People have a tendency to interpret new information in a way that supports their pre-existing beliefs, a phenomenon known as confirmation bias.
  • Now, researchers reporting in Current Biology on September 13 have shown that people will do the same thing even when the decision they've made pertains to a choice that is rather less consequential: which direction a series of dots is moving and whether the average of a series of numbers is greater or less than 50.
  • "Confirmation biases have previously only been established in the domains of higher cognition or subjective preferences," for example in individuals' preferences for one consumer product or another, says Tobias Donner from University Medical Center Hamburg-Eppendorf (UKE), Germany. "It was rather striking for us to see that people displayed clear signs of confirmation bias when judging on sensory input that we expected to be subjectively neutral to them."
  • ...2 more annotations...
  • The experiments showed that participants, after making an initial call based on the first movie, were more likely to use subsequent evidence that was consistent with their initial choice to make a final judgment the second time around.
  • "Contrary to a common phrase, first impression does not have to be the last impression," Talluri says. "Such impressions, or choices, lead us to evaluate information in their favor. By acknowledging the fact that we selectively prioritize information agreeing with our previous choices, we could attempt to actively suppress this bias, at least in cases of critical significance, like evaluating job candidates or making policies that impact a large section of the society."
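  •  
    A rough way to see how this kind of choice-consistent weighting plays out is to simulate it. The sketch below is my own toy model, not the study's actual task or analysis; the evidence strength, number of samples, and discount factor are assumptions. An observer who down-weights second-half evidence that disagrees with an interim choice ends up agreeing with that first impression more often and is less accurate overall than one who weighs all the evidence equally.

    import numpy as np

    rng = np.random.default_rng(2)
    trials = 20_000
    # True motion direction is +1 on every trial; each sample is weak, noisy evidence.
    first = rng.normal(0.2, 1.0, (trials, 10))
    second = rng.normal(0.2, 1.0, (trials, 10))

    interim = np.sign(first.sum(axis=1))  # choice after the first half of the evidence

    unbiased_final = np.sign(first.sum(axis=1) + second.sum(axis=1))

    # Confirmation bias: second-half samples that disagree with the interim choice count for less.
    weights = np.where(np.sign(second) == interim[:, None], 1.0, 0.5)
    biased_final = np.sign(first.sum(axis=1) + (weights * second).sum(axis=1))

    print(f"accuracy, unbiased integration:      {np.mean(unbiased_final == 1):.2f}")
    print(f"accuracy, biased integration:        {np.mean(biased_final == 1):.2f}")
    print(f"biased final agrees with first call: {np.mean(biased_final == interim):.2f}")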
kushnerha

BBC - Future - The surprising downsides of being clever - 0 views

  • If ignorance is bliss, does a high IQ equal misery? Popular opinion would have it so. We tend to think of geniuses as being plagued by existential angst, frustration, and loneliness. Think of Virginia Woolf, Alan Turing, or Lisa Simpson – lone stars, isolated even as they burn their brightest. As Ernest Hemingway wrote: “Happiness in intelligent people is the rarest thing I know.”
  • Combing California’s schools for the creme de la creme, he selected 1,500 pupils with an IQ of 140 or more – 80 of whom had IQs above 170. Together, they became known as the “Termites”, and the highs and lows of their lives are still being studied to this day.
  • Termites’ average salary was twice that of the average white-collar job. But not all the group met Terman’s expectations – there were many who pursued more “humble” professions such as police officers, seafarers, and typists. For this reason, Terman concluded that “intellect and achievement are far from perfectly correlated”. Nor did their smarts endow personal happiness. Over the course of their lives, levels of divorce, alcoholism and suicide were about the same as the national average.
  • ...16 more annotations...
  • One possibility is that knowledge of your talents becomes something of a ball and chain. Indeed, during the 1990s, the surviving Termites were asked to look back at the events in their 80-year lifespan. Rather than basking in their successes, many reported that they had been plagued by the sense that they had somehow failed to live up to their youthful expectations.
  • The most notable, and sad, case concerns the maths prodigy Sufiah Yusof. Enrolled at Oxford University aged 12, she dropped out of her course before taking her finals and started waitressing. She later worked as a call girl, entertaining clients with her ability to recite equations during sexual acts.
  • Another common complaint, often heard in student bars and internet forums, is that smarter people somehow have a clearer vision of the world’s failings. Whereas the rest of us are blinkered from existential angst, smarter people lay awake agonising over the human condition or other people’s folly.
  • MacEwan University in Canada found that those with the higher IQ did indeed feel more anxiety throughout the day. Interestingly, most worries were mundane, day-to-day concerns, though; the high-IQ students were far more likely to be replaying an awkward conversation, than asking the “big questions”. “It’s not that their worries were more profound, but they are just worrying more often about more things,” says Penney. “If something negative happened, they thought about it more.”
  • seemed to correlate with verbal intelligence – the kind tested by word games in IQ tests, compared to prowess at spatial puzzles (which, in fact, seemed to reduce the risk of anxiety). He speculates that greater eloquence might also make you more likely to verbalise anxieties and ruminate over them. It’s not necessarily a disadvantage, though. “Maybe they were problem-solving a bit more than most people,” he says – which might help them to learn from their mistakes.
  • The harsh truth, however, is that greater intelligence does not equate to wiser decisions; in fact, in some cases it might make your choices a little more foolish.
  • we need to turn our minds to an age-old concept: “wisdom”. His approach is more scientific than it might at first sound. “The concept of wisdom has an ethereal quality to it,” he admits. “But if you look at the lay definition of wisdom, many people would agree it’s the idea of someone who can make good unbiased judgement.”
  • “my-side bias” – our tendency to be highly selective in the information we collect so that it reinforces our previous attitudes. The more enlightened approach would be to leave your assumptions at the door as you build your argument – but Stanovich found that smarter people are almost no more likely to do so than people with distinctly average IQs.
  • People who ace standard cognitive tests are in fact slightly more likely to have a “bias blind spot”. That is, they are less able to see their own flaws, even though they are quite capable of criticising the foibles of others. And they have a greater tendency to fall for the “gambler’s fallacy”
  • A tendency to rely on gut instincts rather than rational thought might also explain why a surprisingly high number of Mensa members believe in the paranormal; or why someone with an IQ of 140 is about twice as likely to max out their credit card.
  • “The people pushing the anti-vaccination meme on parents and spreading misinformation on websites are generally of more than average intelligence and education.” Clearly, clever people can be dangerously, and foolishly, misguided.
  • spent the last decade building tests for rationality, and he has found that fair, unbiased decision-making is largely independent of IQ.
  • Crucially, Grossmann found that IQ was not related to any of these measures, and certainly didn’t predict greater wisdom. “People who are very sharp may generate, very quickly, arguments [for] why their claims are the correct ones – but may do it in a very biased fashion.”
  • employers may well begin to start testing these abilities in place of IQ; Google has already announced that it plans to screen candidates for qualities like intellectual humility, rather than sheer cognitive prowess.
  • He points out that we often find it easier to leave our biases behind when we consider other people, rather than ourselves. Along these lines, he has found that simply talking through your problems in the third person (“he” or “she”, rather than “I”) helps create the necessary emotional distance, reducing your prejudices and leading to wiser arguments.
  • If you’ve been able to rest on the laurels of your intelligence all your life, it could be very hard to accept that it has been blinding your judgement. As Socrates had it: the wisest person really may be the one who can admit he knows nothing.
criscimagnael

Can Forensic Science Be Trusted? - The Atlantic - 0 views

  • When asked, years later, why she had failed to photograph what she said she’d seen on the enhanced bedsheet, Yezzo replied, “This is one time that I didn’t manage to get it soon enough.” She added: “Operator error.”
  • The words were deployed as definitive by prosecutors—“the evidence is uncontroverted by the scientist, totally uncontroverted”
  • Michael Donnelly, now a justice on the Ohio Supreme Court, did not preside over this case, but he has had ample exposure to the use of forensic evidence. “As a trial judge,” he told me, “I sat there for 14 years. And when forensics experts testified, the jury hung on their every word.”
  • ...10 more annotations...
  • Forensic science, which drives the plots of movies and television shows, is accorded great respect by the public. And in the proper hands, it can provide persuasive insight. But in the wrong hands, it can trap innocent people in a vise of seeming inerrancy—and it has done so far too often. What’s more, although some forensic disciplines, such as DNA analysis, are reliable, others have been shown to have serious limitations.
  • Yezzo is not like Annie Dookhan, a chemist in a Massachusetts crime laboratory who boosted her productivity by falsifying reports and by “dry labbing”—that is, reporting results without actually conducting any tests.
  • Nor is Yezzo like Michael West, a forensic odontologist who claimed that he could identify bite marks on a victim and then match those marks to a specific person.
  • The deeper issue with forensic science lies not in malfeasance or corruption—or utter incompetence—but in the gray area where Yezzo can be found. Her alleged personal problems are unusual: Only because of them did the details of her long career come to light.
  • to the point of alignment; how rarely an analyst’s skills are called into question in court; and how seldom the performance of crime labs is subjected to any true oversight.
  • More than half of those exonerated by post-conviction DNA testing had been wrongly convicted based on flawed forensic evidence.
  • The quality of the work done in crime labs is almost never audited.
  • Even the best forensic scientists can fall prey to unintentional bias.
  • Study after study has demonstrated the power of cognitive bias.
  • Cognitive bias can of course affect anyone, in any circumstance—but it is particularly dangerous in a criminal-justice system where forensic scientists have wide latitude as well as some incentive to support the views of prosecutors and the police.
Javier E

History News Network | History Gets Into Bed with Psychology, and It's a Happy Match - 0 views

  • The fact that many of our self-protective delusions are built into the way the brain works is no justification for not trying to override them. Knowing how dissonance works helps us identify our own inclinations to perpetuate errors -- and protect ourselves from those who can’t. Or won’t.
  • at last, history has gotten into bed with psychological science, and it’s a happy match. History gives us the data of, in Barbara Tuchman’s splendid words, our march of folly -- repeated examples of human beings unable and unwilling to learn from mistakes, let alone to admit them. Cognitive science shows us why
  • Our brains, which have allowed us to travel into outer space, have a whole bunch of design flaws, which is why we have so much breathtaking bumbling here on Earth.
  • ...3 more annotations...
  • Of the many built-in biases in human thought, three have perhaps the greatest consequences for our own history and that of nations: the belief that we see things as they really are, rather than as we wish them to be; the belief that we are better, kinder, smarter, and more ethical than average; and the confirmation bias, which sees to it that we notice, remember, and accept information that confirms our beliefs -- and overlook, forget, and discount information that disconfirms our beliefs.
  • The great motivational theory that accommodates all of these biases is cognitive dissonance, developed by Leon Festinger in 1957 and further refined and transformed into a theory of self-justification by his student (and later my coauthor and friend) Elliot Aronson. The need to reduce dissonance is the key mechanism that underlies the reluctance to be wrong, to change our minds, to admit serious mistakes, and to be unwilling to accept unwelcome information
  • The greater the dissonance between who we are and the mistake we made or the cruelty we committed, the greater the need to justify the mistake, the crime, the villainy, instead of admitting and rectifying it