
TOK Friends: Group items tagged "cognitive biases"

Javier E

Cognitive Biases and the Human Brain - The Atlantic - 1 views

  • Present bias shows up not just in experiments, of course, but in the real world. Especially in the United States, people egregiously undersave for retirement—even when they make enough money to not spend their whole paycheck on expenses, and even when they work for a company that will kick in additional funds to retirement plans when they contribute.
  • When people hear the word bias, many if not most will think of either racial prejudice or news organizations that slant their coverage to favor one political position over another. Present bias, by contrast, is an example of cognitive bias—the collection of faulty ways of thinking that is apparently hardwired into the human brain. The collection is large. Wikipedia’s “List of cognitive biases” contains 185 entries, from actor-observer bias (“the tendency for explanations of other individuals’ behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation … and for explanations of one’s own behaviors to do the opposite”) to the Zeigarnik effect (“uncompleted or interrupted tasks are remembered better than completed ones”).
  • If I had to single out a particular bias as the most pervasive and damaging, it would probably be confirmation bias. That’s the effect that leads us to look for evidence confirming what we already think or suspect, to view facts and ideas we encounter as further confirmation, and to discount or ignore any piece of evidence that seems to support an alternate view
  • ...48 more annotations...
  • Confirmation bias shows up most blatantly in our current political divide, where each side seems unable to allow that the other side is right about anything.
  • The whole idea of cognitive biases and faulty heuristics—the shortcuts and rules of thumb by which we make judgments and predictions—was more or less invented in the 1970s by Amos Tversky and Daniel Kahneman
  • Tversky died in 1996. Kahneman won the 2002 Nobel Prize in Economics for the work the two men did together, which he summarized in his 2011 best seller, Thinking, Fast and Slow. Another best seller, last year’s The Undoing Project, by Michael Lewis, tells the story of the sometimes contentious collaboration between Tversky and Kahneman.
  • Another key figure in the field is the University of Chicago economist Richard Thaler. One of the biases he’s most linked with is the endowment effect, which leads us to place an irrationally high value on our possessions.
  • In an experiment conducted by Thaler, Kahneman, and Jack L. Knetsch, half the participants were given a mug and then asked how much they would sell it for. The average answer was $5.78. The rest of the group said they would spend, on average, $2.21 for the same mug. This flew in the face of classic economic theory, which says that at a given time and among a certain population, an item has a market value that does not depend on whether one owns it or not. Thaler won the 2017 Nobel Prize in Economics.
  • “The question that is most often asked about cognitive illusions is whether they can be overcome. The message … is not encouraging.”
  • that’s not so easy in the real world, when we’re dealing with people and situations rather than lines. “Unfortunately, this sensible procedure is least likely to be applied when it is needed most,” Kahneman writes. “We would all like to have a warning bell that rings loudly whenever we are about to make a serious error, but no such bell is available.”
  • At least with the optical illusion, our slow-thinking, analytic mind—what Kahneman calls System 2—will recognize a Müller-Lyer situation and convince itself not to trust the fast-twitch System 1’s perception
  • Kahneman and others draw an analogy based on an understanding of the Müller-Lyer illusion, two parallel lines with arrows at each end. One line’s arrows point in; the other line’s arrows point out. Because of the direction of the arrows, the latter line appears shorter than the former, but in fact the two lines are the same length.
  • Because biases appear to be so hardwired and inalterable, most of the attention paid to countering them hasn’t dealt with the problematic thoughts, judgments, or predictions themselves
  • Is it really impossible, however, to shed or significantly mitigate one’s biases? Some studies have tentatively answered that question in the affirmative.
  • what if the person undergoing the de-biasing strategies was highly motivated and self-selected? In other words, what if it was me?
  • Over an apple pastry and tea with milk, he told me, “Temperament has a lot to do with my position. You won’t find anyone more pessimistic than I am.”
  • I met with Kahneman
  • “I see the picture as unequal lines,” he said. “The goal is not to trust what I think I see. To understand that I shouldn’t believe my lying eyes.” That’s doable with the optical illusion, he said, but extremely difficult with real-world cognitive biases.
  • In this context, his pessimism relates, first, to the impossibility of effecting any changes to System 1—the quick-thinking part of our brain and the one that makes mistaken judgments tantamount to the Müller-Lyer line illusion
  • The most effective check against them, as Kahneman says, is from the outside: Others can perceive our errors more readily than we can.
  • “slow-thinking organizations,” as he puts it, can institute policies that include the monitoring of individual decisions and predictions. They can also require procedures such as checklists and “premortems,”
  • A premortem attempts to counter optimism bias by requiring team members to imagine that a project has gone very, very badly and write a sentence or two describing how that happened. Conducting this exercise, it turns out, helps people think ahead.
  • “My position is that none of these things have any effect on System 1,” Kahneman said. “You can’t improve intuition.
  • Perhaps, with very long-term training, lots of talk, and exposure to behavioral economics, what you can do is cue reasoning, so you can engage System 2 to follow rules. Unfortunately, the world doesn’t provide cues. And for most people, in the heat of argument the rules go out the window.
  • Kahneman describes an even earlier Nisbett article that showed subjects’ disinclination to believe statistical and other general evidence, basing their judgments instead on individual examples and vivid anecdotes. (This bias is known as base-rate neglect.)
  • over the years, Nisbett had come to emphasize in his research and thinking the possibility of training people to overcome or avoid a number of pitfalls, including base-rate neglect, fundamental attribution error, and the sunk-cost fallacy.
  • Nisbett’s second-favorite example is that economists, who have absorbed the lessons of the sunk-cost fallacy, routinely walk out of bad movies and leave bad restaurant meals uneaten.
  • When Nisbett asks the same question of students who have completed the statistics course, about 70 percent give the right answer. He believes this result shows, pace Kahneman, that the law of large numbers can be absorbed into System 2—and maybe into System 1 as well, even when there are minimal cues.
  • about half give the right answer: the law of large numbers, which holds that outlier results are much more frequent when the sample size (at bats, in this case) is small. Over the course of the season, as the number of at bats increases, regression to the mean is inevitable. (A short simulation after this list of annotations illustrates the point.)
  • When Nisbett has to give an example of his approach, he usually brings up the baseball-phenom survey. This involved telephoning University of Michigan students on the pretense of conducting a poll about sports, and asking them why there are always several Major League batters with .450 batting averages early in a season, yet no player has ever finished a season with an average that high.
  • we’ve tested Michigan students over four years, and they show a huge increase in ability to solve problems. Graduate students in psychology also show a huge gain.”
  • “I know from my own research on teaching people how to reason statistically that just a few examples in two or three domains are sufficient to improve people’s reasoning for an indefinitely large number of events.”
  • Nisbett suggested another factor: “You and Amos specialized in hard problems for which you were drawn to the wrong answer. I began to study easy problems, which you guys would never get wrong but untutored people routinely do … Then you can look at the effects of instruction on such easy problems, which turn out to be huge.”
  • Nisbett suggested that I take “Mindware: Critical Thinking for the Information Age,” an online Coursera course in which he goes over what he considers the most effective de-biasing skills and concepts. Then, to see how much I had learned, I would take a survey he gives to Michigan undergraduates. So I did.
  • The course consists of eight lessons by Nisbett—who comes across on-screen as the authoritative but approachable psych professor we all would like to have had—interspersed with some graphics and quizzes. I recommend it. He explains the availability heuristic this way: “People are surprised that suicides outnumber homicides, and drownings outnumber deaths by fire. People always think crime is increasing” even if it’s not.
  • When I finished the course, Nisbett sent me the survey he and colleagues administer to Michigan undergrads
  • It contains a few dozen problems meant to measure the subjects’ resistance to cognitive biases
  • I got it right. Indeed, when I emailed my completed test, Nisbett replied, “My guess is that very few if any UM seniors did as well as you. I’m sure at least some psych students, at least after 2 years in school, did as well. But note that you came fairly close to a perfect score.”
  • Nevertheless, I did not feel that reading Mindware and taking the Coursera course had necessarily rid me of my biases
  • For his part, Nisbett insisted that the results were meaningful. “If you’re doing better in a testing context,” he told me, “you’ll jolly well be doing better in the real world.”
  • The New York–based NeuroLeadership Institute offers organizations and individuals a variety of training sessions, webinars, and conferences that promise, among other things, to use brain science to teach participants to counter bias. This year’s two-day summit will be held in New York next month; for $2,845, you could learn, for example, “why are our brains so bad at thinking about the future, and how do we do it better?”
  • Philip E. Tetlock, a professor at the University of Pennsylvania’s Wharton School, and his wife and research partner, Barbara Mellers, have for years been studying what they call “superforecasters”: people who manage to sidestep cognitive biases and predict future events with far more accuracy than the pundits
  • One of the most important ingredients is what Tetlock calls “the outside view.” The inside view is a product of fundamental attribution error, base-rate neglect, and other biases that are constantly cajoling us into resting our judgments and predictions on good or vivid stories instead of on data and statistics
  • In 2006, seeking to prevent another mistake of that magnitude, the U.S. government created the Intelligence Advanced Research Projects Activity (iarpa), an agency designed to use cutting-edge research and technology to improve intelligence-gathering and analysis. In 2011, iarpa initiated a program, Sirius, to fund the development of “serious” video games that could combat or mitigate what were deemed to be the six most damaging biases: confirmation bias, fundamental attribution error, the bias blind spot (the feeling that one is less biased than the average person), the anchoring effect, the representativeness heuristic, and projection bias (the assumption that everybody else’s thinking is the same as one’s own).
  • most promising are a handful of video games. Their genesis was in the Iraq War
  • Together with collaborators who included staff from Creative Technologies, a company specializing in games and other simulations, and Leidos, a defense, intelligence, and health research company that does a lot of government work, Morewedge devised Missing. Some subjects played the game, which takes about three hours to complete, while others watched a video about cognitive bias. All were tested on bias-mitigation skills before the training, immediately afterward, and then finally after eight to 12 weeks had passed.
  • “The literature on training suggests books and classes are fine entertainment but largely ineffectual. But the game has very large effects. It surprised everyone.”
  • he said he saw the results as supporting the research and insights of Richard Nisbett. “Nisbett’s work was largely written off by the field, the assumption being that training can’t reduce bias,
  • even the positive results reminded me of something Daniel Kahneman had told me. “Pencil-and-paper doesn’t convince me,” he said. “A test can be given even a couple of years later. But the test cues the test-taker. It reminds him what it’s all about.”
  • Morewedge told me that some tentative real-world scenarios along the lines of Missing have shown “promising results,” but that it’s too soon to talk about them.
  • In the future, I will monitor my thoughts and reactions as best I can
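
The baseball-phenom annotations above turn on a single statistical point: extreme batting averages are common in small samples and all but disappear in large ones. The short simulation below is a minimal sketch of that point, not something from the article; the .270 true hitting probability, the 20 early-season at bats, and the 500 full-season at bats are illustrative assumptions.

```python
import random

# Illustrative assumptions (not from the article): every batter has a true
# hitting probability of .270, an "early season" sample is 20 at bats,
# and a full season is 500 at bats.
TRUE_AVG = 0.270
EARLY_AT_BATS = 20
SEASON_AT_BATS = 500
N_BATTERS = 200

def batting_average(at_bats: int) -> float:
    """Simulate one batter's average over a given number of at bats."""
    hits = sum(random.random() < TRUE_AVG for _ in range(at_bats))
    return hits / at_bats

random.seed(0)
early = [batting_average(EARLY_AT_BATS) for _ in range(N_BATTERS)]
season = [batting_average(SEASON_AT_BATS) for _ in range(N_BATTERS)]

# Outlier averages (.450 or better) show up in the small early-season samples
# but essentially never over a full season -- the law of large numbers at work.
print("batters at .450+ after 20 at bats: ", sum(a >= 0.450 for a in early))
print("batters at .450+ after 500 at bats:", sum(a >= 0.450 for a in season))
```

Typically a dozen or so of the 200 simulated .270 hitters top .450 over 20 at bats, while none come close over 500, which is the regression to the mean the survey question asks students to explain.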
caelengrubb

Believing in Overcoming Cognitive Biases | Journal of Ethics | American Medical Associa... - 0 views

  • Cognitive biases contribute significantly to diagnostic and treatment errors
  • A 2016 review of their roles in decision making lists 4 domains of concern for physicians: gathering and interpreting evidence, taking action, and evaluating decisions
  • Confirmation bias is the selective gathering and interpretation of evidence consistent with current beliefs and the neglect of evidence that contradicts them.
  • ...14 more annotations...
  • It can occur when a physician refuses to consider alternative diagnoses once an initial diagnosis has been established, despite contradicting data, such as lab results. This bias leads physicians to see what they want to see
  • Anchoring bias is closely related to confirmation bias and comes into play when interpreting evidence. It refers to physicians’ practices of prioritizing information and data that support their initial impressions, even when first impressions are wrong
  • When physicians move from deliberation to action, they are sometimes swayed by emotional reactions rather than rational deliberation about risks and benefits. This is called the affect heuristic, and, while heuristics can often serve as efficient approaches to problem solving, they can sometimes lead to bias
  • Further down the treatment pathway, outcomes bias can come into play. This bias refers to the practice of believing that good or bad results are always attributable to prior decisions, even when there is no valid reason to do so
  • The dual-process theory, a cognitive model of reasoning, can be particularly relevant in matters of clinical decision making
  • This theory is based on the argument that we use 2 different cognitive systems, intuitive and analytical, when reasoning. The former is quick and uses information that is readily available; the latter is slower and more deliberate.
  • Consideration should be given to the difficulty physicians face in employing analytical thinking exclusively. Beyond constraints of time, information, and resources, many physicians are also likely to be sleep deprived, work in an environment full of distractions, and be required to respond quickly while managing heavy cognitive loads
  • Simply increasing physicians’ familiarity with the many types of cognitive biases—and how to avoid them—may be one of the best strategies to decrease bias-related errors
  • The same review suggests that cognitive forcing strategies may also have some success in improving diagnostic outcomes
  • Afterwards, the resident physicians were debriefed on both case-specific details and on cognitive forcing strategies, interviewed, and asked to complete a written survey. The results suggested that resident physicians further along in their training (ie, postgraduate year three) gained more awareness of cognitive strategies than resident physicians in earlier years of training, suggesting that this tool could be more useful after a certain level of training has been completed
  • A 2013 study examined the effect of a 3-part, 1-year curriculum on recognition and knowledge of cognitive biases and debiasing strategies in second-year residents
  • Cognitive biases in clinical practice have a significant impact on care, often in negative ways. They sometimes manifest as physicians seeing what they want to see rather than what is actually there. Or they come into play when physicians make snap decisions and then prioritize evidence that supports their conclusions, as opposed to drawing conclusions from evidence
  • Fortunately, cognitive psychology provides insight into how to prevent biases. Guided reflection and cognitive forcing strategies deflect bias through close examination of our own thinking processes.
  • During medical education and consistently thereafter, we must provide physicians with a full appreciation of the cost of biases and the potential benefits of combatting them.
katedriscoll

Frontiers | A Neural Network Framework for Cognitive Bias | Psychology - 0 views

  • Human decision-making shows systematic simplifications and deviations from the tenets of rationality (‘heuristics’) that may lead to suboptimal decisional outcomes (‘cognitive biases’). There are currently three prevailing theoretical perspectives on the origin of heuristics and cognitive biases: a cognitive-psychological, an ecological and an evolutionary perspective. However, these perspectives are mainly descriptive and none of them provides an overall explanatory framework for the underlying mechanisms of cognitive biases. To enhance our understanding of cognitive heuristics and biases we propose a neural network framework for cognitive biases, which explains why our brain systematically tends to default to heuristic (‘Type 1’) decision making. We argue that many cognitive biases arise from intrinsic brain mechanisms that are fundamental for the working of biological neural networks. To substantiate our viewpoint, we discern and explain four basic neural network principles: (1) Association, (2) Compatibility, (3) Retainment, and (4) Focus. These principles are inherent to (all) neural networks which were originally optimized to perform concrete biological, perceptual, and motor functions. They form the basis for our inclinations to associate and combine (unrelated) information, to prioritize information that is compatible with our present state (such as knowledge, opinions, and expectations), to retain given information that sometimes could better be ignored, and to focus on dominant information while ignoring relevant information that is not directly activated. The supposed mechanisms are complementary and not mutually exclusive. For different cognitive biases they may all contribute in varying degrees to distortion of information. The present viewpoint not only complements the earlier three viewpoints, but also provides a unifying and binding framework for many cognitive bias phenomena.
  • The cognitive-psychological (or heuristics and biases) perspective (Evans, 2008; Kahneman and Klein, 2009) attributes cognitive biases to limitations in the available data and in the human information processing capacity (Simon, 1955; Broadbent, 1958; Kahneman, 1973, 2003; Norman and Bobrow, 1975)
katedriscoll

Cognitive Biases: What They Are and How They Affect People - Effectiviology - 0 views

  • A cognitive bias is a systematic pattern of deviation from rationality, which occurs due to the way our cognitive system works. Accordingly, cognitive biases cause us to be irrational in the way we search for, evaluate, interpret, judge, use, and remember information, as well as in the way we make decisions.
  • Cognitive biases affect every area of our life, from how we form our memories, to how we shape our beliefs, and to how we form relationships with other people. In doing so, they can lead to both relatively minor issues, such as forgetting a small detail from a past event, as well as to major ones, such as choosing to avoid an important medical treatment that could save our life. Because cognitive biases can have such a powerful and pervasive influence on ourselves and on others, it’s important to understand them. As such, in the following article you will learn more about cognitive biases, understand why we experience them, see what types of them exist, and find out what you can do in order to mitigate them successfully.
Adam Clark

The 12 cognitive biases that prevent you from being rational - 0 views

  • "The human brain is capable of 10^16 processes per second, which makes it far more powerful than any computer currently in existence. But that doesn't mean our brains don't have major limitations. The lowly calculator can do math thousands of times better than we can, and our memories are often less than useless - plus, we're subject to cognitive biases, those annoying glitches in our thinking that cause us to make questionable decisions and reach erroneous conclusions. Here are a dozen of the most common and pernicious cognitive biases that you need to know about."
katedriscoll

What are Cognitive Biases? | Interaction Design Foundation (IxDF) - 0 views

  • Cognitive bias is an umbrella term that refers to the systematic ways in which the context and framing of information influence individuals’ judgment and decision-making. There are many kinds of cognitive biases that influence individuals differently, but their common characteristic is that—in step with human individuality—they lead to judgment and decision-making that deviates from rational objectivity.
  • In some cases, cognitive biases make our thinking and decision-making faster and more efficient. The reason is that we do not stop to consider all available information, as our thoughts proceed down some channels instead of others. In other cases, however, cognitive biases can lead to errors for exactly the same reason. An example is confirmation bias, where we tend to favor information that reinforces or confirms our pre-existing beliefs. For instance, if we believe that planes are dangerous, a handful of stories about plane crashes tend to be more memorable than millions of stories about safe, successful flights. Thus, the prospect of air travel equates to an avoidable risk of doom for a person inclined to think in this way, regardless of how much time has passed without news of an air catastrophe.
caelengrubb

Looking inward in an era of 'fake news': Addressing cognitive bias | YLAI Network - 0 views

  • In an era when everyone seems eager to point out instances of “fake news,” it is easy to forget that knowing how we make sense of the news is as important as knowing how to spot incorrect or biased content
  • While the ability to analyze the credibility of a source and the veracity of its content remains an essential and often-discussed aspect of news literacy, it is equally important to understand how we as news consumers engage with and react to the information we find online, in our feeds, and on our apps
  • People process information they receive from the news in the same way they process all information around them — in the shortest, quickest way possible
  • ...11 more annotations...
  • When we consider how we engage with the news, some shortcuts we may want to pay close attention to, and reflect carefully on, are cognitive biases.
  • In fact, without these heuristics, it would be impossible for us to process all the information we receive daily. However, the use of these shortcuts can lead to “blind spots,” or unintentional ways we respond to information that can have negative consequences for how we engage with, digest, and share the information we encounter
  • These shortcuts, also called heuristics, streamline our problem-solving process and help us make relatively quick decisions.
  • Confirmation bias is the tendency to seek out and value information that confirms our pre-existing beliefs while discarding information that proves our ideas wrong.
  • Cognitive biases are best described as glitches in how we process information
  • Echo chamber effect refers to a situation in which we are primarily exposed to information, people, events, and ideas that already align with our point of view.
  • Anchoring bias, also known as “anchoring,” refers to people’s tendency to consider the first piece of information they receive about a topic as the most reliable
  • The framing effect is what happens when we make decisions based on how information is presented or discussed, rather than its actual substance.
  • Fluency heuristic occurs when a piece of information is deemed more valuable because it is easier to process or recall
  • Everyone operates under one or more cognitive biases. So, when searching for and reading the news (or other information), it is important to be aware of how these biases might shape how we make sense of this information.
  • In conclusion, we may not be able to control the content of the news — whether it is fake, reliable, or somewhere in between — but we can learn to be aware of how we respond to it and adjust our evaluations of the news accordingly.
ilanaprincilus06

You're more biased than you think - even when you know you're biased | News | The Guardian - 0 views

  • there’s plenty of evidence to suggest that we’re all at least somewhat subject to bias
  • Tell Republicans that some imaginary policy is a Republican one, as the psychologist Geoffrey Cohen did in 2003, and they’re much more likely to support it, even if it runs counter to Republican values. But ask them why they support it, and they’ll deny that party affiliation played a role. (Cohen found something similar for Democrats.)
  • those who saw the names were biased in favour of famous artists. But even though they acknowledged the risk of bias, when asked to assess their own objectivity, they didn’t view their judgments as any more biased as a result.
  • ...7 more annotations...
  • Even when the risk of bias was explicitly pointed out to them, people remained confident that they weren’t susceptible to it
  • “Even when people acknowledge that what they are about to do is biased,” the researchers write, “they still are inclined to see their resulting decisions as objective.”
  • why it’s often better for companies to hire people, or colleges to admit students, using objective checklists, rather than interviews that rely on gut feelings.
  • It turns out the bias also applies to bias. In other words, we’re convinced that we’re better than most at not falling victim to bias.
  • “used a strategy that they thought was biased,” the researchers note, “and thus they probably expected to feel some bias when using it. The absence of that feeling may have made them more confident in their objectivity.”
  • we have a cognitive bias to the effect that we’re uniquely immune to cognitive biases.
  • Bias spares nobody.
Javier E

Opinion | Do You Live in a 'Tight' State or a 'Loose' One? Turns Out It Matters Quite a... - 0 views

  • Political biases are omnipresent, but what we don’t fully understand yet is how they come about in the first place.
  • In 2014, Michele J. Gelfand, a professor of psychology at the Stanford Graduate School of Business, formerly at the University of Maryland, and Jesse R. Harrington, then a Ph.D. candidate, conducted a study designed to rank the 50 states on a scale of “tightness” and “looseness.”
  • titled “Tightness-Looseness Across the 50 United States,” the study calculated a catalog of measures for each state, including the incidence of natural disasters, disease prevalence, residents’ levels of openness and conscientiousness, drug and alcohol use, homelessness and incarceration rates.
  • ...64 more annotations...
  • Gelfand and Harrington predicted that “‘tight’ states would exhibit a higher incidence of natural disasters, greater environmental vulnerability, fewer natural resources, greater incidence of disease and higher mortality rates, higher population density, and greater degrees of external threat.”
  • The South dominated the tight states: Mississippi, Alabama, Arkansas, Oklahoma, Tennessee, Texas, Louisiana, Kentucky, South Carolina and North Carolina.
  • states in New England and on the West Coast were the loosest: California, Oregon, Washington, Maine, Massachusetts, Connecticut, New Hampshire and Vermont.
  • Cultural differences, Gelfand continued, “have a certain logic — a rationale that makes good sense,” noting that “cultures that have threats need rules to coordinate to survive (think about how incredibly coordinated Japan is in response to natural disasters).
  • “Rule Makers, Rule Breakers: How Tight and Loose Cultures Wire the World” in 2018, in which she described the results of a 2016 pre-election survey she and two colleagues had commissioned
  • The results were telling: People who felt the country was facing greater threats desired greater tightness. This desire, in turn, correctly predicted their support for Trump. In fact, desired tightness predicted support for Trump far better than other measures. For example, a desire for tightness predicted a vote for Trump with 44 times more accuracy than other popular measures of authoritarianism.
  • The 2016 election, Gelfand continued, “turned largely on primal cultural reflexes — ones that had been conditioned not only by cultural forces, but by a candidate who was able to exploit them.”
  • Gelfand said:Some groups have much stronger norms than others; they’re tight. Others have much weaker norms; they’re loose. Of course, all cultures have areas in which they are tight and loose — but cultures vary in the degree to which they emphasize norms and compliance with them.
  • In both 2016 and 2020, Donald Trump carried all 10 of the top “tight” states; Hillary Clinton and Joe Biden carried all 10 of the top “loose” states.
  • The tight-loose concept, Gelfand argued,is an important framework to understand the rise of President Donald Trump and other leaders in Poland, Hungary, Italy, and France,
  • cultures that don’t have a lot of threat can afford to be more permissive and loose.”
  • The gist is this: when people perceive threat — whether real or imagined, they want strong rules and autocratic leaders to help them survive
  • My research has found that within minutes of exposing study participants to false information about terrorist incidents, overpopulation, pathogen outbreaks and natural disasters, their minds tightened. They wanted stronger rules and punishments.
  • Gelfand writes that tightness encourages conscientiousness, social order and self-control on the plus side, along with close-mindedness, conventional thinking and cultural inertia on the minus side.
  • Looseness, Gelfand posits, fosters tolerance, creativity and adaptability, along with such liabilities as social disorder, a lack of coordination and impulsive behavior.
  • If liberalism and conservatism have historically played a complementary role, each checking the other to constrain extremism, why are the left and right so destructively hostile to each other now, and why is the contemporary political system so polarized?
  • Along the same lines, if liberals and conservatives hold differing moral visions, not just about what makes a good government but about what makes a good life, what turned the relationship between left and right from competitive to mutually destructive?
  • As a set, Niemi wrote, conservative binding values encompass the values oriented around group preservation, are associated with judgments, decisions, and interpersonal orientations that sacrifice the welfare of individuals
  • She cited research that found 47 percent of the most extreme conservatives strongly endorsed the view that “The world is becoming a more and more dangerous place,” compared to 19 percent of the most extreme liberals.
  • Conservatives and liberals, Niemi continued, see different things as threats — the nature of the threat and how it happens to stir one’s moral values (and their associated emotions) is a better clue to why liberals and conservatives react differently.
  • Unlike liberals, conservatives strongly endorse the binding moral values aimed at protecting groups and relationships. They judge transgressions involving personal and national betrayal, disobedience to authority, and disgusting or impure acts such as sexually or spiritually unchaste behavior as morally relevant and wrong.
  • Underlying these differences are competing sets of liberal and conservative moral priorities, with liberals placing more stress than conservatives on caring, kindness, fairness and rights — known among scholars as “individualizing values
  • conservatives focus more on loyalty, hierarchy, deference to authority, sanctity and a higher standard of disgust, known as “binding values.”
  • Niemi contended that sensitivity to various types of threat is a key factor in driving differences between the far left and far right.
  • For example, binding values are associated with Machiavellianism (e.g., status-seeking and lying, getting ahead by any means, 2013); victim derogation, blame, and beliefs that victims were causal contributors for a variety of harmful acts (2016, 2020); and a tendency to excuse transgressions of ingroup members with attributions to the situation rather than the person (2023).
  • Niemi cited a paper she and Liane Young, a professor of psychology at Boston College, published in 2016, “When and Why We See Victims as Responsible: The Impact of Ideology on Attitudes Toward Victims,” which tested responses of men and women to descriptions of crimes including sexual assaults and robberies.
  • We measured moral values associated with unconditionally prohibiting harm (“individualizing values”) versus moral values associated with prohibiting behavior that destabilizes groups and relationships (“binding values”: loyalty, obedience to authority, and purity)
  • Increased endorsement of binding values predicted increased ratings of victims as contaminated, increased blame and responsibility attributed to victims, increased perceptions of victims’ (versus perpetrators’) behaviors as contributing to the outcome, and decreased focus on perpetrators.
  • A central explanation typically offered for the current situation in American politics is that partisanship and political ideology have developed into strong social identities where the mass public is increasingly sorted — along social, partisan, and ideological lines.
  • What happened to people ecologically affected social-political developments, including the content of the rules people made and how they enforced them
  • Just as ecological factors differing from region to region over the globe produced different cultural values, ecological factors differed throughout the U.S. historically and today, producing our regional and state-level dimensions of culture and political patterns.
  • Joshua Hartshorne, who is also a professor of psychology at Boston College, took issue with the binding versus individualizing values theory as an explanation for the tendency of conservatives to blame victims:
  • I would guess that the reason conservatives are more likely to blame the victim has less to do with binding values and more to do with the just-world bias (the belief that good things happen to good people and bad things happen to bad people, therefore if a bad thing happened to you, you must be a bad person).
  • Belief in a just world, Hartshorne argued, is crucial for those seeking to protect the status quo: It seems psychologically necessary for anyone who wants to advocate for keeping things the way they are that the haves should keep on having, and the have-nots have got as much as they deserve. I don’t see how you could advocate for such a position while simultaneously viewing yourself as moral (and almost everyone believes that they themselves are moral) without also believing in the just world
  • Conversely, if you generally believe the world is not just, and you view yourself as a moral person, then you are likely to feel like you have an obligation to change things.
  • I asked Lene Aaroe, a political scientist at Aarhus University in Denmark, why the contemporary American political system is as polarized as it is now, given that the liberal-conservative schism is longstanding. What has happened to produce such intense hostility between left and right?
  • There is variation across countries in hostility between left and right. The United States is a particularly polarized case which calls for a contextual explanation.
  • I then asked Aaroe why surveys find that conservatives are happier than liberals. “Some research,” she replied, “suggests that experiences of inequality constitute a larger psychological burden to liberals because it is more difficult for liberals to rationalize inequality as a phenomenon with positive consequences.”
  • Numerous factors potentially influence the evolution of liberalism and conservatism and other social-cultural differences, including geography, topography, catastrophic events, and subsistence styles
  • Steven Pinker, a professor of psychology at Harvard, elaborated in an email on the link between conservatism and happiness:
  • t’s a combination of factors. Conservatives are likelier to be married, patriotic, and religious, all of which make people happier
  • They may be less aggrieved by the status quo, whereas liberals take on society’s problems as part of their own personal burdens. Liberals also place politics closer to their identity and striving for meaning and purpose, which is a recipe for frustration.
  • Some features of the woke faction of liberalism may make people unhappier: as Jon Haidt and Greg Lukianoff have suggested, wokeism is Cognitive Behavioral Therapy in reverse, urging upon people maladaptive mental habits such as catastrophizing, feeling like a victim of forces beyond one’s control, prioritizing emotions of hurt and anger over rational analysis, and dividing the world into allies and villains.
  • Why, I asked Pinker, would liberals and conservatives react differently — often very differently — to messages that highlight threat?
  • It may be liberals (or at least the social-justice wing) who are more sensitive to threats, such as white supremacy, climate change, and patriarchy; who may be likelier to moralize, seeing racism and transphobia in messages that others perceive as neutral; and being likelier to surrender to emotions like “harm” and “hurt.”
  • While liberals and conservatives, guided by different sets of moral values, may make agreement on specific policies difficult, that does not necessarily preclude consensus.
  • there are ways to persuade conservatives to support liberal initiatives and to persuade liberals to back conservative proposals:
  • While liberals tend to be more concerned with protecting vulnerable groups from harm and more concerned with equality and social justice than conservatives, conservatives tend to be more concerned with moral issues like group loyalty, respect for authority, purity and religious sanctity than liberals are. Because of these different moral commitments, we find that liberals and conservatives can be persuaded by quite different moral arguments
  • For example, we find that conservatives are more persuaded by a same-sex marriage appeal articulated in terms of group loyalty and patriotism, rather than equality and social justice.
  • Liberals who read the fairness argument were substantially more supportive of military spending than those who read the loyalty and authority argument.
  • We find support for these claims across six studies involving diverse political issues, including same-sex marriage, universal health care, military spending, and adopting English as the nation’s official language.”
  • In one test of persuadability on the right, Feinberg and Willer assigned some conservatives to read an editorial supporting universal health care as a matter of “fairness (health coverage is a basic human right)” or to read an editorial supporting health care as a matter of “purity (uninsured people means more unclean, infected, and diseased Americans).”
  • Conservatives who read the purity argument were much more supportive of health care than those who read the fairness case.
  • “political arguments reframed to appeal to the moral values of those holding the opposing political position are typically more effective
  • In “Conservative and Liberal Attitudes Drive Polarized Neural Responses to Political Content,” Willer, Yuan Chang Leong of the University of Chicago, Janice Chen of Johns Hopkins and Jamil Zaki of Stanford address the question of how partisan biases are encoded in the brain:
  • How do such biases arise in the brain? We measured the neural activity of participants watching videos related to immigration policy. Despite watching the same videos, conservative and liberal participants exhibited divergent neural responses. This “neural polarization” between groups occurred in a brain area associated with the interpretation of narrative content and intensified in response to language associated with risk, emotion, and morality. Furthermore, polarized neural responses predicted attitude change in response to the videos.
  • The four authors argue that their “findings suggest that biased processing in the brain drives divergent interpretations of political information and subsequent attitude polarization.” These results, they continue, “shed light on the psychological and neural underpinnings of how identical information is interpreted differently by conservatives and liberals.”
  • The authors used neural imaging to follow changes in the dorsomedial prefrontal cortex (known as DMPFC) as conservatives and liberals watched videos presenting strong positions, left and right, on immigration.
  • “For each video,” they write, participants with DMPFC activity time courses more similar to that of conservative-leaning participants became more likely to support the conservative position.
  • Conversely, those with DMPFC activity time courses more similar to that of liberal-leaning participants became more likely to support the liberal position. These results suggest that divergent interpretations of the same information are associated with increased attitude polarization.
  • Together, our findings describe a neural basis for partisan biases in processing political information and their effects on attitude change.
  • Describing their neuroimaging method, the authors point out that they searched for evidence of “neural polarization”: activity in the brain that diverges between people who hold liberal versus conservative political attitudes. Neural polarization was observed in the dorsomedial prefrontal cortex (DMPFC), a brain region associated with the interpretation of narrative content.
  • The question is whether the political polarization that we are witnessing now proves to be a core, encoded aspect of the human mind, difficult to overcome — as Leong, Chen, Zaki and Willer suggest
  • — or whether, with our increased knowledge of the neural basis of partisan and other biases, we will find more effective ways to manage these most dangerous of human predispositions.
Javier E

Our Dangerous Inability to Agree on What is TRUE | Risk: Reason and Reality | Big Think - 2 views

  • Given that human cognition is never the product of pure dispassionate reason, but a subjective interpretation of the facts based on our feelings and biases and instincts, when can we ever say that we know who is right and who is wrong, about anything? When can we declare a fact so established that it’s fair to say, without being called arrogant, that those who deny this truth don’t just disagree…that they’re just plain wrong.
  • This isn’t about matters of faith, or questions of ultimately unknowable things which by definition can not be established by fact. This is a question about what is knowable, and provable by careful objective scientific inquiry, a process which includes challenging skepticism rigorously applied precisely to establish what, beyond any reasonable doubt, is in fact true.
  • With enough careful investigation and scrupulously challenged evidence, we can establish knowable truths that are not just the product of our subjective motivated reasoning.
  • ...8 more annotations...
  • This matters for social animals like us, whose safety and very survival ultimately depend on our ability to coexist. Views that have more to do with competing tribal biases than objective interpretations of the evidence create destructive and violent conflict. Denial of scientifically established ‘truth’ causes all sorts of serious direct harms. Consider a few examples:
    • The widespread faith-based rejection of evolution feeds intense polarization.
    • Continued fear of vaccines is allowing nearly eradicated diseases to return.
    • Those who deny the evidence of the safety of genetically modified food are also denying the immense potential benefits of that technology to millions.
    • Denying the powerful evidence for climate change puts us all in serious jeopardy should that evidence prove to be true.
  • To address these harms, we need to understand why we often have trouble agreeing on what is true (what some have labeled science denialism). Social science has taught us that human cognition is innately, and inescapably, a process of interpreting the hard data about our world – its sights and sound and smells and facts and ideas - through subjective affective filters that help us turn those facts into the judgments and choices and behaviors that help us survive. The brain’s imperative, after all, is not to reason. Its job is survival, and subjective cognitive biases and instincts have developed to help us make sense of information in the pursuit of safety, not so that we might come to know ‘THE universal absolute truth
  • This subjective cognition is built-in, subconscious, beyond free will, and unavoidably leads to different interpretations of the same facts.
  • But here is a truth with which I hope we can all agree. Our subjective system of cognition can be dangerous.
  • It can produce perceptions that conflict with the evidence, what I call The Perception Gap, which can in turn produce profound harm
  • We need to recognize the greater threat that our subjective system of cognition can pose, and in the name of our own safety and the welfare of the society on which we depend, do our very best to rise above it or, when we can’t, account for this very real danger in the policies we adopt.
  • "Everyone engages in motivated reasoning, everyone screens out unwelcome evidence, no one is a fully rational actor. Sure. But when it comes to something with such enormous consequences to human welfare
  • I think it's fair to say we have an obligation to confront our own ideological priors. We have an obligation to challenge ourselves, to push ourselves, to be suspicious of conclusions that are too convenient, to be sure that we're getting it right.
jaxredd10

What Is Cognitive Bias? - 0 views

  • Because of this, subtle biases can creep in and influence the way you see and think about the world. The concept of cognitive bias was first introduced by researchers Amos Tversky and Daniel Kahneman in 1972. Since then, researchers have described a number of different types of biases that affect decision-making in a wide range of areas including social behavior, cognition, behavioral economics, education, management, healthcare, business, and finance.
  • People sometimes confuse cognitive biases with logical fallacies, but the two are not the same. A logical fallacy stems from an error in a logical argument, while a cognitive bias is rooted in thought processing errors often arising from problems with memory, attention, attribution, and other mental mistakes.
oliviaodon

How One Psychologist Is Tackling Human Biases in Science - 0 views

  • It’s likely that some researchers are consciously cherry-picking data to get their work published. And some of the problems surely lie with journal publication policies. But the problems of false findings often begin with researchers unwittingly fooling themselves: they fall prey to cognitive biases, common modes of thinking that lure us toward wrong but convenient or attractive conclusions.
  • Peer review seems to be a more fallible instrument—especially in areas such as medicine and psychology—than is often appreciated, as the emerging “crisis of replicability” attests.
  • Psychologists have shown that “most of our reasoning is in fact rationalization,” he says. In other words, we have already made the decision about what to do or to think, and our “explanation” of our reasoning is really a justification for doing what we wanted to do—or to believe—anyway. Science is of course meant to be more objective and skeptical than everyday thought—but how much is it, really?
  • ...10 more annotations...
  • A common response to this situation is to argue that, even if individual scientists might fool themselves, others have no hesitation in critiquing their ideas or their results, and so it all comes out in the wash: Science as a communal activity is self-correcting. Sometimes this is true—but it doesn’t necessarily happen as quickly or smoothly as we might like to believe.
  • The idea, says Nosek, is that researchers “write down in advance what their study is for and what they think will happen.” Then when they do their experiments, they agree to be bound to analyzing the results strictly within the confines of that original plan
  • He is convinced that the process and progress of science would be smoothed by bringing these biases to light—which means making research more transparent in its methods, assumptions, and interpretations
  • Psychologist Brian Nosek of the University of Virginia says that the most common and problematic bias in science is “motivated reasoning”: We interpret observations to fit a particular idea.
  • Surprisingly, Nosek thinks that one of the most effective solutions to cognitive bias in science could come from the discipline that has weathered some of the heaviest criticism recently for its error-prone and self-deluding ways: pharmacology.
  • Sometimes it seems surprising that science functions at all.
  • Whereas the falsification model of the scientific method championed by philosopher Karl Popper posits that the scientist looks for ways to test and falsify her theories—to ask “How am I wrong?”—Nosek says that scientists usually ask instead “How am I right?” (or equally, to ask “How are you wrong?”).
  • Statistics may seem to offer respite from bias through strength in numbers, but they are just as fraught.
  • Given that science has uncovered a dizzying variety of cognitive biases, the relative neglect of their consequences within science itself is peculiar. “I was aware of biases in humans at large,” says Hartgerink, “but when I first ‘learned’ that they also apply to scientists, I was somewhat amazed, even though it is so obvious.”
  • Nosek thinks that peer review might sometimes actively hinder clear and swift testing of scientific claims.
Javier E

Our Dangerous Inability to Agree on What is TRUE | Risk: Reason and Reality | Big Think - 1 views

  • Given that human cognition is never the product of pure dispassionate reason, but a subjective interpretation of the facts based on our feelings and biases and instincts, when can we ever say that we know who is right and who is wrong, about anything? When can we declare a fact so established that it’s fair to say, without being called arrogant, that those who deny this truth don’t just disagree…that they’re just plain wrong
  • This isn’t about matters of faith, or questions of ultimately unknowable things which by definition can not be established by fact. This is a question about what is knowable, and provable by careful objective scientific inquiry, a process which includes challenging skepticism rigorously applied precisely to establish what, beyond any reasonable doubt, is in fact true. The way evolution has been established
  • With enough careful investigation and scrupulously challenged evidence, we can establish knowable truths that are not just the product of our subjective motivated reasoning. We can apply our powers of reason and our ability to objectively analyze the facts and get beyond the point where what we 'know' is just an interpretation of the evidence through the subconscious filters of who we trust and our biases and instincts. We can get to the point where if someone wants to continue believe that the sun revolves around the earth, or that vaccines cause autism, or that evolution is a deceit, it is no longer arrogant - though it may still be provocative - to call those people wrong.
  • ...6 more annotations...
  • But here is a truth with which I hope we can all agree. Our subjective system of cognition can be dangerous. It can produce perceptions that conflict with the evidence, what I call The Perception Gap, which can in turn produce profound harm.
  • The Perception Gap can lead to disagreements that create destructive and violent social conflict, to dangerous personal choices that feel safe but aren’t, and to policies more consistent with how we feel than what is in fact in our best interest. The Perception Gap may in fact be potentially more dangerous than any individual risk we face.
  • We need to recognize the greater threat that our subjective system of cognition can pose, and in the name of our own safety and the welfare of the society on which we depend, do our very best to rise above it or, when we can’t, account for this very real danger in the policies we adopt.
  • we have an obligation to confront our own ideological priors. We have an obligation to challenge ourselves, to push ourselves, to be suspicious of conclusions that are too convenient, to be sure that we're getting it right.
  • subjective cognition is built-in, subconscious, beyond free will, and unavoidably leads to different interpretations of the same facts.
  • Views that have more to do with competing tribal biases than objective interpretations of the evidence create destructive and violent conflict.
oliviaodon

How scientists fool themselves - and how they can stop : Nature News & Comment - 1 views

  • In 2013, five years after he co-authored a paper showing that Democratic candidates in the United States could get more votes by moving slightly to the right on economic policy1, Andrew Gelman, a statistician at Columbia University in New York City, was chagrined to learn of an error in the data analysis. In trying to replicate the work, an undergraduate student named Yang Yang Hu had discovered that Gelman had got the sign wrong on one of the variables.
  • Gelman immediately published a three-sentence correction, declaring that everything in the paper's crucial section should be considered wrong until proved otherwise.
  • Reflecting today on how it happened, Gelman traces his error back to the natural fallibility of the human brain: “The results seemed perfectly reasonable,” he says. “Lots of times with these kinds of coding errors you get results that are just ridiculous. So you know something's got to be wrong and you go back and search until you find the problem. If nothing seems wrong, it's easier to miss it.”
  • ...6 more annotations...
  • This is the big problem in science that no one is talking about: even an honest person is a master of self-deception. Our brains evolved long ago on the African savannah, where jumping to plausible conclusions about the location of ripe fruit or the presence of a predator was a matter of survival. But a smart strategy for evading lions does not necessarily translate well to a modern laboratory, where tenure may be riding on the analysis of terabytes of multidimensional data. In today's environment, our talent for jumping to conclusions makes it all too easy to find false patterns in randomness, to ignore alternative explanations for a result or to accept 'reasonable' outcomes without question — that is, to ceaselessly lead ourselves astray without realizing it.
  • Failure to understand our own biases has helped to create a crisis of confidence about the reproducibility of published results
  • Although it is impossible to document how often researchers fool themselves in data analysis, says Ioannidis, findings of irreproducibility beg for an explanation. The study of 100 psychology papers is a case in point: if one assumes that the vast majority of the original researchers were honest and diligent, then a large proportion of the problems can be explained only by unconscious biases. “This is a great time for research on research,” he says. “The massive growth of science allows for a massive number of results, and a massive number of errors and biases to study. So there's good reason to hope we can find better ways to deal with these problems.”
  • Although the human brain and its cognitive biases have been the same for as long as we have been doing science, some important things have changed, says psychologist Brian Nosek, executive director of the non-profit Center for Open Science in Charlottesville, Virginia, which works to increase the transparency and reproducibility of scientific research. Today's academic environment is more competitive than ever. There is an emphasis on piling up publications with statistically significant results — that is, with data relationships in which a commonly used measure of statistical certainty, the p-value, is 0.05 or less. “As a researcher, I'm not trying to produce misleading results,” says Nosek. “But I do have a stake in the outcome.” And that gives the mind excellent motivation to find what it is primed to find.
  • Another reason for concern about cognitive bias is the advent of staggeringly large multivariate data sets, often harbouring only a faint signal in a sea of random noise. Statistical methods have barely caught up with such data, and our brain's methods are even worse, says Keith Baggerly, a statistician at the University of Texas MD Anderson Cancer Center in Houston. As he told a conference on challenges in bioinformatics last September in Research Triangle Park, North Carolina, "Our intuition when we start looking at 50, or hundreds of, variables sucks." (A short simulation after these annotations illustrates how easily pure noise clears the conventional p < 0.05 bar at that scale.)
  • One trap that awaits during the early stages of research is what might be called hypothesis myopia: investigators fixate on collecting evidence to support just one hypothesis; neglect to look for evidence against it; and fail to consider other explanations.
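
To make Baggerly's warning about high-dimensional data concrete, here is a minimal sketch in Python of how often pure noise clears the conventional p < 0.05 threshold when many variables are tested against a single outcome. The sample size, variable count, and random data are illustrative assumptions, not figures from any study discussed above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_samples = 100    # hypothetical subjects
n_variables = 200  # hypothetical candidate predictors, all pure noise

outcome = rng.normal(size=n_samples)
predictors = rng.normal(size=(n_samples, n_variables))

# Correlate each noise variable with the noise outcome and count how many
# reach the conventional significance threshold by chance alone.
false_positives = 0
for j in range(n_variables):
    r, p = stats.pearsonr(predictors[:, j], outcome)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_variables} pure-noise variables cleared p < 0.05")
# Expect roughly 5% of them (about 10 here) to look 'significant' by chance,
# which is why hypothesis myopia and uncorrected multiple testing are so costly.
```

Nothing in this sketch is specific to the studies above; it simply shows why, as Nosek and Ioannidis argue, honest intentions are not enough without procedural safeguards.
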
Javier E

COVID-19: Individually Rational, Collectively Disastrous - The Atlantic - 0 views

  • One major problem is that stopping the virus from spreading requires us to override our basic intuitions.
  • Three cognitive biases make it hard for us to avoid actions that put us in great collective danger.
  • 1. Misleading Feedback
  • ...14 more annotations...
  • some activities, including dangerous ones, provide negative feedback only rarely. When I am in a rush, I often cross the street at a red light. I understand intellectually that this is stupid, but I’ve never once seen evidence of my stupidity.
  • Exposure to COVID-19 works the same way. Every time you engage in a risky activity—like meeting up with your friends indoors—the world is likely to send you a signal that you made the right choice. I saw my pal and didn’t get sick. Clearly, I shouldn’t have worried so much about socializing!
  • Let’s assume, for example, that going to a large indoor gathering gives you a one in 20 chance of contracting COVID-19—a significant risk. Most likely, you’ll get away with it the first time. You’ll then infer that taking part in such gatherings is pretty safe, and will do so again. Eventually, you are highly likely to fall sick. (The short calculation after these annotations makes the cumulative risk explicit.)
  • 2. Individually Rational, Collectively Disastrous
  • We tend to think behavior that is justifiable on the individual level is also justifiable on the collective level, and vice versa. If eating the occasional sugary treat is fine for me, it’s fine for all of us. And if smoking indoors is bad for me, it’s bad for all of us.
  • The dynamics of contagion in a pandemic do not work like that
  • if everyone who isn’t at especially high risk held similar dinner parties, some percentage of these events would lead to additional infections. And because each newly infected person might spread the virus to others, everyone’s decision to hold a one-off dinner party would quickly lead to a significant spike in transmissions.
  • The dynamic here is reminiscent of classic collective-action problems. If you go to one dinner, you’ll likely be fine. But if everyone goes to one dinner, the virus will spread with such speed that your own chances of contracting COVID-19 will also rise precipitously.
  • 3. Dangers Are Hard to Recognize and Avoid
  • Many of the dangers we face in life are easy to spot—and we have, over many millennia, developed biological instincts and social conventions to avoid them
  • When we deal with an unaccustomed danger, such as a new airborne virus, we can’t rely on any of these protective mechanisms.
  • The virus is invisible. This makes it hard to spot or anticipate. We don’t see little viral particles floating through the air
  • In time, we can overcome these biases (at least to some extent).
  • Social disapprobation can help
  • We all should do what we can to identify the biases from which we suffer—and try to stop them from influencing our behavior.
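
The misleading-feedback point above reduces to simple arithmetic: a small per-event risk compounds over repeated events. Below is a minimal sketch of that calculation, assuming independent gatherings and the article's illustrative 1-in-20 risk per event; both are simplifying assumptions rather than epidemiological estimates.

```python
# Probability of at least one infection across repeated risky gatherings,
# assuming an independent, illustrative 1-in-20 risk per event.
per_event_risk = 1 / 20

for n_events in (1, 5, 10, 20):
    p_at_least_once = 1 - (1 - per_event_risk) ** n_events
    print(f"{n_events:2d} gatherings -> {p_at_least_once:.0%} chance of at least one infection")

# One gathering: 5%. Ten gatherings: about 40%. Twenty: about 64%.
# Each individual event still 'feels' safe afterwards, which is exactly the trap.
```

The same compounding logic drives the collective-action problem: one household's single dinner party barely changes its own odds, but thousands of such parties change everyone's.
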
caelengrubb

How Cognitive Bias Affects Your Business - 0 views

  • Human beings often act in irrational and unexpected ways when it comes to business decisions, money, and finance.
  • Behavioral finance tries to explain the difference between what economic theory predicts people will do and what they actually do in the heat of the moment. 
  • There are two main types of biases that people commit, causing them to deviate from rational decision-making: cognitive and emotional.
  • ...13 more annotations...
  • Cognitive errors result from incomplete information or the inability to analyze the information that is available. These cognitive errors can be classified as either belief perseverance or processing errors
  • Processing errors occur when an individual fails to manage and organize information properly, which can be due in part to the mental effort required to compute and analyze data.
  • Conservatism bias, where people emphasize original, pre-existing information over new data.
  • Base rate neglect is the opposite effect, whereby people put too little emphasis on the original information. (A worked Bayes calculation after these annotations shows how much neglecting the base rate can distort a simple estimate.)
  • Confirmation bias, where people seek information that affirms existing beliefs while discounting or discarding information that might contradict them.
  • Anchoring and Adjustment happens when somebody fixates on a target number, such as the result of a calculation or valuation.
  • Hindsight bias occurs when people perceive actual outcomes as reasonable and expected, but only after the fact.
  • Sample size neglect is an error made when people infer too much from a too-small sample size.
  • Mental accounting is when people earmark certain funds for certain goals and keep them separate. When this happens, the risk and reward of projects undertaken to achieve these goals are not considered as an overall portfolio and the effect of one on another is ignored.
  • Availability bias, or recency bias, skews perceived future probabilities based on memorable past events.
  • Framing bias is when a person will process the same information differently depending on how it is presented and received.
  • Cognitive errors in the way people process and analyze information can lead them to make irrational decisions which can negatively impact business or investing decisions
  • These information processing errors could have arisen to help primitive humans survive in a time before money or finance came into existence.
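
Base rate neglect is the error in this list that a worked calculation exposes most directly. The sketch below applies Bayes' theorem to invented numbers (a 1% base rate and a screen that flags 90% of true cases but also 10% of clean ones); none of the figures come from the article.

```python
# Worked Bayes example of base rate neglect, with illustrative numbers:
# how likely is a flagged company actually fraudulent?
base_rate = 0.01            # assumed: 1% of firms in this hypothetical market are fraudulent
sensitivity = 0.90          # assumed: the screen flags 90% of fraudulent firms
false_positive_rate = 0.10  # assumed: it also flags 10% of honest firms

# Bayes' theorem: P(fraud | flagged) = P(flagged | fraud) * P(fraud) / P(flagged)
p_flagged = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_fraud_given_flag = sensitivity * base_rate / p_flagged

print(f"P(fraud | flagged) = {p_fraud_given_flag:.1%}")
# About 8% -- far below the 90% figure that intuition latches onto when
# the 1% base rate is pushed aside.
```

The specific numbers matter less than the shape of the error: conservatism bias over-weights the original information, base rate neglect discards it, and a short calculation makes the gap between the two visible.
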
caelengrubb

Cognitive Bias and Public Health Policy During the COVID-19 Pandemic | Critical Care Me... - 0 views

  • As the coronavirus disease 2019 (COVID-19) pandemic abates in many countries worldwide, and a new normal phase arrives, critically assessing policy responses to this public health crisis may promote better preparedness for the next wave or the next pandemic
  • A key lesson is revealed by one of the earliest and most sizeable US federal responses to the pandemic: the investment of $3 billion to build more ventilators. These extra ventilators, even had they been needed, would likely have done little to improve population survival because of the high mortality among patients with COVID-19 who require mechanical ventilation and the diversion of clinicians away from more health-promoting endeavors.
  • Why are so many people distressed at the possibility that a patient in plain view—such as a person presenting to an emergency department with severe respiratory distress—would be denied an attempt at rescue because of a ventilator shortfall, but do not mount similarly impassioned concerns regarding failures to implement earlier, more aggressive physical distancing, testing, and contact tracing policies that would have saved far more lives?
  • ...12 more annotations...
  • These cognitive errors, which distract leaders from optimal policy making and citizens from taking steps to promote their own and others’ interests, cannot merely be ascribed to repudiations of science.
  • The first error that thwarts effective policy making during crises stems from what economists have called the “identifiable victim effect.” Humans respond more aggressively to threats to identifiable lives, ie, those that an individual can easily imagine being their own or belonging to people they care about (such as family members) or care for (such as a clinician’s patients) than to the hidden, “statistical” deaths reported in accounts of the population-level tolls of the crisis
  • Yet such views represent a second reason for the broad endorsement of policies that prioritize saving visible, immediately jeopardized lives: that humans are imbued with a strong and neurally mediated [3] tendency to predict outcomes that are systematically more optimistic than observed outcomes
  • A third driver of misguided policy responses is that humans are present biased, ie, people tend to prefer immediate benefits to even larger benefits in the future.
  • Even if the tendency to prioritize visibly affected individuals could be resisted, many people would still place greater value on saving a life today than a life tomorrow.
  • Similar psychology helps explain the reluctance of many nations to limit refrigeration and air conditioning, forgo fuel-inefficient transportation, and take other near-term steps to reduce the future effects of climate change
  • The fourth contributing factor is that virtually everyone is subject to omission bias, which involves the tendency to prefer that a harm occur by failure to take action rather than as a direct consequence of the actions that are taken
  • Although those who set policies for rationing ventilators and other scarce therapies do not intend the deaths of those who receive insufficient priority for these treatments, such policies nevertheless prevent clinicians from taking all possible steps to save certain lives.
  • An important goal of governance is to mitigate the effects of these and other biases on public policy and to effectively communicate the reasons for difficult decisions to the public. However, health systems’ routine use of wartime terminology of “standing up” and “standing down” intensive care units illustrates problematic messaging aimed at the need to address immediate danger
  • Second, had governments, health systems, and clinicians better understood the “identifiable victim effect,” they may have realized that promoting flattening the curve as a way to reduce pressure on hospitals and health care workers would be less effective than promoting early restaurant and retail store closures by saying “The lives you save when you close your doors include your own.”
  • Third, these leaders’ routine use of terms such as “nonpharmaceutical interventions” [9] portrays public health responses negatively by labeling them according to what they are not. Instead, support for heavily funding contact tracing could have been generated by communicating such efforts as “lifesaving.”
  • Fourth, although errors of human cognition are challenging to surmount, policy making, even in a crisis, occurs over a sufficient period to be meaningfully improved by deliberate efforts to counter untoward biases
huffem4

Infographic: 11 Cognitive Biases That Influence Political Outcomes - 1 views

  • when searching for facts, our own cognitive biases often get in the way.
  • The media, for example, can exploit our tendency to assign stereotypes to others by only providing catchy, surface-level information.
  • People exhibit confirmation bias when they seek information that only affirms their pre-existing beliefs. This can cause them to become overly rigid in their political opinions, even when presented with conflicting ideas or evidence.
  • ...2 more annotations...
  • In one experiment, participants chose to either support or oppose a given sociopolitical issue. They were then presented with evidence that was conflicting, affirming, or a combination of both. In all scenarios, participants were most likely to stick with their initial decisions. Of those presented with conflicting evidence, just one in five changed their stance. Furthermore, participants who maintained their initial positions became even more confident in the superiority of their decision—a testament to how influential confirmation bias can be.
  • Coverage bias, in the context of politics, is a form of media bias where certain politicians or topics are disproportionately covered.
mmckenziejr01

The Most Egregious Straw Men of 2016 | Inverse - 0 views

  • Fallacies are illicit shortcuts in reasoning, bad arguments that sound good but don’t actually make logical sense. Politicians and other public figures use them ad nauseam in speeches and debates in order to better capture the hearts and minds of their audience.
  • But of all the logical fallacies out there, one stands out as being particularly powerful and popular, especially in politics: the straw man.
  • But that’s the truth, and the truth is irrelevant when it comes to fallacies. Just the fact that Clinton mentioned the term open borders, combined with her more liberal stance on immigration, was enough for Trump to say that she’s in favor of totally open borders
  • ...5 more annotations...
  • The straw man fallacy involves the construction of a second argument that to some degree resembles, in a simplified or exaggerated way, the argument that your opponent is really making. It is much easier for you to attack that perverted point than it is to address the original point being made.
  • Clinton did not say she wanted to get rid of the Second Amendment in any sense. She had expressed support for expanding background checks and a possible assault weapons ban. Those things are reasonable (moderate, even), so it’s much simpler for Trump to ignore her argument and make one up for her
  • What are in large part reasonable measures taken to avoid unnecessary offences are recast by these Republicans as softness or attacks on people’s freedom of speech. By associating political correctness with these attributes, Republican politicians can ensure that their constituents remain hostile to the idea. Then, when they come across an argument in need of simplification, they can call back to that already-crafted association.
  • Sanders’s healthcare plan would have ended Medicare and the ACA as we know them, but only to replace them with a universal healthcare system. No one stood to lose their coverage, but the way Clinton was arguing, it looked like people might. By essentially leaving out half of Sanders’s argument, Clinton made a case against a fictional version of Sanders that seemed just real enough to fool the average voter.
  • Still, mostly everyone today understands that the Earth isn’t flat. They recognize that, as Scaramucci said, “science” got one wrong there. By equating these two concepts, he assumes any argument that recognizes climate change to also be one that claims the Earth is flat. For that reason, this particular brand of climate change denial wins the award for best (or maybe worst) straw man of 2016.
  •  
    Here are a few examples of candidates using straw man fallacies to try to win over voters during the 2016 election. This shows just how prevalent cognitive biases are in our everyday lives without us even noticing them.
mmckenziejr01

Forer effect - The Skeptic's Dictionary - Skepdic.com - 0 views

  • Forer effect
  • The Forer effect refers to the tendency of people to rate sets of statements as highly accurate for them personally even though the statements could apply to many people.
  • Forer gave a personality test to his students, ignored their answers, and gave each student the above evaluation. He asked them to evaluate the evaluation from 0 to 5, with "5" meaning the recipient felt the evaluation was an "excellent" assessment and "4" meaning the assessment was "good." The class average evaluation was 4.26. That was in 1948. The test has been repeated hundreds of times with psychology students and the average is still around 4.2 out of 5, or 84% accurate.
  • ...5 more annotations...
  • In short, Forer convinced people he could successfully read their character.
  • his personality analysis was taken from a newsstand astrology column and was presented to people without regard to their sun sign.
  • People tend to accept claims about themselves in proportion to their desire that the claims be true rather than in proportion to the empirical accuracy of the claims as measured by some non-subjective standard.
  • The Forer effect, however, only partially explains why so many people accept as accurate occult and pseudoscientific character assessment procedures
  • Favorable assessments are "more readily accepted as accurate descriptions of subjects' personalities than unfavorable" ones. But unfavorable claims are "more readily accepted when delivered by people with high perceived status than low perceived status."
  •  
    From the reading, the Forer effect seemed to be a good example of a couple of cognitive biases working together. The experiment and some of the findings are very interesting.