
Home/ TOK Friends/ Group items matching "biases" in title, tags, annotations or url

Javier E

Cognitive Biases and the Human Brain - The Atlantic - 1 views

  • Present bias shows up not just in experiments, of course, but in the real world. Especially in the United States, people egregiously undersave for retirement—even when they make enough money to not spend their whole paycheck on expenses, and even when they work for a company that will kick in additional funds to retirement plans when they contribute.
  • When people hear the word bias, many if not most will think of either racial prejudice or news organizations that slant their coverage to favor one political position over another. Present bias, by contrast, is an example of cognitive bias—the collection of faulty ways of thinking that is apparently hardwired into the human brain. The collection is large. Wikipedia’s “List of cognitive biases” contains 185 entries, from actor-observer bias (“the tendency for explanations of other individuals’ behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation … and for explanations of one’s own behaviors to do the opposite”) to the Zeigarnik effect (“uncompleted or interrupted tasks are remembered better than completed ones”).
  • If I had to single out a particular bias as the most pervasive and damaging, it would probably be confirmation bias. That’s the effect that leads us to look for evidence confirming what we already think or suspect, to view facts and ideas we encounter as further confirmation, and to discount or ignore any piece of evidence that seems to support an alternate view
  • Confirmation bias shows up most blatantly in our current political divide, where each side seems unable to allow that the other side is right about anything.
  • The whole idea of cognitive biases and faulty heuristics—the shortcuts and rules of thumb by which we make judgments and predictions—was more or less invented in the 1970s by Amos Tversky and Daniel Kahneman
  • Tversky died in 1996. Kahneman won the 2002 Nobel Prize in Economics for the work the two men did together, which he summarized in his 2011 best seller, Thinking, Fast and Slow. Another best seller, last year’s The Undoing Project, by Michael Lewis, tells the story of the sometimes contentious collaboration between Tversky and Kahneman.
  • Another key figure in the field is the University of Chicago economist Richard Thaler. One of the biases he’s most linked with is the endowment effect, which leads us to place an irrationally high value on our possessions.
  • In an experiment conducted by Thaler, Kahneman, and Jack L. Knetsch, half the participants were given a mug and then asked how much they would sell it for. The average answer was $5.78. The rest of the group said they would spend, on average, $2.21 for the same mug. This flew in the face of classic economic theory, which says that at a given time and among a certain population, an item has a market value that does not depend on whether one owns it or not. Thaler won the 2017 Nobel Prize in Economics.
  • “The question that is most often asked about cognitive illusions is whether they can be overcome. The message … is not encouraging.”
  • that’s not so easy in the real world, when we’re dealing with people and situations rather than lines. “Unfortunately, this sensible procedure is least likely to be applied when it is needed most,” Kahneman writes. “We would all like to have a warning bell that rings loudly whenever we are about to make a serious error, but no such bell is available.”
  • At least with the optical illusion, our slow-thinking, analytic mind—what Kahneman calls System 2—will recognize a Müller-Lyer situation and convince itself not to trust the fast-twitch System 1’s perception
  • Kahneman and others draw an analogy based on an understanding of the Müller-Lyer illusion, two parallel lines with arrows at each end. One line’s arrows point in; the other line’s arrows point out. Because of the direction of the arrows, the latter line appears shorter than the former, but in fact the two lines are the same length.
  • Because biases appear to be so hardwired and inalterable, most of the attention paid to countering them hasn’t dealt with the problematic thoughts, judgments, or predictions themselves
  • Is it really impossible, however, to shed or significantly mitigate one’s biases? Some studies have tentatively answered that question in the affirmative.
  • what if the person undergoing the de-biasing strategies was highly motivated and self-selected? In other words, what if it was me?
  • Over an apple pastry and tea with milk, he told me, “Temperament has a lot to do with my position. You won’t find anyone more pessimistic than I am.”
  • I met with Kahneman
  • “I see the picture as unequal lines,” he said. “The goal is not to trust what I think I see. To understand that I shouldn’t believe my lying eyes.” That’s doable with the optical illusion, he said, but extremely difficult with real-world cognitive biases.
  • In this context, his pessimism relates, first, to the impossibility of effecting any changes to System 1—the quick-thinking part of our brain and the one that makes mistaken judgments tantamount to the Müller-Lyer line illusion
  • The most effective check against them, as Kahneman says, is from the outside: Others can perceive our errors more readily than we can.
  • “slow-thinking organizations,” as he puts it, can institute policies that include the monitoring of individual decisions and predictions. They can also require procedures such as checklists and “premortems,”
  • A premortem attempts to counter optimism bias by requiring team members to imagine that a project has gone very, very badly and write a sentence or two describing how that happened. Conducting this exercise, it turns out, helps people think ahead.
  • “My position is that none of these things have any effect on System 1,” Kahneman said. “You can’t improve intuition.”
  • “Perhaps, with very long-term training, lots of talk, and exposure to behavioral economics, what you can do is cue reasoning, so you can engage System 2 to follow rules. Unfortunately, the world doesn’t provide cues. And for most people, in the heat of argument the rules go out the window.”
  • Kahneman describes an even earlier Nisbett article that showed subjects’ disinclination to believe statistical and other general evidence, basing their judgments instead on individual examples and vivid anecdotes. (This bias is known as base-rate neglect.)
  • over the years, Nisbett had come to emphasize in his research and thinking the possibility of training people to overcome or avoid a number of pitfalls, including base-rate neglect, fundamental attribution error, and the sunk-cost fallacy.
  • Nisbett’s second-favorite example is that economists, who have absorbed the lessons of the sunk-cost fallacy, routinely walk out of bad movies and leave bad restaurant meals uneaten.
  • When Nisbett asks the same question of students who have completed the statistics course, about 70 percent give the right answer. He believes this result shows, pace Kahneman, that the law of large numbers can be absorbed into System 2—and maybe into System 1 as well, even when there are minimal cues.
  • about half give the right answer: the law of large numbers, which holds that outlier results are much more frequent when the sample size (at bats, in this case) is small. Over the course of the season, as the number of at bats increases, regression to the mean is inevitable.
  • When Nisbett has to give an example of his approach, he usually brings up the baseball-phenom survey. This involved telephoning University of Michigan students on the pretense of conducting a poll about sports, and asking them why there are always several Major League batters with .450 batting averages early in a season, yet no player has ever finished a season with an average that high.
  • we’ve tested Michigan students over four years, and they show a huge increase in ability to solve problems. Graduate students in psychology also show a huge gain.”
  • “I know from my own research on teaching people how to reason statistically that just a few examples in two or three domains are sufficient to improve people’s reasoning for an indefinitely large number of events.”
  • Nisbett suggested another factor: “You and Amos specialized in hard problems for which you were drawn to the wrong answer. I began to study easy problems, which you guys would never get wrong but untutored people routinely do … Then you can look at the effects of instruction on such easy problems, which turn out to be huge.”
  • Nisbett suggested that I take “Mindware: Critical Thinking for the Information Age,” an online Coursera course in which he goes over what he considers the most effective de-biasing skills and concepts. Then, to see how much I had learned, I would take a survey he gives to Michigan undergraduates. So I did.
  • The course consists of eight lessons by Nisbett—who comes across on-screen as the authoritative but approachable psych professor we all would like to have had—interspersed with some graphics and quizzes. I recommend it. He explains the availability heuristic this way: “People are surprised that suicides outnumber homicides, and drownings outnumber deaths by fire. People always think crime is increasing” even if it’s not.
  • When I finished the course, Nisbett sent me the survey he and colleagues administer to Michigan undergrads
  • It contains a few dozen problems meant to measure the subjects’ resistance to cognitive biases
  • I got it right. Indeed, when I emailed my completed test, Nisbett replied, “My guess is that very few if any UM seniors did as well as you. I’m sure at least some psych students, at least after 2 years in school, did as well. But note that you came fairly close to a perfect score.”
  • Nevertheless, I did not feel that reading Mindware and taking the Coursera course had necessarily rid me of my biases
  • For his part, Nisbett insisted that the results were meaningful. “If you’re doing better in a testing context,” he told me, “you’ll jolly well be doing better in the real world.”
  • The New York–based NeuroLeadership Institute offers organizations and individuals a variety of training sessions, webinars, and conferences that promise, among other things, to use brain science to teach participants to counter bias. This year’s two-day summit will be held in New York next month; for $2,845, you could learn, for example, “why are our brains so bad at thinking about the future, and how do we do it better?”
  • Philip E. Tetlock, a professor at the University of Pennsylvania’s Wharton School, and his wife and research partner, Barbara Mellers, have for years been studying what they call “superforecasters”: people who manage to sidestep cognitive biases and predict future events with far more accuracy than the pundits
  • One of the most important ingredients is what Tetlock calls “the outside view.” The inside view is a product of fundamental attribution error, base-rate neglect, and other biases that are constantly cajoling us into resting our judgments and predictions on good or vivid stories instead of on data and statistics
  • In 2006, seeking to prevent another mistake of that magnitude, the U.S. government created the Intelligence Advanced Research Projects Activity (IARPA), an agency designed to use cutting-edge research and technology to improve intelligence-gathering and analysis. In 2011, IARPA initiated a program, Sirius, to fund the development of “serious” video games that could combat or mitigate what were deemed to be the six most damaging biases: confirmation bias, fundamental attribution error, the bias blind spot (the feeling that one is less biased than the average person), the anchoring effect, the representativeness heuristic, and projection bias (the assumption that everybody else’s thinking is the same as one’s own).
  • most promising are a handful of video games. Their genesis was in the Iraq War
  • Together with collaborators who included staff from Creative Technologies, a company specializing in games and other simulations, and Leidos, a defense, intelligence, and health research company that does a lot of government work, Morewedge devised Missing. Some subjects played the game, which takes about three hours to complete, while others watched a video about cognitive bias. All were tested on bias-mitigation skills before the training, immediately afterward, and then finally after eight to 12 weeks had passed.
  • “The literature on training suggests books and classes are fine entertainment but largely ineffectual. But the game has very large effects. It surprised everyone.”
  • He said he saw the results as supporting the research and insights of Richard Nisbett. “Nisbett’s work was largely written off by the field, the assumption being that training can’t reduce bias.”
  • even the positive results reminded me of something Daniel Kahneman had told me. “Pencil-and-paper doesn’t convince me,” he said. “A test can be given even a couple of years later. But the test cues the test-taker. It reminds him what it’s all about.”
  • Morewedge told me that some tentative real-world scenarios along the lines of Missing have shown “promising results,” but that it’s too soon to talk about them.
  • In the future, I will monitor my thoughts and reactions as best I can
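The base-rate neglect described above has a simple quantitative core: a vivid piece of individual evidence feels decisive, while the prior probability quietly dominates. A minimal sketch of the underlying arithmetic, with made-up numbers in the spirit of the classic taxi-cab problem (the 1% base rate and 90% reliability are hypothetical):

```python
# Bayes' rule with illustrative, invented numbers: a 90%-reliable
# witness report about a rare (1% base rate) event.
def posterior(prior, sensitivity, false_positive_rate):
    """P(hypothesis | positive evidence)."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

p = posterior(prior=0.01, sensitivity=0.90, false_positive_rate=0.10)
print(round(p, 3))  # 0.083, far below the intuitive 90%
```

Intuition latches onto the witness's 90% reliability; the base rate drags the true probability down to about 8%.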
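Nisbett's batting-average example above is easy to check by simulation. A minimal sketch, assuming every batter hits with the same true probability (the .270 figure and the at-bat counts are made up for illustration):

```python
import random

random.seed(0)
TRUE_AVG = 0.270  # assumed identical true hitting probability for everyone

def batting_average(at_bats):
    hits = sum(random.random() < TRUE_AVG for _ in range(at_bats))
    return hits / at_bats

early = [batting_average(30) for _ in range(100)]    # early season: few at bats
season = [batting_average(500) for _ in range(100)]  # full season: many at bats

# With few at bats, outlier averages like .450 are unsurprising; with many,
# every average regresses toward the true mean.
print(max(early), max(season))
```

The early-season maximum lands far above the true .270, while the full-season maximum hugs it: exactly the pattern the phone survey asked students to explain.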
katedriscoll

Frontiers | A Neural Network Framework for Cognitive Bias | Psychology - 0 views

  • Human decision-making shows systematic simplifications and deviations from the tenets of rationality (‘heuristics’) that may lead to suboptimal decisional outcomes (‘cognitive biases’). There are currently three prevailing theoretical perspectives on the origin of heuristics and cognitive biases: a cognitive-psychological, an ecological and an evolutionary perspective. However, these perspectives are mainly descriptive and none of them provides an overall explanatory framework for the underlying mechanisms of cognitive biases. To enhance our understanding of cognitive heuristics and biases we propose a neural network framework for cognitive biases, which explains why our brain systematically tends to default to heuristic (‘Type 1’) decision making. We argue that many cognitive biases arise from intrinsic brain mechanisms that are fundamental for the working of biological neural networks. To substantiate our viewpoint, we discern and explain four basic neural network principles: (1) Association, (2) Compatibility, (3) Retainment, and (4) Focus. These principles are inherent to (all) neural networks which were originally optimized to perform concrete biological, perceptual, and motor functions. They form the basis for our inclinations to associate and combine (unrelated) information, to prioritize information that is compatible with our present state (such as knowledge, opinions, and expectations), to retain given information that sometimes could better be ignored, and to focus on dominant information while ignoring relevant information that is not directly activated. The supposed mechanisms are complementary and not mutually exclusive. For different cognitive biases they may all contribute in varying degrees to distortion of information. The present viewpoint not only complements the earlier three viewpoints, but also provides a unifying and binding framework for many cognitive bias phenomena.
  • The cognitive-psychological (or heuristics and biases) perspective (Evans, 2008; Kahneman and Klein, 2009) attributes cognitive biases to limitations in the available data and in the human information processing capacity (Simon, 1955; Broadbent, 1958; Kahneman, 1973, 2003; Norman and Bobrow, 1975)
caelengrubb

Believing in Overcoming Cognitive Biases | Journal of Ethics | American Medical Association - 0 views

  • Cognitive biases contribute significantly to diagnostic and treatment errors
  • A 2016 review of their roles in decision making lists 4 domains of concern for physicians: gathering evidence, interpreting evidence, taking action, and evaluating decisions.
  • Confirmation bias is the selective gathering and interpretation of evidence consistent with current beliefs and the neglect of evidence that contradicts them.
  • It can occur when a physician refuses to consider alternative diagnoses once an initial diagnosis has been established, despite contradicting data, such as lab results. This bias leads physicians to see what they want to see
  • Anchoring bias is closely related to confirmation bias and comes into play when interpreting evidence. It refers to physicians’ practices of prioritizing information and data that support their initial impressions, even when first impressions are wrong
  • When physicians move from deliberation to action, they are sometimes swayed by emotional reactions rather than rational deliberation about risks and benefits. This is called the affect heuristic, and, while heuristics can often serve as efficient approaches to problem solving, they can sometimes lead to bias
  • Further down the treatment pathway, outcomes bias can come into play. This bias refers to the practice of believing that good or bad results are always attributable to prior decisions, even when there is no valid reason to do so
  • The dual-process theory, a cognitive model of reasoning, can be particularly relevant in matters of clinical decision making
  • This theory is based on the argument that we use 2 different cognitive systems, intuitive and analytical, when reasoning. The former is quick and uses information that is readily available; the latter is slower and more deliberate.
  • Consideration should be given to the difficulty physicians face in employing analytical thinking exclusively. Beyond constraints of time, information, and resources, many physicians are also likely to be sleep deprived, work in an environment full of distractions, and be required to respond quickly while managing heavy cognitive loads
  • Simply increasing physicians’ familiarity with the many types of cognitive biases—and how to avoid them—may be one of the best strategies to decrease bias-related errors
  • The same review suggests that cognitive forcing strategies may also have some success in improving diagnostic outcomes
  • Afterwards, the resident physicians were debriefed on both case-specific details and on cognitive forcing strategies, interviewed, and asked to complete a written survey. The results suggested that resident physicians further along in their training (ie, postgraduate year three) gained more awareness of cognitive strategies than resident physicians in earlier years of training, suggesting that this tool could be more useful after a certain level of training has been completed
  • A 2013 study examined the effect of a 3-part, 1-year curriculum on recognition and knowledge of cognitive biases and debiasing strategies in second-year residents
  • Cognitive biases in clinical practice have a significant impact on care, often in negative ways. They sometimes manifest as physicians seeing what they want to see rather than what is actually there. Or they come into play when physicians make snap decisions and then prioritize evidence that supports their conclusions, as opposed to drawing conclusions from evidence
  • Fortunately, cognitive psychology provides insight into how to prevent biases. Guided reflection and cognitive forcing strategies deflect bias through close examination of our own thinking processes.
  • During medical education and consistently thereafter, we must provide physicians with a full appreciation of the cost of biases and the potential benefits of combatting them.
ilanaprincilus06

You're more biased than you think - even when you know you're biased | News | The Guardian - 0 views

  • there’s plenty of evidence to suggest that we’re all at least somewhat subject to bias
  • Tell Republicans that some imaginary policy is a Republican one, as the psychologist Geoffrey Cohen did in 2003, and they’re much more likely to support it, even if it runs counter to Republican values. But ask them why they support it, and they’ll deny that party affiliation played a role. (Cohen found something similar for Democrats.)
  • those who saw the names were biased in favour of famous artists. But even though they acknowledged the risk of bias, when asked to assess their own objectivity, they didn’t view their judgments as any more biased as a result.
  • Even when the risk of bias was explicitly pointed out to them, people remained confident that they weren’t susceptible to it
  • “Even when people acknowledge that what they are about to do is biased,” the researchers write, “they still are inclined to see their resulting decisions as objective.”
  • why it’s often better for companies to hire people, or colleges to admit students, using objective checklists, rather than interviews that rely on gut feelings.
  • It turns out the bias also applies to bias. In other words, we’re convinced that we’re better than most at not falling victim to bias.
  • “used a strategy that they thought was biased,” the researchers note, “and thus they probably expected to feel some bias when using it. The absence of that feeling may have made them more confident in their objectivity.”
  • we have a cognitive bias to the effect that we’re uniquely immune to cognitive biases.
  • Bias spares nobody.
Adam Clark

The 12 cognitive biases that prevent you from being rational - 0 views

  • "The human brain is capable of 10^16 processes per second, which makes it far more powerful than any computer currently in existence. But that doesn't mean our brains don't have major limitations. The lowly calculator can do math thousands of times better than we can, and our memories are often less than useless - plus, we're subject to cognitive biases, those annoying glitches in our thinking that cause us to make questionable decisions and reach erroneous conclusions. Here are a dozen of the most common and pernicious cognitive biases that you need to know about."
qkirkpatrick

Exploring Our Unconscious Biases | Center for American Progress - 0 views

  • The university-led collaborative administers web-based tests that purport to reveal whether a person is unknowingly biased about a wide range of issues.
  • Much has been written on the effects of implicit bias and how the often-unconscious attitudes and beliefs that nearly all of us hold foster our comprehension of race, gender, class, ethnicity, and a host of other social constructs
  • With all this angst rattling in my head, I took the test. It began innocently enough, with a series of questions that allowed me to state whether I had any known biases toward skin tones. I answered as honestly as possible but feared my conscious choices tilted toward skin tones similar to my own chocolate-colored skin.
  • The test then asked me to click on a set of faces paired with words such as “good” and “bad” and concepts such as “joy,” “peace,” “agony,” “terrible,” and “hurt.” Clearly, I thought, the idea would be a measurement of association
  • The entire test took about 10 minutes. I felt drained afterward, fearful of what I might learn about myself. I expected to show a strong to moderate bias for dark skin. According to the test, however—like some 17 percent of those who had taken it before me—I had “little to no automatic preference between skin tones.”
  • Biases affect everything you do.
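The "measurement of association" the author infers from the face-and-word pairing task can be sketched as a reaction-time comparison. This is a deliberate simplification (real implicit-association tests use a more elaborate scoring algorithm), and every number below is invented:

```python
# Illustrative scoring of an association test: responses that are slower
# under one category pairing than the other suggest a stronger automatic
# association with the faster pairing. All reaction times are invented.
def association_score(congruent_ms, incongruent_ms):
    """Mean reaction-time gap (ms); near zero = little to no preference."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(incongruent_ms) - mean(congruent_ms)

congruent = [620, 580, 640, 600]    # hypothetical per-trial times (ms)
incongruent = [630, 610, 650, 590]
print(association_score(congruent, incongruent))  # 10.0
```

A gap this small is the kind of result the author reports: "little to no automatic preference between skin tones."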
katedriscoll

Cognitive Biases: What They Are and How They Affect People - Effectiviology - 0 views

  • A cognitive bias is a systematic pattern of deviation from rationality, which occurs due to the way our cognitive system works. Accordingly, cognitive biases cause us to be irrational in the way we search for, evaluate, interpret, judge, use, and remember information, as well as in the way we make decisions.
  • Cognitive biases affect every area of our life, from how we form our memories, to how we shape our beliefs, and to how we form relationships with other people. In doing so, they can lead to both relatively minor issues, such as forgetting a small detail from a past event, as well as to major ones, such as choosing to avoid an important medical treatment that could save our life. Because cognitive biases can have such a powerful and pervasive influence on ourselves and on others, it’s important to understand them. As such, in the following article you will learn more about cognitive biases, understand why we experience them, see what types of them exist, and find out what you can do in order to mitigate them successfully.
kaylynfreeman

Heuristics and Biases, Related But Not the Same | Psychology Today - 0 views

  • By treating them as the same, we miss nuances that are important for understanding human decision-making.
  • Biases—regardless of whether they are hardwired into us due to evolution, learned through socialization or direct experience, or a function of genetically influenced traits—represent predispositions to favor a given conclusion over other conclusions. Therefore, biases might be considered the leanings, priorities, and inclinations that influence our decisions[2].
  • Heuristics are mental shortcuts that allow us to make decisions more quickly, frugally, and/or accurately than if we considered additional information. They are derived from experience and formal learning and are open to continuous updates based on new experiences and information. Therefore, heuristics represent the strategies we employ to filter and attend to information[3].
  • This preference, which is perhaps a strong one, may have resulted in a bias to maintain the status quo. You rely on heuristics to help identify your deodorant (usually by sight) and you add it to your virtual cart and place your order. In this instance, your bias influenced your preference toward your current deodorant, and your heuristic helped you to identify it. Potential stinkiness crisis averted.
  • In that case, you will likely be motivated to make a purchasing decision consistent with your strong bias (i.e., look to purchase it from a different vendor, maintaining the status quo with your deodorant). Thus, in this scenario, you decide to look elsewhere.
  • At this step, the availability heuristic is likely to guide your decision, causing you to navigate to an alternative site that quickly comes to mind[6].
  • Your heuristics will help you select an alternative product that meets some criteria.
  • satisficing heuristic (opting for the first product that looks good enough), a similarity heuristic (opting for the product that looks closest to your current deodorant) or some other heuristic to help you select the product you decide to order.
  • The question, though, is often whether your biases and heuristics are aiding or inhibiting the ecological rationality of your decision, and that will vary from situation to situation.
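The satisficing heuristic mentioned above (accept the first option that is "good enough" rather than ranking everything) can be sketched in a few lines; the product names and ratings are invented:

```python
# Sketch of a satisficing heuristic: stop at the first option that
# clears a threshold instead of searching for the best one.
def satisfice(options, good_enough, score):
    for option in options:
        if score(option) >= good_enough:
            return option  # stop searching immediately
    return None

# Invented example data.
products = [
    {"name": "A", "rating": 3.1},
    {"name": "B", "rating": 4.4},
    {"name": "C", "rating": 4.9},
]
pick = satisfice(products, good_enough=4.0, score=lambda p: p["rating"])
print(pick["name"])  # "B": first acceptable option, not the top-rated "C"
```

The design choice is the point: a satisficer trades optimality for speed and frugality, which is exactly the "quickly, frugally, and/or accurately" trade-off the article describes.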
Javier E

Opinion | Do You Live in a 'Tight' State or a 'Loose' One? Turns Out It Matters Quite a Bit. - The New York Times - 0 views

  • Political biases are omnipresent, but what we don’t fully understand yet is how they come about in the first place.
  • In 2014, Michele J. Gelfand, a professor of psychology at the Stanford Graduate School of Business, formerly at the University of Maryland, and Jesse R. Harrington, then a Ph.D. candidate, conducted a study designed to rank the 50 states on a scale of “tightness” and “looseness.”
  • titled “Tightness-Looseness Across the 50 United States,” the study calculated a catalog of measures for each state, including the incidence of natural disasters, disease prevalence, residents’ levels of openness and conscientiousness, drug and alcohol use, homelessness and incarceration rates.
  • Gelfand and Harrington predicted that “‘tight’ states would exhibit a higher incidence of natural disasters, greater environmental vulnerability, fewer natural resources, greater incidence of disease and higher mortality rates, higher population density, and greater degrees of external threat.”
  • The South dominated the tight states: Mississippi, Alabama, Arkansas, Oklahoma, Tennessee, Texas, Louisiana, Kentucky, South Carolina and North Carolina.
  • states in New England and on the West Coast were the loosest: California, Oregon, Washington, Maine, Massachusetts, Connecticut, New Hampshire and Vermont.
  • Cultural differences, Gelfand continued, “have a certain logic — a rationale that makes good sense,” noting that “cultures that have threats need rules to coordinate to survive (think about how incredibly coordinated Japan is in response to natural disasters).
  • “Rule Makers, Rule Breakers: How Tight and Loose Cultures Wire the World” in 2018, in which she described the results of a 2016 pre-election survey she and two colleagues had commissioned
  • The results were telling: People who felt the country was facing greater threats desired greater tightness. This desire, in turn, correctly predicted their support for Trump. In fact, desired tightness predicted support for Trump far better than other measures. For example, a desire for tightness predicted a vote for Trump with 44 times more accuracy than other popular measures of authoritarianism.
  • The 2016 election, Gelfand continued, “turned largely on primal cultural reflexes — ones that had been conditioned not only by cultural forces, but by a candidate who was able to exploit them.”
  • Gelfand said:Some groups have much stronger norms than others; they’re tight. Others have much weaker norms; they’re loose. Of course, all cultures have areas in which they are tight and loose — but cultures vary in the degree to which they emphasize norms and compliance with them.
  • In both 2016 and 2020, Donald Trump carried all 10 of the top “tight” states; Hillary Clinton and Joe Biden carried all 10 of the top “loose” states.
  • The tight-loose concept, Gelfand argued,is an important framework to understand the rise of President Donald Trump and other leaders in Poland, Hungary, Italy, and France,
  • cultures that don’t have a lot of threat can afford to be more permissive and loose.”
  • The gist is this: when people perceive threat — whether real or imagined — they want strong rules and autocratic leaders to help them survive.
  • My research has found that within minutes of exposing study participants to false information about terrorist incidents, overpopulation, pathogen outbreaks and natural disasters, their minds tightened. They wanted stronger rules and punishments.
  • Gelfand writes that tightness encourages conscientiousness, social order and self-control on the plus side, along with close-mindedness, conventional thinking and cultural inertia on the minus side.
  • Looseness, Gelfand posits, fosters tolerance, creativity and adaptability, along with such liabilities as social disorder, a lack of coordination and impulsive behavior.
  • If liberalism and conservatism have historically played a complementary role, each checking the other to constrain extremism, why are the left and right so destructively hostile to each other now, and why is the contemporary political system so polarized?
  • Along the same lines, if liberals and conservatives hold differing moral visions, not just about what makes a good government but about what makes a good life, what turned the relationship between left and right from competitive to mutually destructive?
  • As a set, Niemi wrote, conservative binding values encompass the values oriented around group preservation, are associated with judgments, decisions, and interpersonal orientations that sacrifice the welfare of individuals
  • She cited research that found 47 percent of the most extreme conservatives strongly endorsed the view that “The world is becoming a more and more dangerous place,” compared to 19 percent of the most extreme liberals.
  • Conservatives and liberals, Niemi continued, see different things as threats — the nature of the threat and how it happens to stir one’s moral values (and their associated emotions) is a better clue to why liberals and conservatives react differently.
  • Unlike liberals, conservatives strongly endorse the binding moral values aimed at protecting groups and relationships. They judge transgressions involving personal and national betrayal, disobedience to authority, and disgusting or impure acts such as sexually or spiritually unchaste behavior as morally relevant and wrong.
  • Underlying these differences are competing sets of liberal and conservative moral priorities, with liberals placing more stress than conservatives on caring, kindness, fairness and rights — known among scholars as “individualizing values
  • conservatives focus more on loyalty, hierarchy, deference to authority, sanctity and a higher standard of disgust, known as “binding values.”
  • Niemi contended that sensitivity to various types of threat is a key factor in driving differences between the far left and far right.
  • For example, binding values are associated with Machiavellianism (e.g., status-seeking and lying, getting ahead by any means, 2013); victim derogation, blame, and beliefs that victims were causal contributors for a variety of harmful acts (2016, 2020); and a tendency to excuse transgressions of ingroup members with attributions to the situation rather than the person (2023).
  • Niemi cited a paper she and Liane Young, a professor of psychology at Boston College, published in 2016, “When and Why We See Victims as Responsible: The Impact of Ideology on Attitudes Toward Victims,” which tested responses of men and women to descriptions of crimes including sexual assaults and robberies.
  • We measured moral values associated with unconditionally prohibiting harm (“individualizing values”) versus moral values associated with prohibiting behavior that destabilizes groups and relationships (“binding values”: loyalty, obedience to authority, and purity)
  • Increased endorsement of binding values predicted increased ratings of victims as contaminated, increased blame and responsibility attributed to victims, increased perceptions of victims’ (versus perpetrators’) behaviors as contributing to the outcome, and decreased focus on perpetrators.
  • A central explanation typically offered for the current situation in American politics is that partisanship and political ideology have developed into strong social identities where the mass public is increasingly sorted — along social, partisan, and ideological lines.
  • What happened to people ecologically affected social-political developments, including the content of the rules people made and how they enforced them
  • Just as ecological factors differing from region to region over the globe produced different cultural values, ecological factors differed throughout the U.S. historically and today, producing our regional and state-level dimensions of culture and political patterns.
  • Joshua Hartshorne, who is also a professor of psychology at Boston College, took issue with the binding versus individualizing values theory as an explanation for the tendency of conservatives to blame victims:
  • I would guess that the reason conservatives are more likely to blame the victim has less to do with binding values and more to do with the just-world bias (the belief that good things happen to good people and bad things happen to bad people, therefore if a bad thing happened to you, you must be a bad person).
  • Belief in a just world, Hartshorne argued, is crucial for those seeking to protect the status quo:It seems psychologically necessary for anyone who wants to advocate for keeping things the way they are that the haves should keep on having, and the have-nots have got as much as they deserve. I don’t see how you could advocate for such a position while simultaneously viewing yourself as moral (and almost everyone believes that they themselves are moral) without also believing in the just world
  • Conversely, if you generally believe the world is not just, and you view yourself as a moral person, then you are likely to feel like you have an obligation to change things.
  • I asked Lene Aaroe, a political scientist at Aarhus University in Denmark, why the contemporary American political system is as polarized as it is now, given that the liberal-conservative schism is longstanding. What has happened to produce such intense hostility between left and right?
  • There is variation across countries in hostility between left and right. The United States is a particularly polarized case which calls for a contextual explanation.
  • I then asked Aaroe why surveys find that conservatives are happier than liberals. “Some research,” she replied, “suggests that experiences of inequality constitute a larger psychological burden to liberals because it is more difficult for liberals to rationalize inequality as a phenomenon with positive consequences.”
  • Numerous factors potentially influence the evolution of liberalism and conservatism and other social-cultural differences, including geography, topography, catastrophic events, and subsistence styles
  • Steven Pinker, a professor of psychology at Harvard, elaborated in an email on the link between conservatism and happiness:
  • It’s a combination of factors. Conservatives are likelier to be married, patriotic, and religious, all of which make people happier.
  • They may be less aggrieved by the status quo, whereas liberals take on society’s problems as part of their own personal burdens. Liberals also place politics closer to their identity and striving for meaning and purpose, which is a recipe for frustration.
  • Some features of the woke faction of liberalism may make people unhappier: as Jon Haidt and Greg Lukianoff have suggested, wokeism is Cognitive Behavioral Therapy in reverse, urging upon people maladaptive mental habits such as catastrophizing, feeling like a victim of forces beyond one’s control, prioritizing emotions of hurt and anger over rational analysis, and dividing the world into allies and villains.
  • Why, I asked Pinker, would liberals and conservatives react differently — often very differently — to messages that highlight threat?
  • It may be liberals (or at least the social-justice wing) who are more sensitive to threats, such as white supremacy, climate change, and patriarchy; who may be likelier to moralize, seeing racism and transphobia in messages that others perceive as neutral; and being likelier to surrender to emotions like “harm” and “hurt.”
  • While liberals and conservatives, guided by different sets of moral values, may make agreement on specific policies difficult, that does not necessarily preclude consensus.
  • there are ways to persuade conservatives to support liberal initiatives and to persuade liberals to back conservative proposals:
  • While liberals tend to be more concerned with protecting vulnerable groups from harm and more concerned with equality and social justice than conservatives, conservatives tend to be more concerned with moral issues like group loyalty, respect for authority, purity and religious sanctity than liberals are. Because of these different moral commitments, we find that liberals and conservatives can be persuaded by quite different moral arguments
  • For example, we find that conservatives are more persuaded by a same-sex marriage appeal articulated in terms of group loyalty and patriotism, rather than equality and social justice.
  • Liberals who read the fairness argument were substantially more supportive of military spending than those who read the loyalty and authority argument.
  • “We find support for these claims across six studies involving diverse political issues, including same-sex marriage, universal health care, military spending, and adopting English as the nation’s official language.”
  • In one test of persuadability on the right, Feinberg and Willer assigned some conservatives to read an editorial supporting universal health care as a matter of “fairness (health coverage is a basic human right)” or to read an editorial supporting health care as a matter of “purity (uninsured people means more unclean, infected, and diseased Americans).”
  • Conservatives who read the purity argument were much more supportive of health care than those who read the fairness case.
  • “Political arguments reframed to appeal to the moral values of those holding the opposing political position are typically more effective.”
  • In “Conservative and Liberal Attitudes Drive Polarized Neural Responses to Political Content,” Willer, Yuan Chang Leong of the University of Chicago, Janice Chen of Johns Hopkins and Jamil Zaki of Stanford address the question of how partisan biases are encoded in the brain:
  • How do such biases arise in the brain? We measured the neural activity of participants watching videos related to immigration policy. Despite watching the same videos, conservative and liberal participants exhibited divergent neural responses. This “neural polarization” between groups occurred in a brain area associated with the interpretation of narrative content and intensified in response to language associated with risk, emotion, and morality. Furthermore, polarized neural responses predicted attitude change in response to the videos.
  • The four authors argue that their “findings suggest that biased processing in the brain drives divergent interpretations of political information and subsequent attitude polarization.” These results, they continue, “shed light on the psychological and neural underpinnings of how identical information is interpreted differently by conservatives and liberals.”
  • The authors used neural imaging to follow changes in the dorsomedial prefrontal cortex (known as DMPFC) as conservatives and liberals watched videos presenting strong positions, left and right, on immigration.
  • “For each video,” they write, “participants with DMPFC activity time courses more similar to that of conservative-leaning participants became more likely to support the conservative position.
  • Conversely, those with DMPFC activity time courses more similar to that of liberal-leaning participants became more likely to support the liberal position. These results suggest that divergent interpretations of the same information are associated with increased attitude polarization.”
  • Together, our findings describe a neural basis for partisan biases in processing political information and their effects on attitude change.
  • Describing their neuroimaging method, the authors point out that they searched for evidence of “neural polarization,” activity in the brain that diverges between people who hold liberal versus conservative political attitudes. Neural polarization was observed in the dorsomedial prefrontal cortex (DMPFC), a brain region associated with the interpretation of narrative content.
  • The question is whether the political polarization that we are witnessing now proves to be a core, encoded aspect of the human mind, difficult to overcome — as Leong, Chen, Zaki and Willer suggest — or whether, with our increased knowledge of the neural basis of partisan and other biases, we will find more effective ways to manage these most dangerous of human predispositions.
oliviaodon

How One Psychologist Is Tackling Human Biases in Science - 0 views

  • It’s likely that some researchers are consciously cherry-picking data to get their work published. And some of the problems surely lie with journal publication policies. But the problems of false findings often begin with researchers unwittingly fooling themselves: they fall prey to cognitive biases, common modes of thinking that lure us toward wrong but convenient or attractive conclusions.
  • Peer review seems to be a more fallible instrument—especially in areas such as medicine and psychology—than is often appreciated, as the emerging “crisis of replicability” attests.
  • Psychologists have shown that “most of our reasoning is in fact rationalization,” he says. In other words, we have already made the decision about what to do or to think, and our “explanation” of our reasoning is really a justification for doing what we wanted to do—or to believe—anyway. Science is of course meant to be more objective and skeptical than everyday thought—but how much is it, really?
  • ...10 more annotations...
  • common response to this situation is to argue that, even if individual scientists might fool themselves, others have no hesitation in critiquing their ideas or their results, and so it all comes out in the wash: Science as a communal activity is self-correcting. Sometimes this is true—but it doesn’t necessarily happen as quickly or smoothly as we might like to believe.
  • The idea, says Nosek, is that researchers “write down in advance what their study is for and what they think will happen.” Then when they do their experiments, they agree to be bound to analyzing the results strictly within the confines of that original plan
  • He is convinced that the process and progress of science would be smoothed by bringing these biases to light—which means making research more transparent in its methods, assumptions, and interpretations
  • Psychologist Brian Nosek of the University of Virginia says that the most common and problematic bias in science is “motivated reasoning”: We interpret observations to fit a particular idea.
  • Surprisingly, Nosek thinks that one of the most effective solutions to cognitive bias in science could come from the discipline that has weathered some of the heaviest criticism recently for its error-prone and self-deluding ways: pharmacology.
  • Sometimes it seems surprising that science functions at all.
  • Whereas the falsification model of the scientific method championed by philosopher Karl Popper posits that the scientist looks for ways to test and falsify her theories—to ask “How am I wrong?”—Nosek says that scientists usually ask instead “How am I right?” (or equally, to ask “How are you wrong?”).
  • Statistics may seem to offer respite from bias through strength in numbers, but they are just as fraught.
  • Given that science has uncovered a dizzying variety of cognitive biases, the relative neglect of their consequences within science itself is peculiar. “I was aware of biases in humans at large,” says Hartgerink, “but when I first ‘learned’ that they also apply to scientists, I was somewhat amazed, even though it is so obvious.”
  • Nosek thinks that peer review might sometimes actively hinder clear and swift testing of scientific claims.
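Nosek’s case for preregistration — writing down in advance what a study is for and what you expect — can be made concrete with a small simulation. It is a hypothetical sketch, not code from the article: if an analyst runs many unplanned subgroup tests on data with no real effect, the chance of finding at least one “significant” result far exceeds the nominal 5 percent that a single preregistered test would carry.

```python
import math
import random
import statistics

random.seed(42)

def two_sample_p(xs, ys):
    """Two-sided p-value from a normal-approximation (z) test."""
    nx, ny = len(xs), len(ys)
    se = math.sqrt(statistics.variance(xs) / nx + statistics.variance(ys) / ny)
    z = (statistics.fmean(xs) - statistics.fmean(ys)) / se
    return math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))

def best_p_of_unplanned_tests(n_subgroups=10, n_per_cell=30):
    """One null experiment: no real effect anywhere, but the analyst
    checks many post-hoc subgroups and reports the smallest p-value."""
    p_values = []
    for _ in range(n_subgroups):
        treated = [random.gauss(0, 1) for _ in range(n_per_cell)]
        control = [random.gauss(0, 1) for _ in range(n_per_cell)]
        p_values.append(two_sample_p(treated, control))
    return min(p_values)

n_experiments = 2000
hits = sum(best_p_of_unplanned_tests() < 0.05 for _ in range(n_experiments))
print(f"'Significant' findings on pure noise: {hits / n_experiments:.0%}")
# Expected near 1 - 0.95**10, roughly 40%, versus the nominal 5% that a
# single preregistered test would give.
```

Binding the analysis to the original plan removes exactly this freedom to keep testing until something “works.”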
charlottedonoho

Gender and racial bias can be 'unlearnt' during sleep, new study suggests | Science | The Guardian - 0 views

  • Now scientists have found a more noble purpose for the technique in a study that suggests deep-rooted biases about race and gender could be “unlearnt” during a short nap. The findings appear to confirm the idea that sleeping provides a unique window for accessing and altering fundamental beliefs – even prejudices that we don’t know we have.
  • Simply playing auditory cues while people slept partially undid racial and gender bias, the study found, and the effects were still evident at least a week later.
  • “These biases are well-learned,” said Hu. “They can operate efficiently even when we have the good intention to avoid such biases. Moreover, we are often not aware of their influences on our behaviour.”
  • ...4 more annotations...
  • The study, published in Science, began with two Pavlovian-style conditioning exercises designed to counter race and gender biases.
  • However, they caution that the use of the technique in future would need strict ethical guidelines. “Sleep is a state in which the individual is without wilful consciousness and therefore vulnerable to suggestion,” they add.
  • Scientists believe the technique works because we consolidate memories by replaying them during sleep and transferring the information from the brain’s temporary storage to long-term memory. Hearing the distinctive sound would trigger the memory to be replayed repeatedly, the scientists said, enhancing the learning process.
  • “We call this Targeted Memory Reactivation, because the sounds played during sleep could produce relatively better memory for information cued during sleep compared to information not cued during sleep.
katedriscoll

What are Cognitive Biases? | Interaction Design Foundation (IxDF) - 0 views

  • Cognitive bias is an umbrella term that refers to the systematic ways in which the context and framing of information influence individuals’ judgment and decision-making. There are many kinds of cognitive biases that influence individuals differently, but their common characteristic is that—in step with human individuality—they lead to judgment and decision-making that deviates from rational objectivity.
  • In some cases, cognitive biases make our thinking and decision-making faster and more efficient. The reason is that we do not stop to consider all available information, as our thoughts proceed down some channels instead of others. In other cases, however, cognitive biases can lead to errors for exactly the same reason. An example is confirmation bias, where we tend to favor information that reinforces or confirms our pre-existing beliefs. For instance, if we believe that planes are dangerous, a handful of stories about plane crashes tend to be more memorable than millions of stories about safe, successful flights. Thus, the prospect of air travel equates to an avoidable risk of doom for a person inclined to think in this way, regardless of how much time has passed without news of an air catastrophe.
caelengrubb

Looking inward in an era of 'fake news': Addressing cognitive bias | YLAI Network - 0 views

  • In an era when everyone seems eager to point out instances of “fake news,” it is easy to forget that knowing how we make sense of the news is as important as knowing how to spot incorrect or biased content
  • While the ability to analyze the credibility of a source and the veracity of its content remains an essential and often-discussed aspect of news literacy, it is equally important to understand how we as news consumers engage with and react to the information we find online, in our feeds, and on our apps
  • People process information they receive from the news in the same way they process all information around them — in the shortest, quickest way possible
  • ...11 more annotations...
  • When we consider how we engage with the news, some shortcuts we may want to pay close attention to, and reflect carefully on, are cognitive biases.
  • These shortcuts, also called heuristics, streamline our problem-solving process and help us make relatively quick decisions.
  • In fact, without these heuristics, it would be impossible for us to process all the information we receive daily. However, the use of these shortcuts can lead to “blind spots,” or unintentional ways we respond to information that can have negative consequences for how we engage with, digest, and share the information we encounter.
  • Confirmation bias is the tendency to seek out and value information that confirms our pre-existing beliefs while discarding information that proves our ideas wrong.
  • Cognitive biases are best described as glitches in how we process information
  • Echo chamber effect refers to a situation in which we are primarily exposed to information, people, events, and ideas that already align with our point of view.
  • Anchoring bias, also known as “anchoring,” refers to people’s tendency to consider the first piece of information they receive about a topic as the most reliable
  • The framing effect is what happens when we make decisions based on how information is presented or discussed, rather than its actual substance.
  • Fluency heuristic occurs when a piece of information is deemed more valuable because it is easier to process or recall
  • Everyone operates under one or more cognitive biases. So, when searching for and reading the news (or other information), it is important to be aware of how these biases might shape how we make sense of this information.
  • In conclusion, we may not be able to control the content of the news — whether it is fake, reliable, or somewhere in between — but we can learn to be aware of how we respond to it and adjust our evaluations of the news accordingly.
jlessner

Our Biased Brains - NYTimes.com - 1 views

  • To better understand the roots of racial division in America, think about this: The human brain seems to be wired so that it categorizes people by race in the first one-fifth of a second after seeing a face. Brain scans show that even when people are told to sort people by gender, the brain still groups people by race.
  • Racial bias also begins astonishingly early: Even infants often show a preference for their own racial group. In one study, 3-month-old white infants were shown photos of faces of white adults and black adults; they preferred the faces of whites. For 3-month-old black infants living in Africa, it was the reverse.
  • Scholars suggest that in evolutionary times we became hard-wired to make instantaneous judgments about whether someone is in our “in group” or not — because that could be lifesaving. A child who didn’t prefer his or her own group might have been at risk of being clubbed to death.
  • ...2 more annotations...
  • “It’s a feature of evolution,” says Mahzarin Banaji, a Harvard psychology professor who co-developed tests of unconscious biases. These suggest that people turn out to have subterranean racial and gender biases that they are unaware of and even disapprove of.
  • What’s particularly dispiriting is that this unconscious bias among whites toward blacks seems just as great among preschoolers as among senior citizens.
Javier E

Who You Are - NYTimes.com - 1 views

  • Before Kahneman and Tversky, people who thought about social problems and human behavior tended to assume that we are mostly rational agents. They assumed that people have control over the most important parts of their own thinking. They assumed that people are basically sensible utility-maximizers and that when they depart from reason it’s because some passion like fear or love has distorted their judgment.
  • Kahneman and Tversky conducted experiments. They proved that actual human behavior often deviates from the old models and that the flaws are not just in the passions but in the machinery of cognition. They demonstrated that people rely on unconscious biases and rules of thumb to navigate the world, for good and ill. Many of these biases have become famous: priming, framing, loss-aversion.
  • We are dual process thinkers. We have two interrelated systems running in our heads. One is slow, deliberate and arduous (our conscious reasoning). The other is fast, associative, automatic and supple (our unconscious pattern recognition). There is now a complex debate over the relative strengths and weaknesses of these two systems. In popular terms, think of it as the debate between “Moneyball” (look at the data) and “Blink” (go with your intuition).
  • ...3 more annotations...
  • We are not blank slates. All humans seem to share similar sets of biases. There is such a thing as universal human nature. The trick is to understand the universals and how tightly or loosely they tie us down.
  • We are players in a game we don’t understand. Most of our own thinking is below awareness. Fifty years ago, people may have assumed we are captains of our own ships, but, in fact, our behavior is often aroused by context in ways we can’t see. Our biases frequently cause us to want the wrong things. Our perceptions and memories are slippery, especially about our own mental states. Our free will is bounded. We have much less control over ourselves than we thought.
  • They also figured out ways to navigate around our shortcomings. Kahneman champions the idea of “adversarial collaboration” — when studying something, work with people you disagree with. Tversky had a wise maxim: “Let us take what the terrain gives.” Don’t overreach. Understand what your circumstances offer.
oliviaodon

How scientists fool themselves - and how they can stop : Nature News & Comment - 1 views

  • In 2013, five years after he co-authored a paper showing that Democratic candidates in the United States could get more votes by moving slightly to the right on economic policy1, Andrew Gelman, a statistician at Columbia University in New York City, was chagrined to learn of an error in the data analysis. In trying to replicate the work, an undergraduate student named Yang Yang Hu had discovered that Gelman had got the sign wrong on one of the variables.
  • Gelman immediately published a three-sentence correction, declaring that everything in the paper's crucial section should be considered wrong until proved otherwise.
  • Reflecting today on how it happened, Gelman traces his error back to the natural fallibility of the human brain: “The results seemed perfectly reasonable,” he says. “Lots of times with these kinds of coding errors you get results that are just ridiculous. So you know something's got to be wrong and you go back and search until you find the problem. If nothing seems wrong, it's easier to miss it.”
  • ...6 more annotations...
  • This is the big problem in science that no one is talking about: even an honest person is a master of self-deception. Our brains evolved long ago on the African savannah, where jumping to plausible conclusions about the location of ripe fruit or the presence of a predator was a matter of survival. But a smart strategy for evading lions does not necessarily translate well to a modern laboratory, where tenure may be riding on the analysis of terabytes of multidimensional data. In today's environment, our talent for jumping to conclusions makes it all too easy to find false patterns in randomness, to ignore alternative explanations for a result or to accept 'reasonable' outcomes without question — that is, to ceaselessly lead ourselves astray without realizing it.
  • Failure to understand our own biases has helped to create a crisis of confidence about the reproducibility of published results
  • Although it is impossible to document how often researchers fool themselves in data analysis, says Ioannidis, findings of irreproducibility beg for an explanation. The study of 100 psychology papers is a case in point: if one assumes that the vast majority of the original researchers were honest and diligent, then a large proportion of the problems can be explained only by unconscious biases. “This is a great time for research on research,” he says. “The massive growth of science allows for a massive number of results, and a massive number of errors and biases to study. So there's good reason to hope we can find better ways to deal with these problems.”
  • Although the human brain and its cognitive biases have been the same for as long as we have been doing science, some important things have changed, says psychologist Brian Nosek, executive director of the non-profit Center for Open Science in Charlottesville, Virginia, which works to increase the transparency and reproducibility of scientific research. Today's academic environment is more competitive than ever. There is an emphasis on piling up publications with statistically significant results — that is, with data relationships in which a commonly used measure of statistical certainty, the p-value, is 0.05 or less. “As a researcher, I'm not trying to produce misleading results,” says Nosek. “But I do have a stake in the outcome.” And that gives the mind excellent motivation to find what it is primed to find.
  • Another reason for concern about cognitive bias is the advent of staggeringly large multivariate data sets, often harbouring only a faint signal in a sea of random noise. Statistical methods have barely caught up with such data, and our brain's methods are even worse, says Keith Baggerly, a statistician at the University of Texas MD Anderson Cancer Center in Houston. As he told a conference on challenges in bioinformatics last September in Research Triangle Park, North Carolina, “Our intuition when we start looking at 50, or hundreds of, variables sucks.”
  • One trap that awaits during the early stages of research is what might be called hypothesis myopia: investigators fixate on collecting evidence to support just one hypothesis; neglect to look for evidence against it; and fail to consider other explanations.
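Baggerly’s warning about intuition failing on “50, or hundreds of, variables” can be illustrated with a short simulation (a hypothetical sketch, not from the article): screening a couple of hundred pure-noise variables against a pure-noise outcome reliably turns up a correlation strong enough to look like a real finding if the screening step is forgotten.

```python
import math
import random

random.seed(7)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

n_samples, n_variables = 50, 200
outcome = [random.gauss(0, 1) for _ in range(n_samples)]       # pure noise
predictors = [[random.gauss(0, 1) for _ in range(n_samples)]   # pure noise
              for _ in range(n_variables)]

# Screen all 200 noise variables against the noise outcome, keep the best.
best_r = max(abs(pearson_r(p, outcome)) for p in predictors)
print(f"Strongest correlation found among pure noise: r = {best_r:.2f}")
# With 200 candidate variables and only 50 samples, the maximum |r| is
# typically around 0.4 — impressive-looking unless you know it was the
# winner of 200 tries.
```

This is the false-pattern-in-randomness trap in miniature: the more variables one is free to examine, the stronger the best spurious relationship will appear.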
Javier E

They're Watching You at Work - Don Peck - The Atlantic - 2 views

  • Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
  • By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007.
  • The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught
  • ...52 more annotations...
  • By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential.
  • But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.
  • this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,”
  • Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased.
  • about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles.
  • He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers.
  • Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.
  • When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.
  • What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out.
  • Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
  • scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.”
  • consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
  • We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
  • a sociologist at Northwestern, spent parts of three years, from 2006 to 2008, interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests.
  • Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
  • the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team
  • In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
  • In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention.)
  • When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore”
  • Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
  • Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organization psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building)
  • There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people
  • the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much wider-ranging than what we had before
  • what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company.
  • Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
  • What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management.
  • I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
  • he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk.
  • His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
  • Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior.
  • people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency.
  • “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
  • People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call.
  • He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
  • the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
  • The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.
  • The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
  • having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.
  • Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire.
  • professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
  • Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
  • Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job
  • Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
  • When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers
  • For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers.
  • the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
  • because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
  • I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history,
  • Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
  • This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming.
  • all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic.
  • Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.
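Several of the excerpts above describe pipelines that fold many heterogeneous signals (code quality, adoption by other programmers, forum reputation, language patterns) into a single candidate score. The sketch below pictures that idea as a simple weighted combination. The signal names and weights are invented for illustration only; the actual features and models at firms like Gild or Evolv are proprietary and undisclosed.

```python
# Toy weighted-combination scoring, in the spirit of the pipelines
# described above. All names and weights are hypothetical.
WEIGHTS = {
    "code_quality": 0.4,      # simplicity, elegance, documentation
    "adoption": 0.3,          # how often others reuse the code
    "forum_reputation": 0.2,  # popularity of Q&A answers
    "language_signals": 0.1,  # phrases correlated with strong coders
}

def score(signals: dict) -> float:
    """Weighted sum of normalized (0-1) signals; missing signals count as 0."""
    return sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)

print(score({"code_quality": 0.9, "adoption": 0.7,
             "forum_reputation": 0.5, "language_signals": 0.8}))
```

A real system would learn the weights from outcome data rather than fix them by hand; the point here is only that disparate online traces can be collapsed into one number, which is what makes ranking every worker "across all companies, all the time" conceivable.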
sissij

Prejudice AI? Machine Learning Can Pick up Society's Biases | Big Think - 1 views

  • We think of computers as emotionless automatons and artificial intelligence as stoic, zen-like programs, mirroring Mr. Spock, devoid of prejudice and unable to be swayed by emotion.
  • They say that AI picks up our innate biases about sex and race, even when we ourselves may be unaware of them. The results of this study were published in the journal Science.
  • After interacting with certain users, she began spouting racist remarks.
  • It just learns everything from us and as our echo, picks up the prejudices we’ve become deaf to.
  • AI will have to be programmed to embrace equality.
  •  
    I just feel like this is so ironic. As the parents of the AI, humans themselves can't even be equal, so how can we expect the robots we make to perform perfect humanity and embrace flawless equality? I think equality itself is flawed. How can we define equality? Just as we cannot define fairness, we cannot define equality. I think this robot picking up racist remarks shows how children become racist. It also reflects how powerful cultural context and social norms are: they can shape us subconsciously. --Sissi (4/20/2017)
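The study published in Science measured this kind of learned bias as differences in cosine similarity between word vectors. Below is a minimal sketch of the idea; the three-dimensional "embeddings" are invented toy values, whereas the real study used vectors trained on billions of words of web text.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-d "embeddings" (hypothetical values for illustration).
vec = {
    "flower":     [0.9, 0.1, 0.0],
    "insect":     [0.1, 0.9, 0.0],
    "pleasant":   [0.8, 0.2, 0.1],
    "unpleasant": [0.2, 0.8, 0.1],
}

def association(word):
    """Positive if the word sits closer to 'pleasant' than to 'unpleasant'."""
    return cosine(vec[word], vec["pleasant"]) - cosine(vec[word], vec["unpleasant"])

# In this toy data, flowers read as more "pleasant" than insects,
# mirroring the association pattern the study found in real embeddings.
print(association("flower"), association("insect"))
```

Because the vectors are learned purely from human-written text, any association present in the text, including ones about race or gender, ends up encoded in these distances, with no one ever programming it in.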
katedriscoll

Metacontrol and body ownership: divergent thinking increases the virtual hand illusion | SpringerLink - 0 views

  • The virtual hand illusion (VHI) paradigm demonstrates that people tend to perceive agency and bodily ownership for a virtual hand that moves in synchrony with their own movements. Given that this kind of effect can be taken to reflect self–other integration (i.e., the integration of some external, novel event into the representation of oneself), and given that self–other integration has been previously shown to be affected by metacontrol states (biases of information processing towards persistence/selectivity or flexibility/integration), we tested whether the VHI varies in size depending on the metacontrol bias. Persistence and flexibility biases were induced by having participants carry out a convergent thinking (Remote Associates) task or divergent-thinking (Alternate Uses) task, respectively, while experiencing a virtual hand moving synchronously or asynchronously with their real hand. Synchrony-induced agency and ownership effects were more pronounced in the context of divergent thinking than in the context of convergent thinking, suggesting that a metacontrol bias towards flexibility promotes self–other integration.
  • As in previous studies, participants were more likely to experience subjective agency and ownership for a virtual hand if it moved in synchrony with their own, real hand. As predicted, the size of this effect was significantly moderated by the type of creativity task in the context of which the illusion was induced.
  • It is important to keep in mind the fact that our present findings were obtained in a paradigm that strongly interleaved what we considered the task prime (i.e., the particular creativity task) and the induction of the VHI—the process we aimed to prime. The practical reason to do so was to increase the probability that the metacontrol state that the creativity tasks were hypothesized to induce or establish would be sufficiently close in time to the synchrony manipulation to have an impact on the thereby induced changes in self-perception. However, this implies that we are unable to disentangle the effects of the task prime proper and the effects of possible interactions between this task prime and the synchrony manipulation. There are indeed reasons to assume that such interactions are not unlikely to have occurred, and that they would make perfect theoretical sense. The observation that the VHI was affected by the type of creativity task and performance in the creativity tasks was affected by the synchrony manipulation suggests some degree of overlap between the ways that engaging in particular creativity tasks and experiencing particular degrees of synchrony are able to bias perceived ownership and agency. In terms of our theoretical framework, this implies that engaging in divergent thinking biases metacontrol towards flexibility in similar ways as experiencing synchrony between one’s own movements and those of a virtual effector does, while engaging in convergent thinking biases metacontrol towards persistence as experiencing asynchrony does. What the present findings demonstrate is that both kinds of manipulation together bias the VHI in the predicted direction, but they do not allow to statistically or numerically separate and estimate the contribution that each of the two confounded manipulations might have made. Accordingly, the present findings should not be taken to provide conclusive evidence that priming tasks alone are able to change self-perception without being supported (and perhaps even enabled) by the experience of synchrony between proprioceptive and visual action feedback.
  •  
    This article relates to the ownership module. It talks about an experiment with VHI that is very interesting.
Javier E

COVID-19: Individually Rational, Collectively Disastrous - The Atlantic - 0 views

  • One major problem is that stopping the virus from spreading requires us to override our basic intuitions.
  • Three cognitive biases make it hard for us to avoid actions that put us in great collective danger.
  • 1. Misleading Feedback
  • some activities, including dangerous ones, provide negative feedback only rarely. When I am in a rush, I often cross the street at a red light. I understand intellectually that this is stupid, but I’ve never once seen evidence of my stupidity.
  • Exposure to COVID-19 works the same way. Every time you engage in a risky activity—like meeting up with your friends indoors—the world is likely to send you a signal that you made the right choice. I saw my pal and didn’t get sick. Clearly, I shouldn’t have worried so much about socializing!
  • Let’s assume, for example, that going to a large indoor gathering gives you a one in 20 chance of contracting COVID-19—a significant risk. Most likely, you’ll get away with it the first time. You’ll then infer that taking part in such gatherings is pretty safe, and will do so again. Eventually, you are highly likely to fall sick.
  • 2. Individually Rational, Collectively Disastrous
  • We tend to think behavior that is justifiable on the individual level is also justifiable on the collective level, and vice versa. If eating the occasional sugary treat is fine for me, it’s fine for all of us. And if smoking indoors is bad for me, it’s bad for all of us.
  • The dynamics of contagion in a pandemic do not work like that
  • if everyone who isn’t at especially high risk held similar dinner parties, some percentage of these events would lead to additional infections. And because each newly infected person might spread the virus to others, everyone’s decision to hold a one-off dinner party would quickly lead to a significant spike in transmissions.
  • The dynamic here is reminiscent of classic collective-action problems. If you go to one dinner, you’ll likely be fine. But if everyone goes to one dinner, the virus will spread with such speed that your own chances of contracting COVID-19 will also rise precipitously.
  • 3. Dangers Are Hard to Recognize and Avoid
  • Many of the dangers we face in life are easy to spot—and we have, over many millennia, developed biological instincts and social conventions to avoid them
  • When we deal with an unaccustomed danger, such as a new airborne virus, we can’t rely on any of these protective mechanisms.
  • The virus is invisible. This makes it hard to spot or anticipate. We don’t see little viral particles floating through the air
  • In time, we can overcome these biases (at least to some extent).
  • Social disapprobation can help
  • We all should do what we can to identify the biases from which we suffer—and try to stop them from influencing our behavior.
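The misleading-feedback arithmetic in the excerpts above can be made concrete with a short calculation. The sketch below uses the article's illustrative one-in-20 per-event risk and assumes, for simplicity, that exposures are independent.

```python
# Probability of getting infected at least once after n independent
# risky events, each carrying per-event risk p.
def cumulative_risk(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# With the article's illustrative 5% per-gathering risk:
for n in (1, 5, 10, 14, 20):
    print(f"{n:2d} gatherings -> {cumulative_risk(0.05, n):.0%} chance of infection")
```

After roughly 14 such gatherings the odds of having been infected at least once pass 50 percent, even though each individual event was very likely to end with the reassuring signal "I saw my friends and didn't get sick."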