TOK Friends: Group items tagged biases

Javier E

Research Shows That the Smarter People Are, the More Susceptible They Are to Cognitive ... - 0 views

  • While philosophers, economists, and social scientists had assumed for centuries that human beings are rational agents—reason was our Promethean gift—Kahneman, the late Amos Tversky, and others, including Shane Frederick (who developed the bat-and-ball question), demonstrated that we’re not nearly as rational as we like to believe.
  • When people face an uncertain situation, they don’t carefully evaluate the information or look up relevant statistics. Instead, their decisions depend on a long list of mental shortcuts, which often lead them to make foolish decisions.
  • in many instances, smarter people are more vulnerable to these thinking errors. Although we assume that intelligence is a buffer against bias—that’s why those with higher S.A.T. scores think they are less prone to these universal thinking mistakes—it can actually be a subtle curse.
  • ...7 more annotations...
  • they wanted to understand how these biases correlated with human intelligence.
  • self-awareness was not particularly useful: as the scientists note, “people who were aware of their own biases were not better able to overcome them.”
  • Perhaps our most dangerous bias is that we naturally assume that everyone else is more susceptible to thinking errors, a tendency known as the “bias blind spot.”
  • This “meta-bias” is rooted in our ability to spot systematic mistakes in the decisions of others—we excel at noticing the flaws of friends—and inability to spot those same mistakes in ourselves.
  • it applies to every single bias under consideration, from anchoring to so-called “framing effects.” In each instance, we readily forgive our own minds but look harshly upon the minds of other people.
  • intelligence seems to make things worse.
  • the driving forces behind biases—the root causes of our irrationality—are largely unconscious, which means they remain invisible to self-analysis and impermeable to intelligence. In fact, introspection can actually compound the error, blinding us to those primal processes responsible for many of our everyday failings. We spin eloquent stories, but these stories miss the point. The more we attempt to know ourselves, the less we actually understand.
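The bat-and-ball question mentioned in the first annotation is Frederick's classic Cognitive Reflection Test item: a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball. Intuition says the ball costs 10 cents. A quick worked check (a minimal Python sketch added here for illustration, not from the article):

```python
# Bat-and-ball: bat + ball = 1.10 and bat = ball + 1.00.
# Algebra: ball + (ball + 1.00) = 1.10  ->  ball = 0.05.
for ball in (0.10, 0.05):          # intuitive answer vs. correct answer
    bat = ball + 1.00
    print(f"ball=${ball:.2f}  bat=${bat:.2f}  total=${bat + ball:.2f}")
# ball=$0.10  bat=$1.10  total=$1.20   <- the intuitive answer overshoots
# ball=$0.05  bat=$1.05  total=$1.10   <- satisfies both conditions
```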
Javier E

Our Biased Brains - NYTimes.com - 0 views

  • The human brain seems to be wired so that it categorizes people by race in the first one-fifth of a second after seeing a face
  • Racial bias also begins astonishingly early: Even infants often show a preference for their own racial group. In one study, 3-month-old white infants were shown photos of faces of white adults and black adults; they preferred the faces of whites. For 3-month-old black infants living in Africa, it was the reverse.
  • in evolutionary times we became hard-wired to make instantaneous judgments about whether someone is in our “in group” or not — because that could be lifesaving. A child who didn’t prefer his or her own group might have been at risk of being clubbed to death.
  • ...7 more annotations...
  • I encourage you to test yourself at implicit.harvard.edu. It’s sobering to discover that whatever you believe intellectually, you’re biased about race, gender, age or disability.
  • unconscious racial bias turns up in children as soon as they have the verbal skills to be tested for it, at about age 4. The degree of unconscious bias then seems pretty constant: In tests, this unconscious bias turns out to be roughly the same for a 4- or 6-year-old as for a senior citizen who grew up in more racially oppressive times.
  • Many of these experiments on in-group bias have been conducted around the world, and almost every ethnic group shows a bias favoring its own. One exception: African-Americans.
  • in contrast to other groups, African-Americans do not have an unconscious bias toward their own. From young children to adults, they are essentially neutral and favor neither whites nor blacks.
  • even if we humans have evolved to have a penchant for racial preferences from a very young age, this is not destiny. We can resist the legacy that evolution has bequeathed us.
  • “We wouldn’t have survived if our ancestors hadn’t developed bodies that store sugar and fat,” Banaji says. “What made them survive is what kills us.” Yet we fight the battle of the bulge and sometimes win — and, likewise, we can resist a predisposition for bias against other groups.
  • Deep friendships, especially romantic relationships with someone of another race, also seem to mute bias
Javier E

Our Dangerous Inability to Agree on What is TRUE | Risk: Reason and Reality | Big Think - 2 views

  • Given that human cognition is never the product of pure dispassionate reason, but a subjective interpretation of the facts based on our feelings and biases and instincts, when can we ever say that we know who is right and who is wrong, about anything? When can we declare a fact so established that it’s fair to say, without being called arrogant, that those who deny this truth don’t just disagree…that they’re just plain wrong.
  • This isn’t about matters of faith, or questions of ultimately unknowable things which by definition can not be established by fact. This is a question about what is knowable, and provable by careful objective scientific inquiry, a process which includes challenging skepticism rigorously applied precisely to establish what, beyond any reasonable doubt, is in fact true.
  • With enough careful investigation and scrupulously challenged evidence, we can establish knowable truths that are not just the product of our subjective motivated reasoning. We can apply our powers of reason and our ability to objectively analyze the facts and get beyond the point where what we 'know' is just an interpretation of the evidence through the subconscious filters of who we trust and our biases and instincts. We can get to the point where if someone wants to continue to believe that the sun revolves around the earth, or that vaccines cause autism, or that evolution is a deceit, it is no longer arrogant - though it may still be provocative - to call those people wrong.
  • ...8 more annotations...
  • This matters for social animals like us, whose safety and very survival ultimately depend on our ability to coexist. Views that have more to do with competing tribal biases than objective interpretations of the evidence create destructive and violent conflict. Denial of scientifically established ‘truth’ causes all sorts of serious direct harms. Consider a few examples:
    • The widespread faith-based rejection of evolution feeds intense polarization.
    • Continued fear of vaccines is allowing nearly eradicated diseases to return.
    • Those who deny the evidence of the safety of genetically modified food are also denying the immense potential benefits of that technology to millions.
    • Denying the powerful evidence for climate change puts us all in serious jeopardy should that evidence prove to be true.
  • To address these harms, we need to understand why we often have trouble agreeing on what is true (what some have labeled science denialism). Social science has taught us that human cognition is innately, and inescapably, a process of interpreting the hard data about our world – its sights and sounds and smells and facts and ideas – through subjective affective filters that help us turn those facts into the judgments and choices and behaviors that help us survive. The brain’s imperative, after all, is not to reason. Its job is survival, and subjective cognitive biases and instincts have developed to help us make sense of information in the pursuit of safety, not so that we might come to know ‘THE universal absolute truth.’
  • This subjective cognition is built-in, subconscious, beyond free will, and unavoidably leads to different interpretations of the same facts.
  • But here is a truth with which I hope we can all agree. Our subjective system of cognition can be dangerous.
  • It can produce perceptions that conflict with the evidence, what I call The Perception Gap, which can in turn produce profound harm: disagreements that create destructive and violent social conflict, dangerous personal choices that feel safe but aren’t, and policies more consistent with how we feel than what is in fact in our best interest. The Perception Gap may in fact be potentially more dangerous than any individual risk we face.
  • We need to recognize the greater threat that our subjective system of cognition can pose, and in the name of our own safety and the welfare of the society on which we depend, do our very best to rise above it or, when we can’t, account for this very real danger in the policies we adopt.
  • "Everyone engages in motivated reasoning, everyone screens out unwelcome evidence, no one is a fully rational actor. Sure. But when it comes to something with such enormous consequences to human welfare
  • I think it's fair to say we have an obligation to confront our own ideological priors. We have an obligation to challenge ourselves, to push ourselves, to be suspicious of conclusions that are too convenient, to be sure that we're getting it right.
Javier E

Buddhism Is More 'Western' Than You Think - The New York Times - 0 views

  • Not only have Buddhist thinkers for millenniums been making very much the kinds of claims that Western philosophers and psychologists make — many of these claims are looking good in light of modern Western thought.
  • In fact, in some cases Buddhist thought anticipated Western thought, grasping things about the human mind, and its habitual misperception of reality, that modern psychology is only now coming to appreciate.
  • “Things exist but they are not real.” I agree with Gopnik that this sentence seems a bit hard to unpack. But if you go look at the book it is taken from, you’ll find that the author himself, Mu Soeng, does a good job of unpacking it.
  • ...14 more annotations...
  • It turns out Soeng is explaining an idea that is central to Buddhist philosophy: “not self” — the idea that your “self,” as you intuitively conceive it, is actually an illusion. Soeng writes that the doctrine of not-self doesn’t deny an “existential personality” — it doesn’t deny that there is a you that exists; what it denies is that somewhere within you is an “abiding core,” a kind of essence-of-you that remains constant amid the flux of thoughts, feelings, perceptions and other elements that constitute your experience. So if by “you” we mean a “self” that features an enduring essence, then you aren’t real.
  • In recent decades, important aspects of the Buddhist concept of not-self have gotten support from psychology. In particular, psychology has bolstered Buddhism’s doubts about our intuition of what you might call the “C.E.O. self” — our sense that the conscious “self” is the initiator of thought and action.
  • recognizing that “you” are not in control, that you are not a C.E.O., can help give “you” more control. Or, at least, you can behave more like a C.E.O. is expected to behave: more rationally, more wisely, more reflectively; less emotionally, less rashly, less reactively.
  • Suppose that, via mindfulness meditation, you observe a feeling like anxiety or anger and, rather than let it draw you into a whole train of anxious or angry thoughts, you let it pass away. Though you experience the feeling — and in a sense experience it more fully than usual — you experience it with “non-attachment” and so evade its grip. And you now see the thoughts that accompanied it in a new light — they no longer seem like trustworthy emanations from some “I” but rather as transient notions accompanying transient feelings.
  • Brain-scan studies have produced tentative evidence that this lusting and disliking — embracing thoughts that feel good and rejecting thoughts that feel bad — lies near the heart of certain “cognitive biases.” If such evidence continues to accumulate, the Buddhist assertion that a clear view of the world involves letting go of these lusts and dislikes will have drawn a measure of support from modern science.
  • Note how, in addition to being therapeutic, this clarifies your view of the world. After all, the “anxious” or “angry” trains of thought you avoid probably aren’t objectively true. They probably involve either imagining things that haven’t happened or making subjective judgments about things that have.
  • the Buddhist idea of “not-self” grows out of the belief undergirding this mission — that the world is pervasively governed by causal laws. The reason there is no “abiding core” within us is that the ever-changing forces that impinge on us — the sights, the sounds, the smells, the tastes — are constantly setting off chain reactions inside of us.
  • Buddhism’s doubts about the distinctness and solidity of the “self” — and of other things, for that matter — rest on a recognition of the sense in which pervasive causality means pervasive fluidity.
  • Buddhism long ago generated insights that modern psychology is only now catching up to, and these go beyond doubts about the C.E.O. self.
  • psychology has lately started to let go of its once-sharp distinction between “cognitive” and “affective” parts of the mind; it has started to see that feelings are so finely intertwined with thoughts as to be part of their very coloration. This wouldn’t qualify as breaking news in Buddhist circles.
  • There’s a broader and deeper sense in which Buddhist thought is more “Western” than stereotype suggests. What, after all, is more Western than science’s emphasis on causality, on figuring out what causes what, and hoping to thus explain why all things do the things they do?
  • All we can do is clear away as many impediments to comprehension as possible. Science has a way of doing that — by insisting that entrants in its “competitive storytelling” demonstrate explanatory power in ways that are publicly observable, thus neutralizing, to the extent possible, subjective biases that might otherwise prevail.
  • Buddhism has a different way of doing it: via meditative disciplines that are designed to attack subjective biases at the source, yielding a clearer view of both the mind itself and the world beyond it.
  • The results of these two inquiries converge to a remarkable extent — an extent that can be appreciated only in light of the last few decades of progress in psychology and evolutionary science. At least, that’s my argument.
katedriscoll

Avoiding Psychological Bias in Decision Making - From MindTools.com - 0 views

  • In this scenario, your decision was affected by confirmation bias. With this, you interpret market information in a way that confirms your preconceptions – instead of seeing it objectively – and you make wrong decisions as a result. Confirmation bias is one of many psychological biases to which we're all susceptible when we make decisions. In this article, we'll look at common types of bias, and we'll outline what you can do to avoid them.
  • Psychologists Daniel Kahneman, Paul Slovic, and Amos Tversky introduced the concept of psychological bias in the early 1970s. They published their findings in their 1982 book, "Judgment Under Uncertainty." They explained that psychological bias – also known as cognitive bias – is the tendency to make decisions or take action in an illogical way. For example, you might subconsciously make selective use of data, or you might feel pressured to make a decision by powerful colleagues. Psychological bias is the opposite of common sense and clear, measured judgment. It can lead to missed opportunities and poor decision making.
  • ...1 more annotation...
  • Below, we outline five psychological biases that are common in business decision making. We also look at how you can overcome them, and thereby make better decisions.
caelengrubb

Cognitive Bias and Public Health Policy During the COVID-19 Pandemic | Critical Care Me... - 0 views

  • As the coronavirus disease 2019 (COVID-19) pandemic abates in many countries worldwide, and a new normal phase arrives, critically assessing policy responses to this public health crisis may promote better preparedness for the next wave or the next pandemic
  • A key lesson is revealed by one of the earliest and most sizeable US federal responses to the pandemic: the investment of $3 billion to build more ventilators. These extra ventilators, even had they been needed, would likely have done little to improve population survival, because of the high mortality among patients with COVID-19 who require mechanical ventilation and the diversion of clinicians away from more health-promoting endeavors.
  • Why are so many people distressed at the possibility that a patient in plain view—such as a person presenting to an emergency department with severe respiratory distress—would be denied an attempt at rescue because of a ventilator shortfall, but do not mount similarly impassioned concerns regarding failures to implement earlier, more aggressive physical distancing, testing, and contact tracing policies that would have saved far more lives?
  • ...12 more annotations...
  • These cognitive errors, which distract leaders from optimal policy making and citizens from taking steps to promote their own and others’ interests, cannot merely be ascribed to repudiations of science.
  • The first error that thwarts effective policy making during crises stems from what economists have called the “identifiable victim effect.” Humans respond more aggressively to threats to identifiable lives, ie, those that an individual can easily imagine being their own or belonging to people they care about (such as family members) or care for (such as a clinician’s patients) than to the hidden, “statistical” deaths reported in accounts of the population-level tolls of the crisis
  • Yet such views represent a second reason for the broad endorsement of policies that prioritize saving visible, immediately jeopardized lives: that humans are imbued with a strong and neurally mediated3 tendency to predict outcomes that are systematically more optimistic than observed outcomes
  • A third driver of misguided policy responses is that humans are present biased, ie, people tend to prefer immediate benefits to even larger benefits in the future.
  • Even if the tendency to prioritize visibly affected individuals could be resisted, many people would still place greater value on saving a life today than a life tomorrow.
  • Similar psychology helps explain the reluctance of many nations to limit refrigeration and air conditioning, forgo fuel-inefficient transportation, and take other near-term steps to reduce the future effects of climate change
  • The fourth contributing factor is that virtually everyone is subject to omission bias, which involves the tendency to prefer that a harm occur by failure to take action rather than as direct consequence of the actions that are taken
  • Although those who set policies for rationing ventilators and other scarce therapies do not intend the deaths of those who receive insufficient priority for these treatments, such policies nevertheless prevent clinicians from taking all possible steps to save certain lives.
  • An important goal of governance is to mitigate the effects of these and other biases on public policy and to effectively communicate the reasons for difficult decisions to the public. However, health systems’ routine use of the wartime terminology of “standing up” and “standing down” intensive care units illustrates problematic messaging aimed at the need to address immediate danger.
  • Second, had governments, health systems, and clinicians better understood the “identifiable victim effect,” they may have realized that promoting flattening the curve as a way to reduce pressure on hospitals and health care workers would be less effective than promoting early restaurant and retail store closures by saying “The lives you save when you close your doors include your own.”
  • Third, these leaders’ routine use of terms such as “nonpharmaceutical interventions”9 portrays public health responses negatively by labeling them according to what they are not. Instead, support for heavily funding contact tracing could have been generated by communicating such efforts as “lifesaving.”
  • Fourth, although errors of human cognition are challenging to surmount, policy making, even in a crisis, occurs over a sufficient period to be meaningfully improved by deliberate efforts to counter untoward biases
huffem4

Infographic: 11 Cognitive Biases That Influence Political Outcomes - 1 views

  • when searching for facts, our own cognitive biases often get in the way.
  • The media, for example, can exploit our tendency to assign stereotypes to others by only providing catchy, surface-level information.
  • People exhibit confirmation bias when they seek information that only affirms their pre-existing beliefs. This can cause them to become overly rigid in their political opinions, even when presented with conflicting ideas or evidence.
  • ...2 more annotations...
  • In one experiment, participants chose to either support or oppose a given sociopolitical issue. They were then presented with evidence that was conflicting, affirming, or a combination of both. In all scenarios, participants were most likely to stick with their initial decisions. Of those presented with conflicting evidence, just one in five changed their stance. Furthermore, participants who maintained their initial positions became even more confident in the superiority of their decision—a testament to how influential confirmation bias can be.
  • Coverage bias, in the context of politics, is a form of media bias where certain politicians or topics are disproportionately covered.
manhefnawi

Views on bias can be biased | Science News - 1 views

  • So preconceived notions of bias do exist. But tackling the problem can result in improvement. “Everyone is going to be better if we address this inequality,” he notes. “The more ideas we bring to the table, the better our science is going to be.” 
huffem4

Algorithms reveal changes in stereotypes | Stanford News - 2 views

  • The researchers used word embeddings – an algorithmic technique that can map relationships and associations between words –  to measure changes in gender and ethnic stereotypes over the past century in the United States.
  • Our prior research has shown that embeddings effectively capture existing stereotypes and that those biases can be systematically removed. But we think that, instead of removing those stereotypes, we can also use embeddings as a historical lens for quantitative, linguistic and sociological analyses of biases.”
  • Take the word “honorable.” Using the embedding tool, previous research found that the adjective has a closer relationship to the word “man” than the word “woman.”
  • ...4 more annotations...
  • One of the key findings to emerge was how biases toward women changed for the better – in some ways – over time.
  • For example, adjectives such as “intelligent,” “logical” and “thoughtful” were associated more with men in the first half of the 20th century. But since the 1960s, the same words have increasingly been associated with women with every following decade, correlating with the women’s movement in the 1960s, although a gap still remains.
  • For example, in the 1910s, words like “barbaric,” “monstrous” and “cruel” were the adjectives most associated with Asian last names. By the 1990s, those adjectives were replaced by words like “inhibited,” “passive” and “sensitive.” This linguistic change correlates with a sharp increase in Asian immigration to the United States in the 1960s and 1980s and a change in cultural stereotypes, the researchers said.
  • “It underscores the importance of humanists and computer scientists working together. There is a power to these new machine-learning methods in humanities research that is just being understood,”
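The association measurement described above is easy to sketch. The study trained separate embeddings for each decade of text; the minimal example below instead loads a single pretrained GloVe model via gensim (the model choice and word lists are illustrative assumptions, not the researchers' actual data or code):

```python
# Sketch: is an adjective closer to "man" or to "woman" in embedding space?
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-100")  # illustrative pretrained vectors

def gender_association(adjective: str) -> float:
    """Positive -> closer to 'man'; negative -> closer to 'woman'."""
    return model.similarity(adjective, "man") - model.similarity(adjective, "woman")

for adj in ["honorable", "intelligent", "logical", "thoughtful"]:
    print(f"{adj:12s} {gender_association(adj):+.3f}")
```

Repeating the same measurement with embeddings trained on different decades of text is what lets the method act as the "historical lens" the researchers describe.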
mmckenziejr01

The Most Egregious Straw Men of 2016 | Inverse - 0 views

  • Fallacies are illicit shortcuts in reasoning, bad arguments that sound good but don’t actually make logical sense. Politicians and other public figures use them ad nauseam in speeches and debates to better capture the hearts and minds of their audience.
  • But of all the logical fallacies out there, one stands out as being particularly powerful and popular, especially in politics: the straw man.
  • But that’s the truth, and the truth is irrelevant when it comes to fallacies. Just the fact that Clinton mentioned the term open borders, combined with her more liberal stance on immigration, was enough for Trump to say that she’s in favor of totally open borders
  • ...5 more annotations...
  • The straw man fallacy involves the construction of a second argument that to some degree resembles, in a simplified or exaggerated way, the argument that your opponent is really making. It is much easier for you to attack that perverted point than it is to address the original point being made.
  • Clinton did not say she wanted to get rid of the Second Amendment in any sense. She had expressed support for expanding background checks and a possible assault weapons ban. Those things are reasonable (moderate, even), so it’s much simpler for Trump to ignore her argument and make one up for her
  • What are in large part reasonable measures taken to avoid unnecessary offences are recast by these Republicans as softness or as attacks on people’s freedom of speech. By associating political correctness with these attributes, Republican politicians can ensure that their constituents remain hostile to the idea. Then, when they come across an argument in need of simplification, they can call back to that already-crafted association.
  • Sanders’s healthcare plan would have ended Medicare and the ACA as we know them, but only to replace them with a universal healthcare system. No one stood to lose their coverage, but the way Clinton was arguing, it looked like people might. By essentially leaving out half of Sanders’s argument, Clinton made a case against a fictional version of Sanders that seemed just real enough to fool the average voter.
  • Still, almost everyone today understands that the Earth isn’t flat. They recognize that, as Scaramucci said, “science” got one wrong there. By equating these two concepts, he assumes any argument that recognizes climate change to also be one that claims the Earth is flat. For that reason, this particular brand of climate change denial wins the award for best (or maybe worst) straw man of 2016.
  • Here are a few examples of candidates using straw man fallacies to try to win over voters during the 2016 election. This shows just how prevalent cognitive biases are in our everyday lives, without us even noticing them.
mmckenziejr01

Forer effect - The Skeptic's Dictionary - Skepdic.com - 0 views

  • The Forer effect refers to the tendency of people to rate sets of statements as highly accurate for them personally even though the statements could apply to many people.
  • Forer gave a personality test to his students, ignored their answers, and gave each student the above evaluation. He asked them to evaluate the evaluation from 0 to 5, with "5" meaning the recipient felt the evaluation was an "excellent" assessment and "4" meaning the assessment was "good." The class average evaluation was 4.26. That was in 1948. The test has been repeated hundreds of times with psychology students, and the average is still around 4.2 out of 5, or 84% accurate.
  • ...5 more annotations...
  • In short, Forer convinced people he could successfully read their character.
  • his personality analysis was taken from a newsstand astrology column and was presented to people without regard to their sun sign.
  • People tend to accept claims about themselves in proportion to their desire that the claims be true rather than in proportion to the empirical accuracy of the claims as measured by some non-subjective standard.
  • The Forer effect, however, only partially explains why so many people accept as accurate occult and pseudoscientific character assessment procedures
  • Favorable assessments are "more readily accepted as accurate descriptions of subjects' personalities than unfavorable" ones. But unfavorable claims are "more readily accepted when delivered by people with high perceived status than low perceived status."
  • From the reading, the Forer effect seemed to be a good example of a couple of cognitive biases working together. The experiment and some of the findings are very interesting.
Javier E

Implicit Bias Training Isn't Improving Corporate Diversity - Bloomberg - 0 views

  • despite the growing adoption of unconscious bias training, there is no convincing scientific evidence that it works
  • In fact, much of the academic evidence on implicit bias interventions highlights their weakness as a method for boosting diversity and inclusion. Instructions to suppress stereotypes often have the opposite effect, and prejudice reduction programs are much more effective when people are already open-minded, altruistic, and concerned about their prejudices to begin with.
  • This is because the main problem with stereotypes is not that people are unaware of them, but that they agree with them (even when they don’t admit it to others). In other words, most people have conscious biases.
  • ...11 more annotations...
  • Moreover, to the extent that people have unconscious biases, there is no clear-cut way to measure them
  • The main tool for measuring unconscious bias, the Implicit Association Test (IAT), has been in use for twenty years but is highly contested.
  • meta-analytic reviews have concluded that IAT scores — in other words, unconscious biases — are very weak predictors of actual behavior.
  • The vast majority of people labeled “racist” by these tests behave the same as the vast majority of people labeled “non-racist.” Do we really want to tell people who behave in non-racist ways that they are unconsciously racists, or, conversely, tell people who behave in racist ways that they aren’t, deep down, racists at all?
  • Instead of worrying what people think of something or someone deep down, we should focus on ways to eliminate the toxic or prejudiced behaviors we can see. That alone will drive a great deal of progress.
  • Scientific evidence suggests that the relationship between attitudes and behaviors is much weaker than one might expect.
  • Even if we lived in a world in which humans always acted in accordance with their beliefs, there would remain better ways to promote diversity than by policing people’s thoughts.
  • Organizations should focus less on extinguishing their employees’ unconscious thoughts, and more on nurturing ethical, benevolent, and inclusive behaviors.
  • This means focusing less on employees’ attitudes, and more on organizational policies and systems, as these play the key role in creating the conditions that entice employees (and leaders) to behave in more or less inclusive ways.
  • This gets to the underlying flaw with unconscious bias trainings: behaviors, not thoughts, should be the target of diversity and inclusion interventions.
  • Tomas Chamorro-Premuzic is chief talent scientist at Manpower Group and a professor at University College London and Columbia University.
jaxredd10

What Is Cognitive Bias? - 0 views

  • Because of this, subtle biases can creep in and influence the way you see and think about the world. The concept of cognitive bias was first introduced by researchers Amos Tversky and Daniel Kahneman in 1972. Since then, researchers have described a number of different types of biases that affect decision-making in a wide range of areas including social behavior, cognition, behavioral economics, education, management, healthcare, business, and finance.
  • People sometimes confuse cognitive biases with logical fallacies, but the two are not the same. A logical fallacy stems from an error in a logical argument, while a cognitive bias is rooted in thought processing errors often arising from problems with memory, attention, attribution, and other mental mistakes.
Javier E

GPT-4 has arrived. It will blow ChatGPT out of the water. - The Washington Post - 0 views

  • GPT-4, in contrast, is a state-of-the-art system capable of not just creating words but also describing images in response to a person’s simple written commands.
  • When shown a photo of a boxing glove hanging over a wooden seesaw with a ball on one side, for instance, a person can ask what will happen if the glove drops, and GPT-4 will respond that it would hit the seesaw and cause the ball to fly up.
  • an AI program, known as a large language model, that early testers had claimed was remarkably advanced in its ability to reason and learn new things
  • ...22 more annotations...
  • Those promises have also fueled anxiety over how people will be able to compete for jobs outsourced to eerily refined machines or trust the accuracy of what they see online.
  • Officials with the San Francisco lab said GPT-4’s “multimodal” training across text and images would allow it to escape the chat box and more fully emulate a world of color and imagery, surpassing ChatGPT in its “advanced reasoning capabilities.”
  • A person could upload an image and GPT-4 could caption it for them, describing the objects and scene.
  • AI language models often confidently offer wrong answers because they are designed to spit out cogent phrases, not actual facts. And because they have been trained on internet text and imagery, they have also learned to emulate human biases of race, gender, religion and class.
  • GPT-4 still makes many of the errors of previous versions, including “hallucinating” nonsense, perpetuating social biases and offering bad advice. It also lacks knowledge of events that happened after about September 2021, when its training data was finalized, and “does not learn from its experience,” limiting people’s ability to teach it new things.
  • Microsoft has invested billions of dollars in OpenAI in the hope its technology will become a secret weapon for its workplace software, search engine and other online ambitions. It has marketed the technology as a super-efficient companion that can handle mindless work and free people for creative pursuits, helping one software developer to do the work of an entire team or allowing a mom-and-pop shop to design a professional advertising campaign without outside help.
  • it could lead to business models and creative ventures no one can predict.
  • sparked criticism that the companies are rushing to exploit an untested, unregulated and unpredictable technology that could deceive people, undermine artists’ work and lead to real-world harm.
  • the company held back the feature to better understand potential risks. As one example, she said, the model might be able to look at an image of a big group of people and offer up known information about them, including their identities — a possible facial recognition use case that could be used for mass surveillance.
  • OpenAI researchers wrote, “As GPT-4 and AI systems like it are adopted more widely,” they “will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in.”
  • “We can agree as a society broadly on some harms that a model should not contribute to,” such as building a nuclear bomb or generating child sexual abuse material, she said. “But many harms are nuanced and primarily affect marginalized groups,” she added, and those harmful biases, especially across other languages, “cannot be a secondary consideration in performance.”
  • OpenAI said its new model would be able to handle more than 25,000 words of text, a leap forward that could facilitate longer conversations and allow for the searching and analysis of long documents.
  • OpenAI developers said GPT-4 was more likely to provide factual responses and less likely to refuse harmless requests
  • Duolingo, the language learning app, has already used GPT-4 to introduce new features, such as an AI conversation partner and a tool that tells users why an answer was incorrect.
  • The company did not share evaluations around bias that have become increasingly common after pressure from AI ethicists.
  • GPT-4 will have competition in the growing field of multisensory AI. DeepMind, an AI firm owned by Google’s parent company Alphabet, last year released a “generalist” model named Gato that can describe images and play video games. And Google this month released a multimodal system, PaLM-E, that folded AI vision and language expertise into a one-armed robot on wheels: If someone told it to go fetch some chips, for instance, it could comprehend the request, wheel over to a drawer and choose the right bag.
  • The systems, though — as critics and the AI researchers are quick to point out — are merely repeating patterns and associations found in their training data without a clear understanding of what it’s saying or when it’s wrong.
  • GPT-4, the fourth “generative pre-trained transformer” since OpenAI’s first release in 2018, relies on a breakthrough neural-network technique in 2017 known as the transformer that rapidly advanced how AI systems can analyze patterns in human speech and imagery.
  • The systems are “pre-trained” by analyzing trillions of words and images taken from across the internet: news articles, restaurant reviews and message-board arguments; memes, family photos and works of art.
  • Giant supercomputer clusters of graphics processing chips map out their statistical patterns — learning which words tended to follow each other in phrases, for instance — so that the AI can mimic those patterns, automatically crafting long passages of text or detailed images, one word or pixel at a time.
  • In 2019, the company refused to publicly release GPT-2, saying it was so good they were concerned about the “malicious applications” of its use, from automated spam avalanches to mass impersonation and disinformation campaigns.
  • Altman has also marketed OpenAI’s vision with the aura of science fiction come to life. In a blog post last month, he said the company was planning for ways to ensure that “all of humanity” benefits from “artificial general intelligence,” or AGI — an industry term for the still-fantastical idea of an AI superintelligence that is generally as smart as, or smarter than, the humans themselves.
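The annotations above describe the core pattern-learning idea: tally which tokens tend to follow which, then generate one token at a time. A toy bigram sampler makes the idea concrete (a deliberately tiny sketch; GPT-4's transformer is vastly more sophisticated, and the corpus here is invented):

```python
# Toy next-word model: count which word follows which in a corpus,
# then sample a continuation one word at a time from those counts.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length):
        nexts = follows.get(word)
        if not nexts:                      # dead end: no observed successor
            break
        word = random.choices(list(nexts), weights=nexts.values())[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))   # e.g. "the cat sat on the rug"
```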
Javier E

The American Scholar: Hardwired for Talk? - Jessica Love - 0 views

  • during the last decade, the pendulum of scientific thought has begun its inevitable swing in the other direction. These days, general cognitive mechanisms, not language-specific ones, are all the rage. We humans are really smart. We’re fantastic at recognizing patterns in our environments—patterns that may have nothing to do with language. Who says that the same abilities that allow us to play the violin aren’t also sufficient for learning subject-verb agreement? Perhaps speech isn’t genetically privileged so much as babies are just really motivated to learn to communicate.
  • If the brain did evolve for language, how did it do so? An idea favored by some scholars is that better communicators may also have been more reproductively successful. Gradually, as the prevalence of these smooth talkers’ offspring increased in the population, the concentration of genes favorable to linguistic communication may have increased as well.
  • two recent articles, one published in 2009 in the Proceedings of the National Academy of the Sciences and a 2012 follow-up in PLOS ONE (freely available), rebut this approach
  • ...4 more annotations...
  • Over the course of many generations, the gene pool thickens with helpful alleles until—voila!—the overwhelming number of these alleles are helpful and learners’ guesses are so uncannily accurate as to seem instinctual. Makes sense, no? But now consider that languages change. (And in the real world they do—quickly.) If the language’s principles switch often, many of those helpfully biased alleles are suddenly not so helpful at all. For fast-changing languages, the model finds, neutral alleles win out.
  • when the language is programmed to hardly mutate at all, the genes have a chance to adapt to the new language. The two populations become genetically distinct, their alleles heavily biased toward the idiosyncrasies of their local language—precisely what we don’t see in the real world
  • when the language is programmed to change quickly, neutral alleles are again favored.
  • maybe our brains couldn’t have evolved to handle language’s more arbitrary properties, because languages never stay the same and, as far as we know, they never have. What goes unspoken here is that the simulations seem to suggest that truly universal properties—such as language’s hierarchical nature—could have been encoded in our brains.
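The simulation logic these annotations describe can be caricatured in a few lines. In the toy below, agents carry an allele that is either neutral or biased toward one setting of a binary language principle, fitness rewards matching the current language, and the language flips with some probability each generation. All fitness values, rates, and population sizes are invented for illustration; the actual published models are far richer:

```python
# Toy gene-language coevolution: biased alleles pay off only while the
# language keeps the setting they encode; frequent language change
# favors neutral alleles. All numbers are illustrative only.
import random

def neutral_share(flip_prob, generations=500, pop=1000):
    language = 0                                  # binary principle's setting
    agents = [random.choice(["neutral", 0, 1]) for _ in range(pop)]
    for _ in range(generations):
        if random.random() < flip_prob:           # the language itself changes
            language = 1 - language
        # learning success: matching bias > neutral > mismatched bias
        weights = [1.0 if a == "neutral" else (1.2 if a == language else 0.8)
                   for a in agents]
        agents = random.choices(agents, weights=weights, k=pop)
    return agents.count("neutral") / pop

print("slow-changing language:", neutral_share(flip_prob=0.001))
print("fast-changing language:", neutral_share(flip_prob=0.2))
```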
Javier E

The Signal and the Noise: Why So Many Predictions Fail-but Some Don't: Nate Silver: 978... - 0 views

  • Nate Silver built an innovative system for predicting baseball performance, predicted the 2008 election within a hair’s breadth, and became a national sensation as a blogger—all by the time he was thirty. The New York Times now publishes FiveThirtyEight.com, where Silver is one of the nation’s most influential political forecasters.
  • Silver examines the world of prediction, investigating how we can distinguish a true signal from a universe of noisy data. Most predictions fail, often at great cost to society, because most of us have a poor understanding of probability and uncertainty. Both experts and laypeople mistake more confident predictions for more accurate ones. But overconfidence is often the reason for failure. If our appreciation of uncertainty improves, our predictions can get better too. This is the “prediction paradox”: The more humility we have about our ability to make predictions, the more successful we can be in planning for the future.
  • the most accurate forecasters tend to have a superior command of probability, and they tend to be both humble and hardworking. They distinguish the predictable from the unpredictable, and they notice a thousand little details that lead them closer to the truth. Because of their appreciation of probability, they can distinguish the signal from the noise.
  • ...3 more annotations...
  • Baseball, weather forecasting, earthquake prediction, economics, and polling: In all of these areas, Silver finds predictions gone bad thanks to biases, vested interests, and overconfidence. But he also shows where sophisticated forecasters have gotten it right (and occasionally been ignored to boot)
  • This is the best general-readership book on applied statistics that I've read. Short review: if you're interested in science, economics, or prediction: read it. It's full of interesting cases, builds intuition, and is a readable example of Bayesian thinking.
  • The core concept is this: prediction is a vital part of science, of business, of politics, of pretty much everything we do. But we're not very good at it, and fall prey to cognitive biases and other systemic problems such as information overload that make things worse. However, we are simultaneously learning more about how such things occur and that knowledge can be used to make predictions better -- and to improve our models in science, politics, business, medicine, and so many other areas.
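The "Bayesian thinking" the review praises is, at bottom, disciplined probability updating. A minimal worked example (all numbers invented for illustration):

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E).
# Prior belief that a candidate wins is 0.30; a favorable poll result
# is twice as likely to appear if the candidate really is winning.
prior = 0.30
p_poll_if_win = 0.60
p_poll_if_lose = 0.30

p_poll = p_poll_if_win * prior + p_poll_if_lose * (1 - prior)
posterior = p_poll_if_win * prior / p_poll
print(f"P(win | poll) = {posterior:.3f}")   # 0.462
```

The humility Silver describes amounts to reporting that updated 46 percent honestly, rather than rounding it up to certainty.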
kortanekev

Beyond Drake's Equation --"Life on Other Planets is More Optimism Than Science" (Monday... - 0 views

  • the Princeton University researchers have found that the expectation that life — from bacteria to sentient beings — has or will develop on other planets as on Earth might be based more on optimism than scientific evidence.
  • The optimism fallacy -- being biased toward what one hopes to find -- is impeding the accuracy of our research into life beyond Earth. This fallacy causes one to form false assumptions from data we have already collected and to jump to conclusions that are most probably fallacious.
Javier E

Covering politics in a "post-truth" America | Brookings Institution - 0 views

  • The media scandal of 2016 isn’t so much about what reporters failed to tell the American public; it’s about what they did report on, and the fact that it didn’t seem to matter.
  • Facebook and Snapchat and the other social media sites should rightfully be doing a lot of soul-searching about their role as the most efficient distribution network for conspiracy theories, hatred, and outright falsehoods ever invented.
  • I’ve been obsessively looking back over our coverage, too, trying to figure out what we missed along the way to the upset of the century
  • ...28 more annotations...
  • (An early conclusion: while we were late to understand how angry white voters were, a perhaps even more serious lapse was in failing to recognize how many disaffected Democrats there were who would stay home rather than support their party’s flawed candidate.)
  • Stories that would have killed any other politician—truly worrisome revelations about everything from the federal taxes Trump dodged to the charitable donations he lied about, the women he insulted and allegedly assaulted, and the mob ties that have long dogged him—did not stop Trump from thriving in this election year
  • the Oxford Dictionaries announced that “post-truth” had been chosen as the 2016 word of the year, defining it as a condition “in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.”
  • Meantime, Trump personally blacklisted news organizations like Politico and The Washington Post when they published articles he didn’t like during the campaign, has openly mused about rolling back press freedoms enshrined by the U.S. Supreme Court, and has now named Stephen Bannon, until recently the executive chairman of Breitbart—a right-wing fringe website with a penchant for conspiracy theories and anti-Semitic tropes—to serve as one of his top White House advisers.
  • none of this has any modern precedent. And what makes it unique has nothing to do with the outcome of the election. This time, the victor was a right-wing demagogue; next time, it may be a left-wing populist who learns the lessons of Trump’s win.
  • This is no mere academic argument. The election of 2016 showed us that Americans are increasingly choosing to live in a cloud of like-minded spin, surrounded by the partisan political hackery and fake news that poisons their Facebook feeds.
  • To help us understand it all, there were choices, but not that many: three TV networks that mattered, ABC, CBS, and NBC; two papers for serious journalism, The New York Times and The Washington Post; and two giant-circulation weekly newsmagazines, Time and Newsweek. That, plus whatever was your local daily newspaper, pretty much constituted the news.
  • Fake news is thriving. In the final three months of the presidential campaign, the 20 top-performing fake election news stories generated more engagement on Facebook than the top stories from major news outlets such as The New York Times.
  • Eventually, I came to think of the major media outlets of that era as something very similar to the big suburban shopping malls we flocked to in the age of shoulder pads and supply-side economics: We could choose among Kmart and Macy’s and Saks Fifth Avenue as our budgets and tastes allowed, but in the end the media were all essentially department stores, selling us sports and stock tables and foreign news alongside our politics, whether we wanted them or not. It may not have been a monopoly, but it was something pretty close.
  • This was still journalism in the scarcity era, and it affected everything from what stories we wrote to how fast we could produce them. Presidents could launch global thermonuclear war with the Russians in a matter of minutes, but news from the American hinterlands often took weeks to reach their sleepy capital. Even information within that capital was virtually unobtainable without a major investment of time and effort. Want to know how much a campaign was raising and spending from the new special-interest PACs that had proliferated? Prepare to spend a day holed up at the Federal Election Commission’s headquarters down on E Street across from the hulking concrete FBI building, and be sure to bring a bunch of quarters for the copy machine.
  • I am writing this in the immediate, shocking aftermath of a 2016 presidential election in which the Pew Research Center found that a higher percentage of Americans got their information about the campaign from late-night TV comedy shows than from a national newspaper. Don Graham sold the Post three years ago and though its online audience has been skyrocketing with new investments from Amazon.com founder Jeff Bezos, it will never be what it was in the ‘80s. That same Pew survey reported that a mere 2 percent of Americans today turned to such newspapers as the “most helpful” guides to the presidential campaign.
  • In 2013, Mark Leibovich wrote a bestselling book called This Town about the party-hopping, lobbyist-enabling nexus between Washington journalists and the political world they cover. A key character was Politico’s Mike Allen, whose morning email newsletter “Playbook” had become a Washington ritual, offering all the news and tidbits a power player might want to read before breakfast—and Politico’s most successful ad franchise to boot. In many ways, even that world of just a few years ago now seems quaint: the notion that anyone could be a single, once-a-day town crier in This Town (or any other) has been utterly exploded by the move to Twitter, Facebook, and all the rest. We are living, as Mark put it to me recently, “in a 24-hour scrolling version of what ‘Playbook’ was.”
  • Whether it was Walter Cronkite or The New York Times, they preached journalistic “objectivity” and spoke with authority when they pronounced on the day’s developments—but not always with the depth and expertise that real competition or deep specialization might have provided. They were great—but they were generalists.
  • I remained convinced that reporting would hold its value, especially as our other advantages—like access to information and the expensive means to distribute it—dwindled. It was all well and good to root for your political team, but when it mattered to your business (or the country, for that matter), I reasoned, you wouldn’t want cheerleading but real reporting about real facts. Besides, the new tools might be coming at us with dizzying speed—remember when that radical new video app Meerkat was going to change absolutely everything about how we cover elections?—but we would still need reporters to find a way inside Washington’s closed doors and back rooms, to figure out what was happening when the cameras weren’t rolling.
  • And if the world was suffering from information overload—well, so much the better for us editors; we would be all the more needed to figure out what to listen to amid the noise.
  • Trump turned out to be more correct than we editors were: the more relevant point of the Access Hollywood tape was not about the censure Trump would now face but the political reality that he, like Bill Clinton, could survive this—or perhaps any scandal. Yes, we were wrong about the Access Hollywood tape, and so much else.
  • These days, Politico has a newsroom of 200-odd journalists, a glossy award-winning magazine, dozens of daily email newsletters, and 16 subscription policy verticals. It’s a major player in coverage not only of Capitol Hill but many other key parts of the capital, and some months during this election year we had well over 30 million unique visitors to our website, a far cry from the controlled congressional circulation of 35,000 that I remember Roll Call touting in our long-ago sales materials.
  • we journalists were still able to cover the public theater of politics while spending more of our time, resources, and mental energy on really original reporting, on digging up stories you couldn’t read anywhere else. Between Trump’s long and checkered business past, his habit of serial lying, his voluminous and contradictory tweets, and his revision of even his own biography, there was lots to work with. No one can say that Trump was elected without the press telling us all about his checkered past.
  • politics was NEVER more choose-your-own-adventure than in 2016, when entire news ecosystems for partisans existed wholly outside the reach of those who at least aim for truth
  • Pew found that nearly 50 percent of self-described conservatives now rely on a single news source, Fox, for political information they trust.
  • As for the liberals, they trust only that they should never watch Fox, and have MSNBC and Media Matters and the remnants of the big boys to confirm their biases.
  • And then there are the conspiracy-peddling Breitbarts and the overtly fake-news outlets of this overwhelming new world; untethered from even the pretense of fact-based reporting, their version of the campaign got more traffic on Facebook in the race’s final weeks than all the traditional news outlets combined.
  • When we assigned a team of reporters at Politico during the primary season to listen to every single word of Trump’s speeches, we found that he offered a lie, half-truth, or outright exaggeration approximately once every five minutes—for an entire week. And it didn’t hinder him in the least from winning the Republican presidential nomination.
  • when we repeated the exercise this fall, in the midst of the general election campaign, Trump had progressed to fibs of various magnitudes just about once every three minutes!
  • By the time Trump in September issued his half-hearted disavowal of the Obama “birther” whopper he had done so much to create and perpetuate, one national survey found that only 1 in 4 Republicans was sure that Obama was born in the U.S., and various polls found that somewhere between a quarter and a half of Republicans believed he’s Muslim. So not only did Trump think he was entitled to his own facts, so did his supporters. It didn’t stop them at all from voting for him.
  • in part, it’s not just because they disagree with the facts as reporters have presented them but because there’s so damn many reporters, and from such a wide array of outlets, that it’s often impossible to evaluate their standards and practices, biases and preconceptions. Even we journalists are increasingly overwhelmed.
  • So much terrific reporting and writing and digging over the years and … Trump? What happened to consequences? Reporting that matters? Sunlight, they used to tell us, was the best disinfectant for what ails our politics.
  • 2016 suggests a different outcome: We’ve achieved a lot more transparency in today’s Washington—without the accountability that was supposed to come with it.
kortanekev

How scientific is political science? | David Wearing | Opinion | The Guardian - 0 views

  • The prevailing view within the discipline is that scholars should set aside moral values and political concerns in favour of detached enquiry into the mechanics of how the political world functions.
  • But I have yet to be convinced by the idea that the study of politics can be apolitical and value-neutral. Our choice of research topics will inevitably reflect our own political and moral priorities, and the way in which that research is framed and conducted is bound to reflect assumptions which – whether held consciously, semi-consciously or unconsciously – remain of a moral and political nature.
  • Good example of the way our biases affect our ability to set aside preconceived notions and beliefs, and our ability to objectively analyze things and conduct good science. This fallacy is especially prevalent in political "science," as almost all who study it go in with strong personal opinions. (Evie 12/7/16)