
New Media Ethics 2009 course: Group items tagged Cognitive Bias


Weiye Loh

Skepticblog » Cognitive Biases and Handedness - 0 views

  • A recent study concerns the bias of being left- or right-handed. Our handedness affects our judgments about the quality and “goodness” of things in our environment. There is a clear language bias favoring the right-handed majority: “right” means correct, while a “left-handed compliment,” for example, is undesirable. It turns out this is not mere cultural bias, but reflects an underlying cognitive bias. For example: In experiments by psychologist Daniel Casasanto, when people were asked which of two products to buy, which of two job applicants to hire, or which of two alien creatures looks more intelligent, right-handers tended to choose the product, person, or creature they saw on their right, but most left-handers chose the one on their left.
  • When we are put into a situation where we have to make a judgment based mostly on gut feeling or intuition, biases tend to come out. (It is probably difficult for most people to come up with an evidence-based system for assessing which alien looks more intelligent.) It is possible that common evolved sensibilities would dominate in such situations; most people, for example, might pick the alien with the larger eyes. But that is not what the researchers found: simple handedness was the determining factor.
  • This is a subconscious bias. If a subject were asked why they chose the alien on the right, they would probably not say, “because I am right-handed and have an inherent bias toward things in the right side of my visual field.” Rather, they would justify their judgment post-hoc – pointing out features that had nothing to do with their actual decision-making, but giving the illusion of a rational choice.
  • Casasanto found, in the new study, that these biases are also easily manipulated. First he studied stroke patients who were paralyzed on one side of the body or the other. If a right-hander was weak on the left side (the control condition), this had no effect on their choices. But if their right side was weak, their preference shifted to their intact left side. This, however, could be due to the brain damage itself, rather than to the fact that they are now obligate left-handers. So he did a follow-up experiment in which subjects performed a task with a ski glove on one hand. If right-handers wore the glove on their left hand, again this had no effect on their choices. But if they wore it on their right hand while performing tasks for as little as 12 minutes, their cognitive bias shifted to that of a left-hander.
  • Casasanto observes: ‘People generally think their judgments are rational, and their concepts are stable. But if wearing a glove for a few minutes can reverse people’s usual judgments about what’s good and bad, perhaps the mind is more malleable than we thought.’
  •  
    believers generally operate under the paradigm of seeing is believing, while skeptics operate under the paradigm that often believing is seeing.
Weiye Loh

gssq: Rational and Irrational Thought: The Thinking That IQ Tests Miss - 0 views

  • When approaching a problem, we can choose from any of several cognitive mechanisms. Some mechanisms have great computational power, letting us solve many problems with great accuracy, but they are slow, require much concentration and can interfere with other cognitive tasks. Others are comparatively low in computational power, but they are fast, require little concentration and do not interfere with other ongoing cognition. Humans are cognitive misers because our basic tendency is to default to the processing mechanisms that require less computational effort, even if they are less accurate.
  • our tendency to evaluate a situation from our own perspective. We weigh evidence and make moral judgments with a my-side bias that often leads to dysrationalia that is independent of measured intelligence. The same is true for other tendencies of the cognitive miser that have been much studied, such as attribute substitution and conjunction errors; they are at best only slightly related to intelligence and are poorly captured by conventional intelligence tests.
  •  
    No doubt you know several folks with perfectly respectable IQs who just don't seem all that sharp. The behavior of such people tells us that we are missing something important by treating intelligence as if it encompassed all cognitive abilities. I coined the term dysrationalia (analogous to "dyslexia"), meaning the inability to think and behave rationally despite having adequate intelligence, to draw attention to a large domain of cognitive life that intelligence tests fail to assess.
Weiye Loh

The Decline Effect and the Scientific Method : The New Yorker - 0 views

  • On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties.
  • the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.
  • Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard against the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
  • the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
  • At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”
  • In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!”
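Regression to the mean, as described above, is easy to demonstrate in a toy simulation. The numbers and setup below are illustrative assumptions, not data from the article: if only the most extreme initial results are followed up, their replications drift back toward the true effect even though nothing about the phenomenon has changed.

```python
import random

random.seed(42)

def run_study(true_effect=0.0, noise=1.0, n_labs=1000):
    """Simulate many labs measuring the same true effect with noise,
    then re-measure only the labs whose first result looked largest."""
    first = [true_effect + random.gauss(0, noise) for _ in range(n_labs)]
    # Select the top 5% of initial results -- the "exciting" findings.
    cutoff = sorted(first, reverse=True)[n_labs // 20]
    selected = [x for x in first if x >= cutoff]
    # Independent replications of the selected studies.
    replications = [true_effect + random.gauss(0, noise) for _ in selected]
    return sum(selected) / len(selected), sum(replications) / len(replications)

initial, replicated = run_study()
print(f"initial mean of selected studies: {initial:.2f}")
print(f"replication mean: {replicated:.2f}")  # falls back toward the true effect (0.0)
```

Schooler’s puzzle is that his declining results looked too statistically solid for this mechanism alone; the sketch only shows why selection on extreme first results guarantees some decline.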
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.
  • the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
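Sterling’s arithmetic, in which a literature filtered at Fisher’s five-per-cent level overstates effects, can be sketched in a few lines of simulation. The effect size, sample size, and one-sided test here are illustrative assumptions, not figures from the article.

```python
import random

random.seed(1)

# Simulate a literature where journals publish only results significant
# at the 5% level. Even for a modest true effect, the published record
# overstates it, because only lucky overestimates clear the threshold.
TRUE_EFFECT = 0.2   # standardized effect size (assumed for illustration)
N = 25              # per-study sample size (assumed)
SE = 1 / N ** 0.5   # standard error of each study's effect estimate

published, all_estimates = [], []
for _ in range(10_000):
    estimate = random.gauss(TRUE_EFFECT, SE)
    all_estimates.append(estimate)
    if estimate / SE > 1.96:          # one-sided p < .05 -> "publishable"
        published.append(estimate)

print(f"true effect:            {TRUE_EFFECT}")
print(f"mean of all studies:    {sum(all_estimates) / len(all_estimates):.2f}")
print(f"mean of published ones: {sum(published) / len(published):.2f}")
```

The unfiltered mean recovers the true effect, while the published mean runs well above it, which is exactly the pattern that makes later, fuller replications look like a decline.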
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results.
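The funnel logic above can be sketched with synthetic data (assuming a true effect of zero; these are not Palmer’s actual numbers): with honest reporting, small-sample studies scatter symmetrically around the true value, but if small null or negative results go unreported, the small-study mean is pushed upward while the large-study mean stays put.

```python
import random

random.seed(7)

def simulate_funnel(selective):
    """Each study estimates a true effect of zero; smaller samples scatter
    more. Selective reporting is modeled by dropping small studies whose
    estimates came out negative."""
    points = []
    for _ in range(2000):
        n = random.choice([10, 20, 50, 100, 400])   # per-study sample size
        estimate = random.gauss(0.0, 1 / n ** 0.5)  # wider scatter for small n
        if selective and n <= 20 and estimate < 0:
            continue  # small null/negative studies go unreported
        points.append((n, estimate))
    return points

def mean_by_size(points, small=True):
    vals = [e for n, e in points if (n <= 20) == small]
    return sum(vals) / len(vals)

honest = simulate_funnel(selective=False)
skewed = simulate_funnel(selective=True)
print(f"small-study mean, honest reporting:    {mean_by_size(honest):+.3f}")
print(f"small-study mean, selective reporting: {mean_by_size(skewed):+.3f}")
print(f"large-study mean, selective reporting: {mean_by_size(skewed, small=False):+.3f}")
```

Plotting sample size against estimate for the `skewed` data would reproduce Palmer’s observation: the wide end of the funnel is visibly lopsided toward positive results.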
  • Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
  • scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.)
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
  • Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging
  • John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.
  • The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
  • This suggests that the decline effect is actually a decline of illusion.
  • While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
Weiye Loh

Political - or politicized? - psychology » Scienceline - 0 views

  • The idea that your personal characteristics could be linked to your political ideology has intrigued political psychologists for decades. Numerous studies suggest that liberals and conservatives differ not only in their views toward government and society, but also in their behavior, their personality, and even how they travel, decorate, clean and spend their leisure time. In today’s heated political climate, understanding people on the “other side” — whether that side is left or right — takes on new urgency. But as researchers study the personal side of politics, could they be influenced by political biases of their own?
  • Consider the following 2006 study by the late California psychologists Jeanne and Jack Block, which compared the personalities of nursery school children to their political leanings as 23-year-olds. Preschoolers who went on to identify as liberal were described by the authors as self-reliant, energetic, somewhat dominating and resilient. The children who later identified as conservative were described as easily offended, indecisive, fearful, rigid, inhibited and vulnerable. The negative descriptions of conservatives in this study strike Jacob Vigil, a psychologist at the University of New Mexico, as morally loaded. Studies like this one, he said, use language that suggests the researchers are “motivated to present liberals with more ideal descriptions as compared to conservatives.”
  • Most of the researchers in this field are, in fact, liberal. In 2007 UCLA’s Higher Education Research Institute conducted a survey of faculty at four-year colleges and universities in the United States. About 68 percent of the faculty in history, political science and social science departments characterized themselves as liberal, 22 percent characterized themselves as moderate, and only 10 percent as conservative. Some social psychologists, like Jonathan Haidt of the University of Virginia, have charged that this liberal majority distorts the research in political psychology.
  • It’s a charge that John Jost, a social psychologist at New York University, flatly denies. Findings in political psychology bear upon deeply held personal beliefs and attitudes, he said, so they are bound to spark controversy. Research showing that conservatives score higher on measures of “intolerance of ambiguity” or the “need for cognitive closure” might bother some people, said Jost, but that does not make it biased.
  • “The job of the behavioral scientist is not to try to find something to say that couldn’t possibly be offensive,” said Jost. “Our job is to say what we think is true, and why.
  • Jost and his colleagues in 2003 compiled a meta-analysis of 88 studies from 12 different countries conducted over a 40-year period. They found strong evidence that conservatives tend to have higher needs to reduce uncertainty and threat. Conservatives also share psychological factors like fear, aggression, dogmatism, and the need for order, structure and closure. Political conservatism, they explained, could serve as a defense against anxieties and threats that arise out of everyday uncertainty, by justifying the status quo and preserving conditions that are comfortable and familiar.
  • The study triggered quite a public reaction, particularly within the conservative blogosphere. But the criticisms, according to Jost, were mistakenly focused on the researchers themselves; the findings were not disputed by the scientific community and have since been replicated. For example, a 2009 study followed college students over the span of their undergraduate experience and found that higher perceptions of threat did indeed predict political conservatism. Another 2009 study found that when confronted with a threat, liberals actually become more psychologically and politically conservative. Some studies even suggest that physiological traits like sensitivity to sudden noises or threatening images are associated with conservative political attitudes.
  • “The debate should always be about the data and its proper interpretation,” said Jost, “and never about the characteristics or motives of the researchers.” Phillip Tetlock, a psychologist at the University of California, Berkeley, agrees. However, Tetlock thinks that identifying the proper interpretation can be tricky, since personality measures can be described in many ways. “One observer’s ‘dogmatism’ can be another’s ‘principled,’ and one observer’s ‘open-mindedness’ can be another’s ‘flaccid and vacillating,’” Tetlock explained.
  • Richard Redding, a professor of law and psychology at Chapman University in Orange, California, points to a more general, indirect bias in political psychology. “It’s not the case that researchers are intentionally skewing the data,” which rarely happens, Redding said. Rather, the problem may lie in what sorts of questions are or are not asked.
  • For example, a conservative might be more inclined to undertake research on affirmative action in a way that would identify any negative outcomes, whereas a liberal probably wouldn’t, said Redding. Likewise, there may be aspects of personality that liberals simply haven’t considered. Redding is currently conducting a large-scale study on self-righteousness, which he suspects may be associated more highly with liberals than conservatives.
  • “The way you frame a problem is to some extent dictated by what you think the problem is,” said David Sears, a political psychologist at the University of California, Los Angeles. People’s strong feelings about issues like prejudice, sexism, authoritarianism, aggression, and nationalism — the bread and butter of political psychology — may influence how they design a study or present a problem.
  • The indirect bias that Sears and Redding identify is a far cry from the liberal groupthink others warn against. But given that psychology departments are predominantly left-leaning, it’s important to seek out alternative viewpoints and explanations, said Jesse Graham, a social psychologist at the University of Southern California. A self-avowed liberal, Graham thinks it would be absurd to say he couldn’t do fair science because of his political preferences. “But,” he said, “it is something that I try to keep in mind.”
  •  
    The idea that your personal characteristics could be linked to your political ideology has intrigued political psychologists for decades. Numerous studies suggest that liberals and conservatives differ not only in their views toward government and society, but also in their behavior, their personality, and even how they travel, decorate, clean and spend their leisure time. In today's heated political climate, understanding people on the "other side" - whether that side is left or right - takes on new urgency. But as researchers study the personal side of politics, could they be influenced by political biases of their own?
Weiye Loh

Why Evolution May Favor Irrationality - Newsweek - 0 views

  • The reason we succumb to confirmation bias, why we are blind to counterexamples, and why we fall short of Cartesian logic in so many other ways is that these lapses have a purpose: they help us “devise and evaluate arguments that are intended to persuade other people,” says psychologist Hugo Mercier of the University of Pennsylvania. Failures of logic, he and cognitive scientist Dan Sperber of the Institut Jean Nicod in Paris propose, are in fact effective ploys to win arguments.
  • That puts poor reasoning in a completely different light. Arguing, after all, is less about seeking truth than about overcoming opposing views.
  • While confirmation bias, for instance, may mislead us about what’s true and real, by letting examples that support our view monopolize our memory and perception, it maximizes the artillery we wield when trying to convince someone that, say, he really is “late all the time.” Confirmation bias “has a straightforward explanation,” argues Mercier. “It contributes to effective argumentation.”
  • ...2 more annotations...
  • Finding counterexamples can, in general, weaken our confidence in our own arguments. Forms of reasoning that are good for solving logic puzzles but bad for winning arguments lost out, over the course of evolution, to those that help us be persuasive but cause us to struggle with abstract syllogisms. Interestingly, syllogisms are easier to evaluate in the form “No flying things are penguins; all penguins are birds; so some birds are not fliers.” That’s because we are more likely to argue about animals than A, B, and C.
  • The sort of faulty thinking called motivated reasoning also impedes our search for truth but advances arguments. For instance, we tend to look harder for flaws in a study when we don’t agree with its conclusions and are more critical of evidence that undermines our point of view.
  •  
    The Limits of Reason Why evolution may favor irrationality.
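The penguin syllogism quoted above can be checked mechanically with sets. This is a toy sketch, not anything from the article: the animal names are hypothetical placeholders chosen only so that the two premises hold.

```python
# Toy model of the syllogism:
#   Premise 1: no flying things are penguins
#   Premise 2: all penguins are birds
#   Conclusion: some birds are not fliers
penguins = {"emperor penguin", "adelie penguin"}
birds = penguins | {"sparrow", "albatross"}   # all penguins are birds
fliers = {"sparrow", "albatross", "bat"}      # no penguin flies

# Verify the premises hold in this toy world
assert fliers.isdisjoint(penguins)   # premise 1: no flier is a penguin
assert penguins <= birds             # premise 2: penguins are a subset of birds

# The conclusion follows: the set of non-flying birds is non-empty
flightless_birds = birds - fliers
assert flightless_birds              # "some birds are not fliers"
print(sorted(flightless_birds))      # → ['adelie penguin', 'emperor penguin']
```

Spelled out with concrete animals, the inference is obvious; stated with A, B, and C, most people stumble, which is exactly Mercier and Sperber's point.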
Weiye Loh

The Greening of the American Brain - TIME - 0 views

  • The past few years have seen a marked decline in the percentage of Americans who believe what scientists say about climate, with belief among conservatives falling especially fast. It's true that the science community has hit some bumps — the IPCC was revealed to have made a few dumb errors in its recent assessment, and the "Climategate" hacked emails showed scientists behaving badly. But nothing changed the essential truth that more man-made CO2 means more warming; in fact, the basic scientific case has only gotten stronger. Yet still, much of the American public remains unconvinced — and importantly, last November that public returned control of the House of Representatives to a Republican party that is absolutely hostile to the basic truths of climate science.
  • Facts and authority alone may not shift people's opinions on climate science or many other topics. That was the conclusion I took from the Climate, Mind and Behavior conference, a meeting of environmentalists, neuroscientists, psychologists and sociologists that I attended last week at the Garrison Institute in New York's Hudson Valley. We like to think of ourselves as rational creatures who select from the choices presented to us for maximum individual utility — indeed, that's the essential principle behind most modern economics. But when you do assume rationality, the politics of climate change get confusing. Why would so many supposedly rational human beings choose to ignore overwhelming scientific authority?
  • Maybe because we're not actually so rational after all, as research is increasingly showing. Emotions and values — not always fully conscious — play an enormous role in how we process information and make choices. We are beset by cognitive biases that throw what would be sound decision-making off-balance. Take loss aversion: psychologists have found that human beings tend to be more concerned about avoiding losses than achieving gains, holding onto what they have even when this is not in their best interests. That has a simple parallel to climate politics: environmentalists argue that the shift to a low-carbon economy will create abundant new green jobs, but for many people, that prospect of future gain — even if it comes with a safer planet — may not be worth the risk of losing the jobs and economy they have.
  • ...4 more annotations...
  • What's the answer for environmentalists? Change the message and frame the issue in a way that doesn't trigger unconscious opposition among so many Americans. That can be as simple as using the right labels: a recent study by researchers at the University of Michigan found that Republicans are less skeptical of "climate change" than "global warming," possibly because climate change sounds less specific. Possibly too because so broad a term includes the severe snowfalls of the past winter, which can be a paradoxical result of a generally warmer world. Greens should also pin their message on subjects that are less controversial, like public health or national security. Instead of issuing dire warnings about an apocalyptic future — which seems to make many Americans stop listening — better to talk about the present generation's responsibility to the future, to bequeath their children and grandchildren a safer and healthier planet.
  • Group identification also plays a major role in how we make decisions — and that's another way facts can get filtered. Declining belief in climate science has been, for the most part in America, a conservative phenomenon. On the surface, that's curious: you could expect Republicans to be skeptical of economic solutions to climate change like a carbon tax, since higher taxes tend to be a Democratic policy, but scientific information ought to be non-partisan. Politicians never debate the physics of space travel after all, even if they argue fiercely over the costs and priorities associated with it. That, however, is the power of group thinking; for most conservative Americans, the very idea of climate science has been poisoned by ideologues who seek to advance their economic arguments by denying scientific fact. No additional data — new findings about CO2 feedback loops or better modeling of ice sheet loss — is likely to change their minds.
  • The bright side of all this irrationality is that it means human beings can act in ways that sometimes go against their immediate utility, sacrificing their own interests for the benefit of the group.
  • Our brains develop socially, not just selfishly, which means sustainable behavior — and salvation for the planet — may not be as difficult as it sometimes seems. We can motivate people to help stop climate change — it may just not be climate science that convinces them to act.
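The loss-aversion bias invoked above has a standard quantitative form: Kahneman and Tversky's prospect-theory value function. The sketch below is illustrative only, using the commonly cited textbook parameter estimates (alpha ≈ 0.88, lambda ≈ 2.25), not anything measured in the TIME piece.

```python
def subjective_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: losses loom larger than equal gains."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# An objectively fair bet: 50% chance to gain $100, 50% chance to lose $100.
expected_dollars = 0.5 * 100 + 0.5 * (-100)   # = 0.0, neutral in money terms
felt_value = 0.5 * subjective_value(100) + 0.5 * subjective_value(-100)

print(expected_dollars)  # the bet is fair on paper
print(felt_value)        # negative: the possible loss outweighs the equal gain
```

Because lambda > 1, a monetarily fair gamble feels like a bad deal, which is the same asymmetry that makes "the jobs and economy they have" loom larger than promised green jobs.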
Weiye Loh

7 Essential Skills You Didn't Learn in College | Magazine - 0 views

shared by Weiye Loh on 15 Oct 10
  • Statistical Literacy Why take this course? We are misled by numbers and by our misunderstanding of probability.
  • Our world is shaped by widespread statistical illiteracy. We fear things that probably won’t kill us (terrorist attacks) and ignore things that probably will (texting while driving). We buy lottery tickets. We fall prey to misleading gut instincts, which lead to biases like loss aversion—an inability to gauge risk against potential gain. The effects play out in the grocery store, the office, and the voting booth (not to mention the bedroom: People who are more risk-averse are less successful in love).
  • We are now 53 percent more likely than our parents to trust polls of dubious merit. (That figure is totally made up. See?) Where do all these numbers that we remember so easily and cite so readily come from? How are they calculated, and by whom? How do we misuse them to make them say what we want them to? We’ll explore all of these questions in a sequence on sourcing statistics.
  • ...9 more annotations...
  • Probabilistic intuition. We’ll learn to judge what’s likely and unlikely—and what’s impossible to know. We’ll learn about distorting habits of mind like selection bias—and how to guard against them. We’ll gamble. We’ll read The Art of Probability for Scientists and Engineers by Richard Hamming, Expert Political Judgment by Philip Tetlock, and How to Cheat Your Friends at Poker by Penn Jillette and Mickey Lynn.
  • Post-State Diplomacy Why take this course? As the world becomes evermore atomized, understanding the new leaders and constituencies becomes increasingly important.
  • From tribal insurgents to multinational corporations, private charities to pirate gangs, religious movements to armies for hire, a range of organizations now compete with (and sometimes eclipse) the nation-states in which they reside. Without capitals or traditional constituencies, they can’t be persuaded or deterred by traditional tactics.
  • that doesn’t mean diplomacy is dead; quite the opposite. Negotiating with these parties requires the same skills as dealing with belligerent nations—understanding the shareholders and alliances they must answer to, the cultures that inform how they behave, and the religious, economic, and political interests they must address.
  • Power has always depended on who can provide justice, commerce, and stability.
  • Remix Culture Why take this course? Modern artists don’t start with a blank page or empty canvas. They start with preexisting works. What you’ll learn: How to analyze—and create—artworks made out of other artworks
  • We’ll explore the philosophical roots of remix culture and study seminal works like Robert Rauschenberg’s Monogram and Jorge Luis Borges’ Pierre Menard, Author of Don Quixote. And we’ll examine modern-day exemplars from DJ Shadow’s Endtroducing to Auto-Tune the News.
  • Applied Cognition Why take this course? You have to know the brain to train the brain. What you’ll learn: How the mind works and how you can make it work for you.
  • Writing for New Forms Why take this course? You can write a cogent essay, but can you write it in 140 characters or less? What you’ll learn: How to adapt your message to multiple formats and audiences—human and machine.
  •  
    7 Essential Skills You Didn't Learn in College
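The statistical-literacy course's jab at lottery tickets comes down to expected value. The sketch below uses entirely hypothetical odds and prizes (no real lottery's figures) to show why buying a ticket loses money on average.

```python
# Expected value of a hypothetical $2 lottery ticket.
ticket_price = 2.00
outcomes = [                          # (probability, prize in dollars) — made-up numbers
    (1 / 300_000_000, 100_000_000),   # jackpot
    (1 / 1_000_000,   10_000),        # second prize
    (1 / 100,         4),             # small consolation prize
]

expected_winnings = sum(p * prize for p, prize in outcomes)
expected_loss = ticket_price - expected_winnings

print(f"expected winnings per ticket: ${expected_winnings:.3f}")
print(f"expected loss per ticket:     ${expected_loss:.3f}")
# Average winnings fall well below the $2 price, so each ticket is a losing bet.
```

The same arithmetic, run the other way on mundane risks like texting while driving, is exactly the gut-versus-numbers gap the course description describes.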