TOK Friends: Group items tagged "self-deception"

oliviaodon

How scientists fool themselves - and how they can stop : Nature News & Comment - 1 views

  • In 2013, five years after he co-authored a paper showing that Democratic candidates in the United States could get more votes by moving slightly to the right on economic policy, Andrew Gelman, a statistician at Columbia University in New York City, was chagrined to learn of an error in the data analysis. In trying to replicate the work, an undergraduate student named Yang Yang Hu had discovered that Gelman had got the sign wrong on one of the variables.
  • Gelman immediately published a three-sentence correction, declaring that everything in the paper's crucial section should be considered wrong until proved otherwise.
  • Reflecting today on how it happened, Gelman traces his error back to the natural fallibility of the human brain: “The results seemed perfectly reasonable,” he says. “Lots of times with these kinds of coding errors you get results that are just ridiculous. So you know something's got to be wrong and you go back and search until you find the problem. If nothing seems wrong, it's easier to miss it.”
  • This is the big problem in science that no one is talking about: even an honest person is a master of self-deception. Our brains evolved long ago on the African savannah, where jumping to plausible conclusions about the location of ripe fruit or the presence of a predator was a matter of survival. But a smart strategy for evading lions does not necessarily translate well to a modern laboratory, where tenure may be riding on the analysis of terabytes of multidimensional data. In today's environment, our talent for jumping to conclusions makes it all too easy to find false patterns in randomness, to ignore alternative explanations for a result or to accept 'reasonable' outcomes without question — that is, to ceaselessly lead ourselves astray without realizing it.
  • Failure to understand our own biases has helped to create a crisis of confidence about the reproducibility of published results
  • Although it is impossible to document how often researchers fool themselves in data analysis, says Ioannidis, findings of irreproducibility beg for an explanation. The study of 100 psychology papers is a case in point: if one assumes that the vast majority of the original researchers were honest and diligent, then a large proportion of the problems can be explained only by unconscious biases. “This is a great time for research on research,” he says. “The massive growth of science allows for a massive number of results, and a massive number of errors and biases to study. So there's good reason to hope we can find better ways to deal with these problems.”
  • Although the human brain and its cognitive biases have been the same for as long as we have been doing science, some important things have changed, says psychologist Brian Nosek, executive director of the non-profit Center for Open Science in Charlottesville, Virginia, which works to increase the transparency and reproducibility of scientific research. Today's academic environment is more competitive than ever. There is an emphasis on piling up publications with statistically significant results — that is, with data relationships in which a commonly used measure of statistical certainty, the p-value, is 0.05 or less. “As a researcher, I'm not trying to produce misleading results,” says Nosek. “But I do have a stake in the outcome.” And that gives the mind excellent motivation to find what it is primed to find.
  • Another reason for concern about cognitive bias is the advent of staggeringly large multivariate data sets, often harbouring only a faint signal in a sea of random noise. Statistical methods have barely caught up with such data, and our brain's methods are even worse, says Keith Baggerly, a statistician at the University of Texas MD Anderson Cancer Center in Houston. As he told a conference on challenges in bioinformatics last September in Research Triangle Park, North Carolina, “Our intuition when we start looking at 50, or hundreds of, variables sucks.”
  • One trap that awaits during the early stages of research is what might be called hypothesis myopia: investigators fixate on collecting evidence to support just one hypothesis; neglect to look for evidence against it; and fail to consider other explanations.
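The p < 0.05 convention and Baggerly's warning about many variables can be made concrete with a small simulation (a stdlib-only sketch, not from the article; the 50-variable, 30-observation setup and the |r| > 0.361 significance cutoff for n = 30 are illustrative assumptions): test every pairing of pure-noise variables and dozens of "significant" correlations appear by chance alone.

```python
import random
import statistics

random.seed(0)

def pearson_r(x, y):
    # Plain Pearson correlation coefficient of two equal-length sequences.
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# 50 variables of pure Gaussian noise, 30 observations each: no real signal.
n_vars, n_obs = 50, 30
data = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_vars)]

# For n = 30 observations, |r| > 0.361 corresponds roughly to a two-tailed
# p < 0.05, so each noise pair has about a 5% chance of looking "significant".
threshold = 0.361
pairs = n_vars * (n_vars - 1) // 2  # 1225 pairwise comparisons
hits = sum(
    1
    for i in range(n_vars)
    for j in range(i + 1, n_vars)
    if abs(pearson_r(data[i], data[j])) > threshold
)
print(f"{hits} of {pairs} pure-noise pairs look 'significant'")
```

With no true relationships in the data at all, roughly 5% of the 1,225 comparisons (about 60 pairs) clear the p < 0.05 bar — exactly the "false patterns in randomness" the article warns that intuition fails to discount.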
Javier E

Why these friendly robots can't be good friends to our kids - The Washington Post - 0 views

  • before adding a sociable robot to the holiday gift list, parents may want to pause to consider what they would be inviting into their homes. These machines are seductive and offer the wrong payoff: the illusion of companionship without the demands of friendship, the illusion of connection without the reciprocity of a mutual relationship. And interacting with these empathy machines may get in the way of children’s ability to develop a capacity for empathy themselves.
  • In our study, the children were so invested in their relationships with Kismet and Cog that they insisted on understanding the robots as living beings, even when the roboticists explained how the machines worked or when the robots were temporarily broken.
  • The children took the robots’ behavior to signify feelings. When the robots interacted with them, the children interpreted this as evidence that the robots liked them. And when the robots didn’t work on cue, the children likewise took it personally. Their relationships with the robots affected their state of mind and self-esteem.
  • We were led to wonder whether a broken robot can break a child.
  • Kids are central to the sociable-robot project, because its agenda is to make people more comfortable with robots in roles normally reserved for humans, and robotics companies know that children are vulnerable consumers who can bring the whole family along.
  • In October, Mattel scrapped plans for Aristotle — a kind of Alexa for the nursery, designed to accompany children as they progress from lullabies and bedtime stories through high school homework — after lawmakers and child advocacy groups argued that the data the device collected about children could be misused by Mattel, marketers, hackers and other third parties. I was part of that campaign: There is something deeply unsettling about encouraging children to confide in machines that are in turn sharing their conversations with countless others.
  • Recently, I opened my MIT mail and found a “call for subjects” for a study involving sociable robots that will engage children in conversation to “elicit empathy.” What will these children be empathizing with, exactly? Empathy is a capacity that allows us to put ourselves in the place of others, to know what they are feeling. Robots, however, have no emotions to share
  • What they can do is push our buttons. When they make eye contact and gesture toward us, they predispose us to view them as thinking and caring. They are designed to be cute, to provoke a nurturing response. And when it comes to sociable AI, nurturance is the killer app: We nurture what we love, and we love what we nurture. If a computational object or robot asks for our help, asks us to teach it or tend to it, we attach. That is our human vulnerability.
  • digital companions don’t understand our emotional lives. They present themselves as empathy machines, but they are missing the essential equipment: They have not known the arc of a life. They have not been born; they don’t know pain, or mortality, or fear. Simulated thinking may be thinking, but simulated feeling is never feeling, and simulated love is never love.
  • Breazeal’s position is this: People have relationships with many classes of things. They have relationships with children and with adults, with animals and with machines. People, even very little people, are good at this. Now, we are going to add robots to the list of things with which we can have relationships. More powerful than with pets. Less powerful than with people. We’ll figure it out.
  • The nature of the attachments to dolls and sociable machines is different. When children play with dolls, they project thoughts and emotions onto them. A girl who has broken her mother’s crystal will put her Barbies into detention and use them to work on her feelings of guilt. The dolls take the role she needs them to take.
  • Sociable machines, by contrast, have their own agenda. Playing with robots is not about the psychology of projection but the psychology of engagement. Children try to meet the robot’s needs, to understand the robot’s unique nature and wants. There is an attempt to build a mutual relationship.
  • Some people might consider that a good thing: encouraging children to think beyond their own needs and goals. Except the whole commercial program is an exercise in emotional deception.
  • when we offer these robots as pretend friends to our children, it’s not so clear they can wink with us. We embark on an experiment in which our children are the human subjects.
  • it is hard to imagine what those “right types” of ties might be. These robots can’t be in a two-way relationship with a child. They are machines whose art is to put children in a position of pretend empathy. And if we put our children in that position, we shouldn’t expect them to understand what empathy is. If we give them pretend relationships, we shouldn’t expect them to learn how real relationships — messy relationships — work. On the contrary. They will learn something superficial and inauthentic, but mistake it for real connection.
  • In the process, we can forget what is most central to our humanity: truly understanding each other.
  • For so long, we dreamed of artificial intelligence offering us not only instrumental help but the simple salvations of conversation and care. But now that our fantasy is becoming reality, it is time to confront the emotional downside of living with the robots of our dreams.
Javier E

George Orwell: The Prevention of Literature - The Atlantic - 0 views

  • the much more tenable and dangerous proposition that freedom is undesirable and that intellectual honesty is a form of antisocial selfishness
  • the controversy over freedom of speech and of the press is at bottom a controversy over the desirability, or otherwise, of telling lies.
  • What is really at issue is the right to report contemporary events truthfully, or as truthfully as is consistent with the ignorance, bias, and self-deception from which every observer necessarily suffers
  • it is necessary to strip away the irrelevancies in which this controversy is usually wrapped up.
  • The enemies of intellectual liberty always try to present their case as a plea for discipline versus individualism.
  • The issue truth-versus-untruth is as far as possible kept in the background.
  • the writer who refuses to sell his opinions is always branded as a mere egoist. He is accused, that is, either of wanting to shut himself up in an ivory tower, or of making an exhibitionist display of his own personality, or of resisting the inevitable current of history in an attempt to cling to unjustified privileges.
  • Each of them tacitly claims that “the truth” has already been revealed, and that the heretic, if he is not simply a fool, is secretly aware of “the truth” and merely resists it out of selfish motives.
  • Freedom of the intellect means the freedom to report what one has seen, heard, and felt, and not to be obliged to fabricate imaginary facts and feelings.
  • known facts are suppressed and distorted to such an extent as to make it doubtful whether a true history of our times can ever be written.
  • A totalitarian state is in effect a theocracy, and its ruling caste, in order to keep its position, has to be thought of as infallible. But since, in practice, no one is infallible, it is frequently necessary to rearrange past events in order to show that this or that mistake was not made, or that this or that imaginary triumph actually happened
  • Then, again, every major change in policy demands a corresponding change of doctrine and a revaluation of prominent historical figures. This kind of thing happens everywhere, but clearly it is likelier to lead to outright falsification in societies where only one opinion is permissible at any given moment.
  • The friends of totalitarianism in England usually tend to argue that since absolute truth is not attainable, a big lie is no worse than a little lie. It is pointed out that all historical records are biased and inaccurate, or, on the other hand, that modern physics has proved that what seems to us the real world is an illusion, so that to believe in the evidence of one’s senses is simply vulgar philistinism.