Home/ TOK@ISPrague/ Group items tagged statistics

Lawrence Hrubes

Google Ngram Viewer - 0 views

  •  
    statistical mapping of the frequency of words found in 30+ million digitized books
Lawrence Hrubes

New Critique Sees Flaws in Landmark Analysis of Psychology Studies - The New York Times - 0 views

  • A landmark 2015 report that cast doubt on the results of dozens of published psychology studies has exposed deep divisions in the field, serving as a reality check for many working researchers but as an affront to others who continue to insist the original research was sound. On Thursday, a group of four researchers publicly challenged the report, arguing that it was statistically flawed and, as a result, wrong. The 2015 report, called the Reproducibility Project, found that fewer than 40 studies in a sample of 100 psychology papers in leading journals held up when retested by an independent team. The new critique by the four researchers countered that when that team’s statistical methodology was adjusted, the rate was closer to 100 percent.
Lawrence Hrubes

How a Gay-Marriage Study Went Wrong - The New Yorker - 1 views

  • Last December, Science published a provocative paper about political persuasion. Persuasion is famously difficult: study after study—not to mention much of world history—has shown that, when it comes to controversial subjects, people rarely change their minds, especially if those subjects are important to them. You may think that you’ve made a convincing argument about gun control, but your crabby uncle isn’t likely to switch sides in the debate. Beliefs are sticky, and hardly any approach, no matter how logical it may be, can change that. The Science study, “When contact changes minds: An experiment on transmission of support for gay equality,” seemed to offer a method that could work.
  • In the document, “Irregularities in LaCour (2014),” Broockman, along with a fellow graduate student, Joshua Kalla, and a professor at Yale, Peter Aronow, argued that the survey data in the study showed multiple statistical irregularities and was likely “not collected as described.”
  • If, in the end, the data do turn out to be fraudulent, does that say anything about social science as a whole? On some level, the case would be a statistical fluke. Despite what news headlines would have you believe, outright fraud is incredibly rare; almost no one commits it, and almost no one experiences it firsthand. As a result, innocence is presumed, and the mindset is one of trust.
  • There’s another issue at play: the nature of belief. As I’ve written before, we are far quicker to believe things that mesh with our view of how life should be. Green is a firm supporter of gay marriage, and that may have made him especially pleased about the study. (Did it have a similar effect on liberally minded reviewers at Science? We know that studies confirming liberal thinking sometimes get a pass where ones challenging those ideas might get killed in review; the same effect may have made journalists more excited about covering the results.)
  • In short, confirmation bias—which is especially powerful when we think about social issues—may have made the study’s shakiness easier to overlook.
markfrankel18

A Cambridge professor on how to stop being so easily manipulated by misleading statisti... - 0 views

  • Graphs can be as manipulative as words, using tricks such as cutting axes, rescaling things, or changing data from positive to negative. Sometimes putting zero on the y-axis is wrong. So to be sure that you are communicating the right things, you need to evaluate the message that people are taking away. There are no absolute rules; it all depends on what you want to communicate.
  • The bottom line is that humans are very bad at understanding probability. Everyone finds it difficult, even I do. We just have to get better at it. We need to learn to spot when we are being manipulated.
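The axis-cutting trick described above can be made concrete with a few lines of arithmetic (the numbers here are invented purely for illustration):

```python
# How cutting the y-axis exaggerates a small difference (illustrative numbers).
a, b = 100.0, 102.0   # two values differing by 2%

# Full axis starting at zero: bar heights are proportional to the values.
full_ratio = b / a

# Axis cut at 99: each bar's visible height is its distance above the cut.
cut = 99.0
cut_ratio = (b - cut) / (a - cut)

print(full_ratio, cut_ratio)  # 1.02 vs 3.0: the second bar now looks 3x taller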
markfrankel18

Correlation is not causation | OUPblog - 0 views

  • A famous slogan in statistics is that correlation does not imply causation. We know that there is a statistical correlation between eating ice cream and drowning incidents, for instance, but ice cream consumption does not cause drowning. Where any two factors, A and B, are correlated, there are four possibilities: (1) A is a cause of B; (2) B is a cause of A; (3) the correlation is pure coincidence; and (4), as in the ice cream case, A and B are connected by a common cause. Increased ice cream consumption and drowning rates both have a common cause in warm summer weather.
  • We know that smoking causes cancer. But we also know that many people who smoke don’t get cancer. Causal claims are not falsified by counterexamples, not even by a whole bunch of them. Contraceptive pills have been shown to cause thrombosis, but only in 1 of 1000 women. Following Popper, we could say that for every case where the cause is followed by the effect there are 999 counterexamples. Instead of falsifying the hypothesis that the pill causes thrombosis, however, we list thrombosis as a known side-effect. Causation is still very much assumed even though it occurs only in rare cases.
  • One could understand a cause, for instance, as a tendency towards its effect. Smoking has a tendency towards cancer, but it doesn’t guarantee it. Contraceptive pills have a tendency towards thrombosis but a relatively small one. However, being hit by a train strongly tends towards death. We see that tendencies come in degrees, as do causes, some strongly tending towards their effect and some only weakly.
  • Correlation does not imply causation. At best it might be taken as indicative or symptomatic of it. And perfect correlation, if this is understood along the lines of Hume’s constant conjunction, does not indicate causation at all but probably something quite different.
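The common-cause case can be seen in a toy simulation: here temperature drives both ice cream sales and drownings, which never influence each other directly, yet the two come out strongly correlated (all numbers are invented for illustration):

```python
import random

random.seed(0)

# Common cause: daily temperature drives both ice cream sales and drownings;
# neither variable influences the other directly.
days = 365
temps = [random.gauss(15, 10) for _ in range(days)]
ice_cream = [t * 2 + random.gauss(0, 5) for t in temps]
drownings = [t * 0.1 + random.gauss(0, 1) for t in temps]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

c = corr(ice_cream, drownings)
print(round(c, 2))  # substantially positive, despite no direct causal link
```

Conditioning on the common cause (comparing only days with similar temperatures) would make the correlation largely disappear, which is one way to tell case (4) apart from (1) and (2).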
markfrankel18

Elizabeth Loftus: The fiction of memory | Video on TED.com - 0 views

  •  
    "Psychologist Elizabeth Loftus studies memories. More precisely, she studies false memories, when people either remember things that didn't happen or remember them differently from the way they really were. It's more common than you might think, and Loftus shares some startling stories and statistics, and raises some important ethical questions we should all remember to consider. Memory-manipulation expert Elizabeth Loftus explains how our memories might not be what they seem -- and how implanted memories can have real-life repercussions."
Lawrence Hrubes

Malcolm Gladwell - Zeitgeist Americas 2013 - YouTube - 1 views

  •  
    Using statistics from high and low ranked universities, Gladwell argues that what really matters is how well we do relative to our peer group, not how elite our school is globally. Better to be a top student in a mediocre school than an average student in Harvard.
Lawrence Hrubes

Brain Games are Bogus | GarethCook - 0 views

  •  
    " A pair of scientists in Europe recently gathered all of the best research (twenty-three investigations of memory training by teams around the world) and employed a standard statistical technique (called meta-analysis) to settle this controversial issue. The conclusion: the games may yield improvements in the narrow task being trained, but this does not transfer to broader skills like the ability to read or do arithmetic, or to other measures of intelligence."
markfrankel18

Science Isn't Broken | FiveThirtyEight - 0 views

  • If we’re going to rely on science as a means for reaching the truth — and it’s still the best tool we have — it’s important that we understand and respect just how difficult it is to get a rigorous result
  • Scientists’ overreliance on p-values has led at least one journal to decide it has had enough of them. In February, Basic and Applied Social Psychology announced that it will no longer publish p-values.
  • P-hacking and similar types of manipulations often arise from human biases. “You can do it in unconscious ways — I’ve done it in unconscious ways,” Simonsohn said. “You really believe your hypothesis and you get the data and there’s ambiguity about how to analyze it.” When the first analysis you try doesn’t spit out the result you want, you keep trying until you find one that does.
  • Science isn’t broken, nor is it untrustworthy. It’s just more difficult than most of us realize. We can apply more scrutiny to study designs and require more careful statistics and analytic methods, but that’s only a partial solution. To make science more reliable, we need to adjust our expectations of it.
  • From 2001 to 2009, the number of retractions issued in the scientific literature rose tenfold. It remains a matter of debate whether that’s because misconduct is increasing or is just easier to root out.
  • Science is not a magic wand that turns everything it touches to truth. Instead, “science operates as a procedure of uncertainty reduction,” said Nosek, of the Center for Open Science. “The goal is to get less wrong over time.”
  • Some of these biases are helpful, at least to a point. Take, for instance, naive realism — the idea that whatever belief you hold, you believe it because it’s true. This mindset is almost essential for doing science, quantum mechanics researcher Seth Lloyd of MIT told me. “You have to believe that whatever you’re working on right now is the solution to give you the energy and passion you need to work.” But hypotheses are usually incorrect, and when results overturn a beloved idea, a researcher must learn from the experience and keep, as Lloyd described it, “the hopeful notion that, ‘OK, maybe that idea wasn’t right, but this next one will be.’”
Lawrence Hrubes

BBC News - The blind breast cancer detectors - 0 views

  • Gerd Gigerenzer's test: In 2006 and 2007 Gigerenzer gave a series of statistics workshops to gynaecologists, and kicked off every session with the same question. A 50-year-old woman, no symptoms, participates in routine mammography screening. She tests positive, is alarmed, and wants to know from you whether she has breast cancer for certain or what the chances are. Apart from the screening results, you know nothing else about this woman. How many women who test positive actually have breast cancer? What is the best answer: nine in 10, eight in 10, one in 10, or one in 100? Gigerenzer then supplied the doctors with data about Western women of this age (his figures were based on US studies from the 1990s, rounded up or down for simplicity; recent stats from Britain's National Health Service are slightly different). The probability that a woman has breast cancer is 1% ("prevalence"). If a woman has breast cancer, the probability that she tests positive is 90% ("sensitivity"). If a woman does not have breast cancer, the probability that she nevertheless tests positive is 9% ("false alarm rate"). In one session, almost half the gynaecologists said the woman's chance of having cancer was nine in 10. Only 21% said that the figure was one in 10, which is the correct answer.
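The one-in-10 answer drops straight out of Bayes' theorem; a minimal sketch using Gigerenzer's rounded figures:

```python
# Gigerenzer's rounded figures for a 50-year-old woman in routine screening.
prevalence = 0.01    # P(cancer)
sensitivity = 0.90   # P(test positive | cancer)
false_alarm = 0.09   # P(test positive | no cancer)

# Bayes' theorem: P(cancer | test positive)
p_positive = prevalence * sensitivity + (1 - prevalence) * false_alarm
p_cancer_given_positive = prevalence * sensitivity / p_positive

print(f"{p_cancer_given_positive:.3f}")  # 0.092, i.e. about one in 10
```

Gigerenzer's own preferred framing uses natural frequencies, which make the same arithmetic intuitive: of 1,000 women, about 10 have cancer and 9 of them test positive, while roughly 89 of the 990 healthy women also test positive, so only 9 of the ~98 positives are genuine.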
Lawrence Hrubes

BBC News - Are most victims of terrorism Muslim? - 1 views

  • After the Charlie Hebdo attack, a Paris imam went to the scene and condemned the murders. "These victims are martyrs, and I shall pray for them with all my heart," said Hassen Chalghoumi (above). He was also quoted as saying that 95% of victims of terrorism are Muslim. How accurate is this statistic?
  • When people in the West think of terrorist attacks, they may think of Charlie Hebdo, or the 7/7 London tube and bus bombs, the Madrid train bombs and of course 9/11 - and although some Muslims did die in these attacks, most of the victims wouldn't have been Muslim. The overall number of deadly terrorist attacks in France, the UK, Spain and the US, however, is very low by international standards. Between 2004 and 2013, the UK suffered 400 terrorist attacks, mostly in Northern Ireland, and almost all of them were non-lethal. The US suffered 131 attacks, fewer than 20 of which were lethal. France suffered 47 attacks. But in Iraq, there were 12,000 attacks and 8,000 of them were lethal.
Lawrence Hrubes

BBC World Service - More or Less, The death toll in Syria - 0 views

  •  
    As global leaders remain divided on whether to carry out a military strike against Syria in response to the apparent use of chemical weapons against its people, Tim Harford looks at the different claims made about how many people have been killed. The United States, the UK and France are sharing intelligence, but all quote different estimates of how many people they think died in the attack by Syrian President Bashar al-Assad's forces. Tim speaks to Kelly Greenhill, a professor of political science at Tufts University in the US, and co-author of Sex, Drugs and Body Counts about why the numbers vary so widely. And he speaks to Megan Price from the Human Rights Data Analysis Group, who has been trying to keep a tally of the deaths in Syria since the conflict began.
Lawrence Hrubes

BBC News - Do 85 rich people have same wealth as half the world? - 0 views

  • Are the 85 richest people in the world as wealthy as the world's poorest half? A number of listeners got in touch to ask about this fact, widely reported around the world, from the Washington Post to CNN. The figure comes from a report by British aid charity Oxfam. It got lots of attention, so the charity produced another figure for the UK, stating that the five richest families had more wealth than the poorest 20%. But how were these figures calculated?
Lawrence Hrubes

Student Course Evaluations Get An 'F' : NPR Ed : NPR - 0 views

  • In universities around the world, semesters end with students filling out similar surveys about their experience in the class and the quality of the teacher. Student ratings are high-stakes. They come up when faculty are being considered for tenure or promotions. In fact, they're often the only method a university uses to monitor the quality of teaching. Recently, a number of faculty members have been publishing research showing that the comment-card approach may not be the best way to measure the central function of higher education.
markfrankel18

Why Smart People Are Stupid - The New Yorker - 1 views

  • When people face an uncertain situation, they don’t carefully evaluate the information or look up relevant statistics. Instead, their decisions depend on a long list of mental shortcuts, which often lead them to make foolish decisions.
  • Perhaps our most dangerous bias is that we naturally assume that everyone else is more susceptible to thinking errors, a tendency known as the “bias blind spot.” This “meta-bias” is rooted in our ability to spot systematic mistakes in the decisions of others—we excel at noticing the flaws of friends—and inability to spot those same mistakes in ourselves.
markfrankel18

Daniel Kahneman: 'What would I eliminate if I had a magic wand? Overconfidenc... - 0 views

  • Not even he believes that the various flaws that bedevil decision-making can be successfully corrected. The most damaging of these is overconfidence: the kind of optimism that leads governments to believe that wars are quickly winnable and capital projects will come in on budget despite statistics predicting exactly the opposite. It is the bias he says he would most like to eliminate if he had a magic wand. But it “is built so deeply into the structure of the mind that you couldn’t change it without changing many other things”.
markfrankel18

Want to spin your data? Five Ways to Lie with Charts - 0 views

  • In the right (or wrong) hands, bar graphs and pie charts can become powerful agents of deception, tricking you into inferring trends that don’t exist, mistaking less for more, and missing alarming facts. The best measure of a chart’s honesty is the amount of time it takes to interpret it, says Massachusetts Institute of Technology perceptual scientist Ruth Rosenholtz: “A bad chart requires more cognitive processes and more reasoning about what you’ve seen.”
markfrankel18

Rich countries and the minorities they discriminate against, mapped - Quartz - 1 views

  • So what do these findings really mean? Well, there are a few different ways of thinking about the economics of discrimination in the workplace. One, known as taste-based discrimination, simply suggests that some employers have a preference against hiring minorities, even if they’re just as productive as other workers. Another, implicit discrimination, is thought to reflect attitudes that the people making discriminatory decisions are themselves unaware of. Finally, there’s the notion of statistical discrimination, in which the person making the decision is relying not on the characteristics—for example the job skills—of the person in question, but rather on some notion of “the average characteristics of the group” to which that person belongs. But really those are only elaborate ways of dressing up the obvious: discrimination is discrimination.