
UTS-AEI: Group items tagged "error"


Simon Knight

The margin of error: 7 tips for journalists writing about polls and surveys - 0 views

  •  
    Journalists often make mistakes when reporting on data such as opinion poll results, federal jobs reports and census surveys because they don't quite understand - or they ignore - the data's margin of error. Data collected from a sample of the population will never perfectly represent the population as a whole. The margin of error, which depends primarily on sample size, is a measure of how precise the estimate is. The margin of error for an opinion poll indicates how close the match is likely to be between the responses of the people in the poll and those of the population as a whole. To help journalists understand margin of error and how to correctly interpret data from polls and surveys, we've put together a list of seven tips:
    1. Look for the margin of error - and report it. It tells you and your audience how much the results can vary.
    2. Remember that the larger the margin of error, the greater the likelihood the survey estimate will be inaccurate.
    3. Make sure a political candidate really has the lead before you report it.
    4. Note that there are real trends, and then there are mistaken claims of a trend.
    5. Watch your adjectives. (And it might be best to avoid them altogether.)
    6. Keep in mind that the margin of error for subgroups of a sample will always be larger than the margin of error for the full sample.
    7. Use caution when comparing results from different polls and surveys, especially those conducted by different organizations.
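The margin-of-error arithmetic behind these tips is simple enough to show directly. Below is a minimal sketch in Python, assuming a simple random sample and the usual 95% normal approximation; the sample sizes are illustrative.

```python
import math

def margin_of_error(sample_size: int, proportion: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion from a simple random sample.

    Uses the normal approximation z * sqrt(p * (1 - p) / n); p = 0.5 gives the
    widest (most conservative) interval, which is what most polls report.
    """
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# A typical 1,000-person poll: roughly +/- 3.1 percentage points.
print(f"n=1000: +/- {margin_of_error(1000) * 100:.1f} points")
# Quadrupling the sample only halves the margin of error.
print(f"n=4000: +/- {margin_of_error(4000) * 100:.1f} points")
```

Tip 6 falls out of the same formula: a subgroup has a smaller n than the full sample, so its margin of error is always wider.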
Simon Knight

The Tangled Story Behind Trump's False Claims Of Voter Fraud | FiveThirtyEight - 0 views

  •  
    Say you have a 3,000-person presidential election survey from a state where 3 percent of the population is black. If your survey is exactly representative of reality, you'd end up with 90 black people out of that 3,000. Then you ask them who they plan to vote for (for our purposes, we're assuming they're all voting). History suggests the vast majority will go with the Democrat. Over the last five presidential elections, Republicans have earned an average of only 7 percent of the black vote nationwide. However, your survey comes back with 19.5 percent of black voters leaning Republican. Now, that's the sort of unexpected result that's likely to draw the attention of a social scientist (or a curious journalist). But it should also make them suspicious. That's because when you're focusing on a tiny population like the black voters of a state with few black citizens, even a measurement error rate of 1 percent can produce an outcome that's wildly different from reality. That error could come from white voters who clicked the wrong box and misidentified their race. It could come from black voters who meant to say they were voting Democratic. In any event, the combination of an imbalanced sample ratio and measurement error can be deadly to attempts at deriving meaning from numbers - a grand piano dangling from a rope above a crenulated, four-tiered wedding cake. Just a handful of miscategorized people and - crash! - your beautiful, fascinating insight collapses into a messy disaster.
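The arithmetic in this scenario is worth making concrete. The sketch below reproduces it in Python; the 3,000-person sample, 3% black population, 1% misclassification rate and 7% historical GOP share come from the excerpt, while the 55% Republican share among white respondents is an illustrative assumption.

```python
# A minimal sketch of the arithmetic described above: how a 1 percent
# misclassification rate among white respondents distorts an estimate for a
# small subgroup. The 55% white Republican share is assumed for illustration.

n_total = 3000
black_share = 0.03                       # 3% of the state's population
n_black = round(n_total * black_share)   # 90 genuinely black respondents
n_white = n_total - n_black              # 2,910 white respondents

true_black_gop = 0.07   # historical GOP share of the black vote (from the excerpt)
white_gop = 0.55        # GOP share among white respondents (assumed)
misclass_rate = 0.01    # 1% of white respondents tick the wrong race box

mislabelled_white = n_white * misclass_rate      # about 29 people
labelled_black = n_black + mislabelled_white     # about 119 "black" respondents
gop_in_group = n_black * true_black_gop + mislabelled_white * white_gop

print(f"Apparent GOP share among 'black' respondents: {gop_in_group / labelled_black:.1%}")
# Roughly 19% instead of the true 7%: a handful of miscategorised people
# is enough to swamp the subgroup.
```

Nothing about the survey needs to be badly wrong for this to happen; the imbalance in group sizes does all the work.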
Simon Knight

Should newspapers be adding confidence intervals to their graphics? - Storybench - 1 views

  •  
    Should newspapers be adding confidence intervals to their graphics? Why, she asked, are newspapers like hers hesitant to print confidence intervals, a statistical measure of uncertainty? With the exception of noting sampling error in polling data, newspapers like the Times only show uncertainty when they're forced to - and often to prove the opposite of what point data might show.
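For graphics specifically, the mechanics of showing uncertainty are not the hard part. Below is a minimal sketch using matplotlib, with invented poll numbers: two point estimates whose 95% intervals overlap.

```python
import matplotlib.pyplot as plt

# Illustrative (made-up) poll figures: point estimates with a +/- 3 point margin of error.
candidates = ["Candidate A", "Candidate B"]
estimates = [48, 45]     # reported vote share, in percent
margins = [3, 3]         # 95% margin of error, in percentage points
x = range(len(candidates))

fig, ax = plt.subplots()
ax.errorbar(x, estimates, yerr=margins, fmt="o", capsize=5)
ax.set_xticks(list(x))
ax.set_xticklabels(candidates)
ax.set_ylabel("Vote share (%)")
ax.set_title("Overlapping intervals: the apparent lead may not be real")
plt.show()
```

Drawn this way, the chart makes the Storybench question concrete: the error bars tell readers whether the visual gap between two points means anything at all.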
Simon Knight

The biggest stats lesson of 2016 - Sense About Science USA - 0 views

  •  
    Data aren't dead, contrary to what some pundits stated post-election [2]; rather, the limitations of data are not always well reported. While pollsters will be reworking their models following the election, what can journalists do to improve their overall coverage of statistical issues in the future? First, discuss possible statistical biases, such as errors in sampling and polling, and what impact these might have on the results. Second, always provide measures of uncertainty, and root these uncertainties in real-world examples.
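The first piece of advice lends itself to a quick back-of-the-envelope check. The sketch below, with invented percentages, shows how an unrepresentative sample shifts a headline figure; none of the numbers come from the article.

```python
# A minimal sketch, with invented numbers, of the first piece of advice:
# quantify what a plausible sampling bias would do to the headline figure.
true_support = {"urban": 0.58, "rural": 0.40}      # assumed true support by group
population_share = {"urban": 0.60, "rural": 0.40}  # actual share of the electorate
sample_share = {"urban": 0.70, "rural": 0.30}      # urban voters over-represented in the poll

true_value = sum(true_support[g] * population_share[g] for g in true_support)
biased_poll = sum(true_support[g] * sample_share[g] for g in true_support)

print(f"True support:    {true_value:.1%}")   # 50.8%
print(f"Biased estimate: {biased_poll:.1%}")  # 52.6%
```

That gap between the true value and the biased estimate never shows up in the reported margin of error, which only accounts for random sampling variation.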
Simon Knight

Bad Medicine, Part 1: The Story Of 98.6 - Freakonomics Radio (podcast) - 0 views

  •  
    How statistics and research design have changed the face of medicine. We tend to think of medicine as a science, but for most of human history it has been scientific-ish at best. In the first episode of a three-part series, we look at the grotesque mistakes produced by centuries of trial and error, and ask whether the new era of evidence-based medicine is the solution.
Simon Knight

The science of influencing people: six ways to win an argument | Science | The Guardian - 0 views

  •  
    "I am quite sure now that often, very often, in matters of religion and politics a man's reasoning powers are not above the monkey's," wrote Mark Twain. Having written a book about our most common reasoning errors, I would argue that Twain was being rather uncharitable - to monkeys. Whether we are discussing Trump, Brexit, or the Tory leadership, we have all come across people who appear to have next to no understanding of world events - but who talk with the utmost confidence and conviction. And the latest psychological research can now help us to understand why.
Simon Knight

Fitness trackers' calorie measurements are prone to error - Health News - NHS Choices - 0 views

  •  
    "Fitness trackers out of step when measuring calories, research shows," The Guardian reports. An independent analysis of a number of leading brands found they were all prone to inaccurate recording of energy expenditure.
Simon Knight

Why polls seem to struggle to get it right - on elections and everything else | News & ... - 1 views

  •  
    The public understandably focuses on polling results and how much these results seem to vary. Take two presidential approval polls from March 21. Polling firm Rasmussen Reports reported that 50 percent of Americans approve of President Donald Trump's performance, while, that same day, Gallup stated that only 37 percent do. In late February, the website FiveThirtyEight listed 18 other presidential approval polls in which Trump's approval ratings ranged from 39 percent to 55 percent. Some of these pollsters queried likely voters, some registered voters and others adults, regardless of their voting status. Almost half of the polls relied on phone calls, another half on online polling and a few used a mix of the two. Further complicating matters, it's not entirely clear how calling cellphones or landlines affects a poll's results. Each of these choices has a consequence, and the range of results attests to the degree that these choices can influence results.
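A simulation makes the scale of these effects clearer. The sketch below assumes a true approval rate of 43% and simple random samples of 1,500 adults (both invented) and shows how much identically designed polls should differ from sampling alone.

```python
import random
import statistics

random.seed(0)

# A minimal sketch: how much should identically designed polls differ purely
# from random sampling? The true approval rate and sample size are invented.
TRUE_APPROVAL = 0.43
SAMPLE_SIZE = 1500

def run_poll() -> float:
    """Simulate one poll: sample SAMPLE_SIZE adults and return the approval share."""
    approvals = sum(random.random() < TRUE_APPROVAL for _ in range(SAMPLE_SIZE))
    return approvals / SAMPLE_SIZE

results = [run_poll() for _ in range(1000)]
print(f"lowest {min(results):.1%}, highest {max(results):.1%}, "
      f"standard deviation {statistics.pstdev(results):.1%}")
```

Nearly every simulated poll lands within about three points of the true value, so a same-day spread of 50% versus 37% has to come from the design choices the article describes, not from random noise.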
Simon Knight

National poll vs sample survey: how to know what we really think on marriage equality - 0 views

  •  
    The plan to use the Australian Bureau of Statistics to conduct the federal government's postal plebiscite on marriage reform raises an interesting question: wouldn't it be easier, and just as accurate, to ask the ABS to poll a representative sample of the Australian population rather than everyone?
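The statistical intuition behind the question is that, for a well-run random sample, precision is governed by the sample size rather than by the size of the population. A minimal sketch, using the standard 95% normal approximation and illustrative sample sizes:

```python
import math

def margin(n: int, z: float = 1.96, p: float = 0.5) -> float:
    """Conservative 95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1_000, 10_000, 50_000):
    print(f"Sample of {n:>6,}: +/- {margin(n):.1%}")
# Even 50,000 respondents pin a national figure down to a few tenths of a point,
# without surveying every enrolled voter.
```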
Simon Knight

A Million Children Didn't Show Up In The 2010 Census. How Many Will Be Missing In 2020?... - 0 views

  •  
    Since the census is the ultimate measure of population in the U.S., one might wonder how we could even know if its count was off. In other words, who recounts the count? Well, the Census Bureau itself, but using a different data source. After each modern census, the bureau carries out research to gauge the accuracy of the most recent count and to improve the survey for the next time around. The best method for determining the scope of the undercount is refreshingly simple: The bureau combines the total number of recorded births and deaths for people of each birth year, then adds in an estimate of net international migration and … that's it. With that number, the bureau can vet the census - which missed 4.6 percent of kids under 5 in 2010, according to this check.
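The arithmetic of that check (known as demographic analysis) is short enough to spell out. In the sketch below, every figure is invented except the 4.6% result quoted above, which the made-up numbers are chosen to roughly reproduce.

```python
# A minimal sketch of the bureau's check described above: build an expected count
# from vital records and migration estimates, then compare it with the census count.
# All inputs are invented for illustration.
births = 20_900_000          # recorded births for the relevant birth years (assumed)
deaths = 130_000             # recorded deaths among those children (assumed)
net_migration = 230_000      # estimated net international migration (assumed)
census_count = 20_030_000    # children under 5 actually counted in 2010 (assumed)

expected = births - deaths + net_migration
undercount = (expected - census_count) / expected
print(f"Expected {expected:,}, counted {census_count:,}, undercount {undercount:.1%}")
```

With these invented inputs the shortfall comes out near one million children, the scale the headline refers to.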
Simon Knight

The Media Has A Probability Problem | FiveThirtyEight - 0 views

  •  
    The media's demand for certainty - and its lack of statistical rigor - is a bad match for our complex world.