
UTS-AEI / Group items tagged: accuracy


Simon Knight

A Million Children Didn't Show Up In The 2010 Census. How Many Will Be Missing In 2020?... - 0 views

  • Since the census is the ultimate measure of population in the U.S., one might wonder how we could even know if its count was off. In other words, who recounts the count? Well, the Census Bureau itself, but using a different data source. After each modern census, the bureau carries out research to gauge the accuracy of the most recent count and to improve the survey for the next time around. The best method for determining the scope of the undercount is refreshingly simple: the bureau takes the total number of recorded births for people of each birth year, subtracts the recorded deaths, then adds in an estimate of net international migration and … that's it. With that number, the bureau can vet the census - which missed 4.6 percent of kids under 5 in 2010, according to this check.
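A rough sketch of the arithmetic behind that check. The figures below are invented for illustration only (they are not Census Bureau data), but they show how the "births minus deaths plus migration" estimate is compared against the census count:

```python
# Hypothetical demographic-analysis check on a census count.
# All numbers are invented for illustration; they are not Census Bureau figures.

births = 4_000_000          # recorded births for one birth cohort
deaths = 30_000             # recorded deaths in that cohort since birth
net_migration = 80_000      # estimated net international migration for the cohort

expected_population = births - deaths + net_migration   # the "that's it" estimate
census_count = 3_863_000                                # what the census recorded

undercount = expected_population - census_count
undercount_rate = undercount / expected_population
print(f"Estimated undercount: {undercount:,} ({undercount_rate:.1%})")
```

With these made-up inputs the estimated undercount works out to about 4.6%, the same order as the figure cited for under-5s in 2010.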
Simon Knight

What happens when misinformation is corrected? Understanding the labeling of content - 0 views

  • What happens once misinformation is corrected? Is it effective at all? A major problem for social media platforms is the difficulty of reducing the spread of misinformation. In response, measures such as the labeling of false content and related articles have been created to correct users' perceptions and accuracy assessments. Although this may seem a clever initiative from social media platforms, helping users understand which information can be trusted, such restrictive measures also raise pivotal questions. What happens to posts that are false but do not display any tag flagging their untruthfulness? Will we be able to discern them?
Simon Knight

Lies, damned lies and statistics: Why reporters must handle data with care | News & Ana... - 0 views

  • During the 2016 EU referendum campaign, both sides used statistics pretty freely to back their arguments. Understandably, UK broadcasters felt compelled to balance competing perspectives, giving audiences the opportunity to hear the relative merits of leaving or remaining in the EU. In doing so, however, the truth of these statistical claims was not always properly tested. This might help explain some of the public's misconceptions about EU membership. For example, although independent sources repeatedly challenged the Leave campaign's claim that the UK government spent £350m per week on EU membership, an Ipsos MORI survey found that almost half of respondents still believed it just days before the vote. Of the 6,916 news items examined in our research, more than 20% featured a statistic. Most of these statistical references were fairly vague, with little context or explanation; overall, only a third provided some context or made use of comparative data. Statistics featured mostly in stories about business, the economy, politics and health. Three-quarters of all economics items, for example, featured at least one statistic, compared with almost half of news about business. But there were some areas - where statistics might play a useful role in communicating trends or levels of risk - in which they were rarely used.
Simon Knight

Sensitivity, specificity and understanding medical tests - 0 views

  • Interesting discussion of why headlines like this one - "85% accurate for the detection of stomach cancer", about an experimental breath test - are problematic (because such tests flag some people who don't have the condition, and can miss people who genuinely do have it!). Good example using pregnancy tests as an infographic.
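A toy illustration of the point, with invented numbers rather than figures from the article: a single headline "accuracy" hides two distinct error rates, sensitivity (catching people who have the condition) and specificity (clearing people who don't).

```python
# Invented confusion-matrix counts for a hypothetical test on 200 people.
true_positives  = 85    # sick people the test correctly flags
false_negatives = 15    # sick people the test misses
true_negatives  = 85    # healthy people correctly cleared
false_positives = 15    # healthy people wrongly flagged

sensitivity = true_positives / (true_positives + false_negatives)   # catches the sick
specificity = true_negatives / (true_negatives + false_positives)   # clears the healthy
accuracy = (true_positives + true_negatives) / 200                  # the headline number

print(f"sensitivity={sensitivity:.0%}  specificity={specificity:.0%}  accuracy={accuracy:.0%}")
# A test can be "85% accurate" overall while still missing 15% of real cases
# and wrongly flagging 15% of healthy people.
```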
Simon Knight

The way we train AI is fundamentally flawed - MIT Technology Review - 0 views

  • Roughly put, building a machine-learning model involves training it on a large number of examples and then testing it on a bunch of similar examples that it has not yet seen. When the model passes the test, you're done. What the Google researchers point out is that this bar is too low. The training process can produce many different models that all pass the test but - and this is the crucial part - these models will differ in small, arbitrary ways, depending on things like the random values given to the nodes in a neural network before training starts, the way training data is selected or represented, the number of training runs, and so on. These small, often random, differences are typically overlooked if they don't affect how a model does on the test. But it turns out they can lead to huge variation in performance in the real world. In other words, the process used to build most machine-learning models today cannot tell which models will work in the real world and which ones won't.
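A minimal sketch of that point, assuming scikit-learn and NumPy are available; the dataset and the distribution shift are synthetic stand-ins, not the Google study's setup. Several models that differ only in their random initialisation can look interchangeable on an i.i.d. test set yet spread apart once the data shifts.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic classification task standing in for "the test" the article describes.
X, y = make_classification(n_samples=4000, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

# Crude stand-in for real-world deployment data: the same examples with a
# constant offset plus extra noise added to every feature (covariate shift).
rng = np.random.default_rng(0)
X_shifted = X_test + 0.8 * rng.standard_normal(X_test.shape) + 0.5

for seed in range(5):
    # Identical architecture and training data; only the random seed differs.
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                          random_state=seed)
    model.fit(X_train, y_train)
    iid_acc = model.score(X_test, y_test)        # seeds tend to look similar here
    shift_acc = model.score(X_shifted, y_test)   # the spread is typically wider here
    print(f"seed={seed}  iid test acc={iid_acc:.3f}  shifted acc={shift_acc:.3f}")
```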
Simon Knight

How accurate is your RAT? 3 scenarios show it's about more than looking for lines - 0 views

  • As Omicron surges through the community, getting the right answer from a Rapid Antigen Test (RAT) is not as straightforward as reading one or two lines off the kit. RATs are a convenient diagnostic tool to detect COVID virus fragments in nasal secretions or saliva. They are designed to be self-administered and give an answer in minutes. Detecting infection early is critical to preventing spread and allowing people at risk of severe disease to get timely access to close monitoring and new life-saving therapies. As governments plan to distribute tens of millions of RAT kits to schools and workplaces in coming weeks to help Australians work and study safely, it is important that we understand how best to use this diagnostic tool to reduce transmission and unnecessary disruptions to our lives and economy.
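The "more than looking for lines" point is largely about pre-test probability: the same kit gives very different odds that a positive line is a true positive depending on how common infection is in the group being tested. A hedged sketch of that arithmetic follows; the sensitivity and specificity figures are illustrative assumptions, not the measured performance of any particular kit.

```python
# Illustrative only: assumed sensitivity/specificity, not any real RAT kit's figures.
sensitivity = 0.80   # assumed chance the test is positive when you are infected
specificity = 0.98   # assumed chance the test is negative when you are not

for prevalence in (0.001, 0.02, 0.20):   # three scenarios: rare, moderate, widespread infection
    true_pos  = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    ppv = true_pos / (true_pos + false_pos)   # chance a positive line really means infection
    print(f"prevalence={prevalence:.1%}  P(infected | positive)={ppv:.1%}")
```

With these assumed figures, a positive result is right only a few percent of the time when infection is very rare, but over 90% of the time when a fifth of those tested are infected.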