UTS-AEI: Group items tagged "numbers"

Simon Knight

What's the Right Number of Taxis (or Uber or Lyft Cars) in a City? - The New York Times

  • When Uber and Lyft first entered the market, offering a ride-hailing service that would come to include tens of thousands of amateur drivers, most major American cities had been tightly controlling the competition. New York City allowed exactly 13,637 licenses for taxicabs. Chicago permitted 6,904, Boston 1,825 and Philadelphia 1,600. These numbers weren't entirely arbitrary. Cities had spent decades trying to set numbers that would keep drivers and passengers satisfied and streets safe. But the exercise was always a fraught one. And New York City now faces an even more complex version of it, after the passage of legislation this week that will temporarily cap services like Uber and Lyft. The city plans to halt new licenses for a year while it studies the impact of ride-hailing and establishes new rules for driver pay. In doing so, it renews an old question: What's the right number of vehicles anyway? The answer isn't easy, because it depends largely on which problem officials are trying to solve. Do they want to minimize wait times for passengers or maximize wages for drivers? Do they want the best experience for individual users, or the best outcome for the city - including for residents who use city streets but never ride taxis or Uber at all?
Simon Knight

What is gender pay gap reporting, and what does it mean? | Society | The Guardian

  • When talking about the gender pay gap, people tend to talk about the median figure rather than the mean. The mean is calculated by adding up all of the wages of employees in a company and dividing that figure by the number of employees. This means the final figure can be skewed by a small number of highly paid individuals. The median is the number that falls in the middle of a range when everyone's wages are lined up from smallest to largest, and is more representative when there is a lot of variation in pay.
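To make the mean/median distinction concrete, here is a minimal Python sketch using made-up salaries (the figures are illustrative, not from the article):

```python
from statistics import mean, median

# Hypothetical salaries: four modest earners and one highly paid individual.
salaries = [24_000, 26_000, 28_000, 30_000, 250_000]

print(mean(salaries))    # 71600 - pulled far upwards by the single outlier
print(median(salaries))  # 28000 - the middle value, unaffected by the outlier
```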
Simon Knight

The NHS doesn't need £2,000 from each household to survive. It's fake maths |...

  • Some great quotes in this piece! The language of politics warps our democracy again and again, as in this tax calculation. The media must unpack statistics. Last week, the Institute for Fiscal Studies and the Health Foundation published a report on funding for health and social care. One figure from the report was repeated across the headlines. For the NHS to stay afloat, it would require "£2,000 in tax from every household". Shocking stuff! If you're sitting at a bar with a group of friends and Bill Gates walks in, the average wealth of everyone in the room makes you all millionaires. But if you try to buy the most expensive bottle of champagne in the place, your debit card will still be declined. The issue to be addressed, and one to which there is no fully correct answer, is how we can put numbers into a context that enables people to make informed choices. Big numbers are hard to conceptualise - most of us have no intuitive understanding of what £56bn even looks like.
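As a sketch of where a "per household" figure like this comes from: divide the headline total by a household count. The roughly 28 million UK households below is my assumption for illustration, not a figure from the article.

```python
total_extra_tax = 56_000_000_000  # the £56bn mentioned above
households = 28_000_000           # assumed: roughly 28 million UK households

print(f"£{total_extra_tax / households:,.0f} per household")  # £2,000
```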
Simon Knight

"1 in 10 pregnant women" or "51 babies"? Only NPR meets challenge of interpre... - 1 views

  •  
    almost all the stories I looked at emphasized that "1 in 10 pregnant women" with Zika gave birth to babies with birth defects.But how many actual women does the "1 in 10" figure represent? How many actual babies with birth defects?You have to wade far down into all of these stories to find the numbers, whereas NPR puts them right in its headline:51 Babies Born With Zika-Related Birth Defects In The U.S. Last YearThe fact that 1 in 10 women with Zika have babies with birth defects is accurate but not nearly as informative as it could be.And when communicating to a general audience, it's misleading to the point of scaremongering to make the "1 in 10" headline the take-home message from the study.
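The two reported numbers also pin down the size of the underlying group, a quick check worth doing whenever a rate and a count appear together (both inputs come from the excerpt; only the division is new):

```python
babies_with_defects = 51  # NPR's headline count
rate = 1 / 10             # the "1 in 10" figure

implied_group = babies_with_defects / rate
print(round(implied_group))  # ~510 pregnant women with confirmed Zika
```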
Simon Knight

A Million Children Didn't Show Up In The 2010 Census. How Many Will Be Missing In 2020?...

  • Since the census is the ultimate measure of population in the U.S., one might wonder how we could even know if its count was off. In other words, who recounts the count? Well, the Census Bureau itself, but using a different data source. After each modern census, the bureau carries out research to gauge the accuracy of the most recent count and to improve the survey for the next time around. The best method for determining the scope of the undercount is refreshingly simple: the bureau compares the total number of recorded births and deaths for people of each birth year, then adds in an estimate of net international migration and … that's it. With that number, the bureau can vet the census - which missed 4.6 percent of kids under 5 in 2010, according to this check.
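A minimal sketch of that demographic-analysis check for a single birth cohort; every input below is made up for illustration, not Census Bureau data:

```python
# Assumed figures for one birth-year cohort.
births = 4_000_000        # recorded births
deaths = 50_000           # recorded deaths in the cohort since birth
net_migration = 100_000   # estimated net international migration

expected = births - deaths + net_migration  # the independent benchmark
census_count = 3_860_000                    # assumed count from the census

undercount = (expected - census_count) / expected
print(f"Undercount: {undercount:.1%}")      # ~4.7% in this toy example
```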
Simon Knight

4 examples of computational thinking in journalism - Online Journalism Blog

  • Nice piece on computational thinking and data journalism. For example... This story, published in the UK tabloid newspaper The Mirror, is a great example of understanding how a computer might 'see' information and be able to help you extract a story from it. The data behind the story is a collection of over 300,000 pieces of sheet music. On paper, that music would be a collection of ink; but because it has now been digitised, it is quantified. That means we can perform calculations and comparisons against it, as the sketch below illustrates. We could:
      • Count the number of notes
      • Calculate the variety (number of different notes)
      • Identify the most common notes
      • Identify the notes with the maximum value
      • Identify the notes with the minimum value
      • Calculate a 'range' by subtracting the minimum from the maximum
    The journalist has seen this, and decided that the last option has perhaps the most potential to be newsworthy - we assume some singers have wider ranges than others, and the reality may surprise us (a quality of newsworthiness).
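A minimal sketch of those calculations over a made-up melody encoded as MIDI note numbers (the Mirror's actual sheet-music data is not reproduced here):

```python
# Assumed MIDI pitches for an illustrative melody.
melody = [60, 62, 64, 65, 67, 72, 55, 60]

note_count = len(melody)                          # number of notes
variety = len(set(melody))                        # number of different notes
most_common = max(set(melody), key=melody.count)  # most frequent note
note_range = max(melody) - min(melody)            # maximum minus minimum

print(note_count, variety, most_common, note_range)  # 8 7 60 17
```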
Simon Knight

The way we train AI is fundamentally flawed - MIT Technology Review

  • Roughly put, building a machine-learning model involves training it on a large number of examples and then testing it on a bunch of similar examples that it has not yet seen. When the model passes the test, you're done. What the Google researchers point out is that this bar is too low. The training process can produce many different models that all pass the test but - and this is the crucial part - these models will differ in small, arbitrary ways, depending on things like the random values given to the nodes in a neural network before training starts, the way training data is selected or represented, the number of training runs, and so on. These small, often random, differences are typically overlooked if they don't affect how a model does on the test. But it turns out they can lead to huge variation in performance in the real world. In other words, the process used to build most machine-learning models today cannot tell which models will work in the real world and which ones won't.
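To see how equally well-trained models can disagree, here is a toy Python sketch of the underlying issue (underspecification), using a deliberately tiny linear model rather than the paper's actual setup. With more parameters than training examples, many weight vectors fit the training data exactly yet predict differently on a new input:

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(5, 10))  # 5 examples, 10 features: underspecified
y_train = rng.normal(size=5)
x_new = rng.normal(size=10)         # an input unlike the training data

for seed in range(3):
    w0 = np.random.default_rng(seed).normal(size=10)  # random initialisation
    # Adjust w0 just enough that the model fits the training data exactly.
    w = w0 + np.linalg.pinv(X_train) @ (y_train - X_train @ w0)
    train_err = np.abs(X_train @ w - y_train).max()
    print(f"seed {seed}: train error {train_err:.1e}, new input -> {x_new @ w:+.2f}")
```

All three models pass the "test" of fitting the data, but their answers on the unseen input differ, which is the paper's point in miniature.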
Simon Knight

Significant Digits For Monday, Dec. 12, 2016 | FiveThirtyEight

  • "Significant Digits" is a daily digest of the numbers tucked inside the news, by fivethirtyeight.com. E.g. in this issue: "29 percent - percentage of Americans who regularly work weekends. Another 27 percent regularly work between the hours of 10 p.m. and 6 a.m." Maybe useful for understanding how important quantitative information is in the world around us.
Simon Knight

Methodology: finding the numbers on Australia's foreign aid spending over time

  • As the author of this FactCheck, I was asked to review the facts on Australia's foreign aid spending from the Menzies era to 2016-17. Sir Robert Menzies was prime minister from 1949 to 1966, which is the Menzies era for present purposes. (Menzies also served as prime minister from 1939 to 1941.) I examined the evidence for and against this statement: "Aid was at its highest under Menzies, at 0.5% … when per capita income was much lower." - World Vision Australia Chief Advocate Tim Costello, quoted in The Sydney Morning Herald, December 28, 2016. I found the statement to be incorrect, strictly interpreted, though Costello's broader point is valid. The ratio of Australia's aid to its gross national income has never exceeded 0.48%, and that level was achieved slightly after the conclusion of the Menzies era, in the financial year 1967-68. Below, I explain how I arrived at this conclusion, providing more detail than could be accommodated in the FactCheck itself.
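The core quantity in the FactCheck is a simple ratio of aid spending to gross national income; a sketch with placeholder figures (neither number below is from the historical series):

```python
aid_spending = 3.8e9  # assumed annual aid budget, A$
gni = 790e9           # assumed gross national income, A$

print(f"{aid_spending / gni:.2%}")  # 0.48% - the historical peak cited above
```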
Simon Knight

Want to quit a bad habit? Here's one way to compare treatments

  • Whether it's quitting smoking, reducing alcohol intake or making healthier dietary choices, many of us have habits we'd like to change. But it's really hard to know which treatment path to take. To advise their patients on the best course of action, doctors sometimes compare treatments using something called the "number needed to treat" (NNT). In deciding whether to embark on a course of treatment, NNT can help. But the term is easily misunderstood by patients, and by doctors as well. So it's useful to break down what NNT means.
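A minimal sketch of the standard NNT calculation - one divided by the absolute risk reduction - using assumed event rates rather than figures from the article:

```python
control_event_rate = 0.20  # assumed: 20% of untreated patients relapse
treated_event_rate = 0.15  # assumed: 15% of treated patients relapse

arr = control_event_rate - treated_event_rate  # absolute risk reduction
nnt = 1 / arr
print(f"NNT = {nnt:.0f}")  # 20: treat 20 people to prevent one extra relapse
```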
Simon Knight

Communicating large amounts: A new strategy is needed | News & Analysis | Data Driven J...

  • What's the most efficient way to communicate a large amount to a reader? We ran an experiment to find out. The results show that we must give up on senseless "football fields" comparisons and focus on finding out whether a number matters or not.
Simon Knight

How do we know statistics can be trusted? We talked to the humans behind the numbers to...

  • In our research, which involves talking to statisticians, public servants and journalists who produce and communicate the statistics that govern our lives, people say overwhelmingly that faith and trust are essential parts of what makes statistics useful. Despite the objective and impartial appearance of statistics, it is a web of people and human processes that makes them trustworthy.
Simon Knight

When doing data reporting, look at the raw numbers, not just at percentages - and write ...

  • A headline in The New York Times today reads "In the Shopping Cart of a Food Stamp Household: Lots of Soda." Is it true? The story itself provides hints that the headline is misleading, and likely to damage the image of the SNAP program and its beneficiaries. This is dangerous, considering that many readers look at clickbaity headlines, like the NYTimes one, but don't read the stories. SNAP households aren't different from the rest of households. Most Americans buy and drink way too much soda and, as a result, obesity and Type II diabetes have reached epidemic levels. The story says that households that receive food stamps spend 9.3% of their grocery budget on soft drinks, while families in general spend 7.1%. This is one of those cases when reporting just percentages, and not taking into account other variables, such as total spending on groceries, sounds fishy.
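A quick sketch of why the raw numbers matter: the two budget shares come from the story, while the weekly grocery budgets are assumptions for illustration only:

```python
snap_share, other_share = 0.093, 0.071    # soft-drink share of grocery budget
snap_budget, other_budget = 100.0, 150.0  # assumed weekly grocery spend, $

print(f"${snap_budget * snap_share:.2f}")    # $9.30 on soda
print(f"${other_budget * other_share:.2f}")  # $10.65 - smaller share, more dollars
```

A household can spend a larger share of its budget on soda yet fewer actual dollars, which is exactly the distinction the headline glosses over.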
Simon Knight

Could Trump Really Deport Millions of Unauthorized Immigrants? - The New York Times

  • This is a really great example of using a visualisation to communicate a quantitative fact check. The claim is a good case for a basic plausibility check: think about what numeric information you'd need to understand it - how many people are deported now (the baseline), and what are the estimates for the maximum number of unauthorized immigrants in the country?
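A back-of-the-envelope version of that plausibility check; both inputs below are assumptions for illustration, not reported statistics:

```python
deportations_per_year = 400_000     # assumed current baseline rate
unauthorized_estimate = 11_000_000  # assumed size of the population

years_needed = unauthorized_estimate / deportations_per_year
print(f"~{years_needed:.0f} years at the baseline rate")  # ~28 years
```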
Simon Knight

Some of our best work from 2016 | FiveThirtyEight

  • This is a great list of evidence-based stories by FiveThirtyEight, which uses statistical analysis - hard numbers - to tell compelling stories about elections, politics, sports, science and economics. They also produced a list of some great stories from other venues: http://fivethirtyeight.com/features/damn-we-wish-wed-written-these-11-stories/
Simon Knight

Facts are the reason science is losing during the current war on reason | Science | The...

  • Interesting perspective on communicating evidence. With controversy about science communication, facts and alternative facts hitting the headlines recently, I've been having a number of conversations with colleagues from all over the world about why science seems to be losing in the current war on reason. This isn't in the usual fringe battle fronts like creationism or flat-Earthers. It's on topics deep behind our lines, in areas like whether climate change exists or not, how many people were present at a given time at a given place and whether one man with a questionable grasp on reality should be the only source people get their news from.
Simon Knight

Journalists Need to Do the Math - Columbia Journalism Review

  • "Journalists Need to Do the Math: Numbers still make many watchdogs whimper."
Simon Knight

How we edit science part 4: how to talk about risk, and words and images not to use

  • You may have heard the advice for pregnant women to avoid eating soft cheeses. This is because soft cheeses can sometimes carry the Listeria monocytogenes bacteria, which can cause a mild infection. In some cases, the infection can be serious, even fatal, for the unborn child. However, the infection is very rare, affecting only around 65 people out of 23.5 million in Australia in 2014. That's 0.0003% of the population. Of these, only around 10% are pregnant women. Of these, only 20% of infections prove fatal to the foetus. We're getting down to some very small numbers here. If we talked about every risk factor in our lives the way health authorities talk about soft cheeses, we'd likely don a helmet and kneepads every morning after we get out of bed. And we'd certainly never drive a car. The upshot of this example is to emphasise that our intuitions about risk are often out of step with the actualities. So journalists need to take great care when reporting risk so as not to exacerbate our intuitive deficits as a species.
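Working through the chain of rates quoted above (all three numbers come from the excerpt itself):

```python
population = 23_500_000
infections = 65  # listeriosis cases, Australia, 2014

infection_rate = infections / population  # ~0.0003% of the population
pregnant_share = 0.10                     # ~10% of cases are pregnant women
fatal_share = 0.20                        # ~20% of those prove fatal to the foetus

chained_rate = infection_rate * pregnant_share * fatal_share
print(f"{infection_rate:.4%} infected; {chained_rate:.6%} end-to-end")
```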
Simon Knight

When the numbers aren't enough: how different data work together in research

  • As an epidemiologist, I am interested in disease - and more specifically, who in a population currently has or might get that disease. What is their age, sex, or socioeconomic status? Where do they live? What can people do to limit their chances of getting sick? Questions exploring whether something is likely to happen or not can be answered with quantitative research. By counting and measuring, we quantify (measure) a phenomenon in our world, and present the results through percentages and averages. We use statistics to help interpret the significance of the results. While this approach is very important, it can't tell us everything about a disease and people's experiences of it. That's where qualitative data becomes important.
Simon Knight

Who Should Recount Elections: People … Or Machines? | FiveThirtyEight

  • Interesting discussion of data on vote recounts, and on electronic versus hand-counting methods (in America, where electronic voting machines are quite common). These numbers represent three main kinds of disputes, Foley told me. First, candidates (and their lawyers) argue over which ballots should be counted and which should be thrown out as ineligible. Then, they argue over which candidate specific ballots should count for. Finally, they argue over whether all the eligible votes were counted correctly - the actual recount. Humans are much better than machines at making decisions around the first two kinds of ambiguous disputes, Stewart said, but evidence suggests that the computers are better at counting. Michael Byrne, a psychology professor at Rice University who studies human-computer interaction, agreed. "That's kind of what they're for," he said.