
UTS-AEI: Group items tagged data-problem


Simon Knight

Five ways tech is crowdsourcing women's empowerment | Global Development Professionals ...

• Citizen-generated data is especially important for women's rights issues. In many countries, the lack of women in positions of institutional power, combined with slow, bureaucratic systems and a lack of prioritisation of women's rights issues, means data isn't gathered on relevant topics, let alone appropriately responded to by the state. Even when data is gathered by institutions, societal pressures may mean it remains inadequate. In the case of gender-based violence, for instance, women often suffer in silence, worrying that nobody will believe them or that they will be blamed. Providing a way for women to contribute data anonymously or, if they so choose, with their own details can be key to documenting violence and understanding the scale of a problem, and thus to deciding upon appropriate responses.
Simon Knight

The way we train AI is fundamentally flawed - MIT Technology Review

• Roughly put, building a machine-learning model involves training it on a large number of examples and then testing it on a bunch of similar examples it has not yet seen. When the model passes the test, you're done. What the Google researchers point out is that this bar is too low. The training process can produce many different models that all pass the test, but (and this is the crucial part) these models will differ in small, arbitrary ways depending on things like the random values given to the nodes in a neural network before training starts, the way training data is selected or represented, the number of training runs, and so on. These small, often random, differences are typically overlooked if they don't affect how a model does on the test. But it turns out they can lead to huge variation in performance in the real world. In other words, the process used to build most machine-learning models today cannot tell which models will work in the real world and which ones won't.
Simon Knight

Shopping for Health Care Simply Doesn't Work. So What Might? - The New York Times

• An interesting look at the data around private healthcare and marketisation. Each year, for well over a decade, more people have faced higher health insurance deductibles. The theory goes like this: the more of your own money you have to spend on health care, the more careful you will be, buying only necessary care and purging waste from the system. But that theory doesn't fully mesh with reality: high deductibles aren't working as intended. A body of research, including randomized studies, shows that people do in fact cut back on care when they have to spend more for it. The problem is that they don't cut only wasteful care; they also forgo the necessary kind. This, too, is well documented, including with randomized studies. People don't know what care they need, which is why they consult doctors.
Simon Knight

Working Where Statistics and Human Rights Meet | CHANCE

• An introduction to a set of deep-dive articles on an important issue. ... When we tell people that we work at the intersection of statistics and human rights, the reaction is often surprise. Everyone knows that lawyers and journalists think about human rights problems … but statisticians? Yet documenting and proving human rights abuses frequently involves the need for quantification. In the case of war crimes and genocide, guilt or innocence can hinge on questions of whether violence was systematic and widespread, or whether one group was targeted at a differential rate compared to others. Similar issues can arise in assessing violations of civil, social, and economic rights. Sometimes the questions can be answered through simple tabulations, but often more complex methods of data collection and analysis are required.
Simon Knight

The Supreme Court Is Allergic To Math | FiveThirtyEight

• The Supreme Court does not compute. Or at least some of its members would rather not. The justices, the most powerful jurists in the land, seem to have a reluctance, even an allergy, to taking math and statistics seriously. For decades, the court has struggled with quantitative evidence of all kinds in a wide variety of cases. Sometimes justices ignore this evidence. Sometimes they misinterpret it. And sometimes they cast it aside in order to hold on to more traditional legal arguments. (And, yes, sometimes they also listen to the numbers.) Yet the world itself is becoming more computationally driven, and some of those computations will need to be adjudicated before long. Some major artificial intelligence case will likely come across the court's desk in the next decade, for example. By voicing an unwillingness to engage with data-driven empiricism, justices, and thus the court, are at risk of making decisions without fully grappling with the evidence. This problem was on full display earlier this month, when the Supreme Court heard arguments in Gill v. Whitford, a case that will determine the future of partisan gerrymandering, and the contours of American democracy along with it. As my colleague Galen Druke has reported, the case hinges on math: is there a way to measure a map's partisan bias and to create a standard for when a gerrymandered map infringes on voters' rights?
Simon Knight

Prepare for reanimation of the zombie myth 'no global warming since 2016' | Dana Nuccit...

• Climate deniers have been peddling the myth 'no warming since [insert date]' for over a decade. It's a popular myth among those who benefit from maintaining the status quo, because if the problem doesn't exist, there's obviously no need for action to solve it. And it's an incredibly easy argument to make at any time, using the telltale technique of climate denial known as cherry picking.
Simon Knight

Headline vs. study: Bait and switch? - HealthNewsReview.org

• We all do it in journalism. We are taught to write a headline that a) captures what the story is about, and b) captures the reader's attention. There's nothing wrong with that. The problem comes in when the headline misleads or misinforms. And, as is so often the case with healthcare topics, that sort of disconnect has the potential to do more harm than good.
Simon Knight

Closing the gap in Indigenous literacy and numeracy? Not remotely - or in cities

• Every year in Australia, the National Assessment Program - Literacy and Numeracy (NAPLAN) results show Indigenous school students are well behind their non-Indigenous peers. Reducing this disparity is a vital part of Australia's national Closing the Gap policy. ... Using an updated version of our equivalent year levels metric, introduced in Grattan Institute's 2016 report Widening Gaps, we estimate year nine Indigenous students in very remote areas are five years behind in numeracy, six years behind in reading, and seven to eight years behind in writing. In other words, the average year nine Indigenous student in a very remote area scores about the same in NAPLAN reading as the average year three non-Indigenous city student, and significantly lower in writing. But it would be a big mistake to see this only as a problem for isolated outback communities. Most Indigenous students live in cities or regional areas. So, even though learning outcomes are worse in remote and very remote areas, city and regional students account for more than two-thirds of the lost years of learning.