UTS-AEI: Group items tagged "AI"

Simon Knight

Artificial intelligence will improve medical treatments - 0 views

  •  
    Interesting article discussing how AI is being used in medical diagnosis.
Simon Knight

The way we train AI is fundamentally flawed - MIT Technology Review - 0 views

  •  
    Roughly put, building a machine-learning model involves training it on a large number of examples and then testing it on a bunch of similar examples that it has not yet seen. When the model passes the test, you're done. What the Google researchers point out is that this bar is too low. The training process can produce many different models that all pass the test but, and this is the crucial part, these models will differ in small, arbitrary ways, depending on things like the random values given to the nodes in a neural network before training starts, the way training data is selected or represented, the number of training runs, and so on. These small, often random, differences are typically overlooked if they don't affect how a model does on the test. But it turns out they can lead to huge variation in performance in the real world. In other words, the process used to build most machine-learning models today cannot tell which models will work in the real world and which ones won't.
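    The idea above can be sketched in a few lines. This is a hypothetical illustration, not from the article: two linear models fit the training data (and a held-out test set drawn the same way) equally well, yet disagree badly once a spurious correlation in the data breaks. All data and weights here are made up for the example.

```python
import numpy as np

# In training, feature 0 and feature 1 are always identical (spuriously
# correlated), so a model can lean on either one and still fit perfectly.
X_train = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
y_train = np.array([1.0, 2.0, 3.0])

w_a = np.array([1.0, 0.0])   # model A relies entirely on feature 0
w_b = np.array([0.0, 1.0])   # model B relies entirely on feature 1

# Both fit the training data exactly...
assert np.allclose(X_train @ w_a, y_train)
assert np.allclose(X_train @ w_b, y_train)

# ...and both pass a "test" drawn from the same distribution.
X_test = np.array([[4.0, 4.0]])
assert np.allclose(X_test @ w_a, X_test @ w_b)

# In the real world the correlation breaks, and the models diverge.
x_real = np.array([4.0, 0.0])   # feature 1 no longer tracks feature 0
print(x_real @ w_a, x_real @ w_b)   # model A says 4.0, model B says 0.0
```

    The test-set score cannot distinguish the two models; only the off-distribution input reveals that they learned different things.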
Simon Knight

'Anonymised' data can never be totally anonymous, says study | Technology | The Guardian - 0 views

  •  
    "Anonymised" data lies at the core of everything from modern medical research to personalised recommendations and modern AI techniques. Unfortunately, according to a paper, successfully anonymising data is practically impossible for any complex dataset.
Simon Knight

Data journalism's AI opportunity: the 3 different types of machine learning & how they ... - 0 views

  •  
    Some examples of how the three types of machine learning - supervised, unsupervised, and reinforcement - have already been used for journalistic purposes, with explanations of each along the way. Examples include: supervised learning to investigate doctors and sex abuse; unsupervised learning to identify motifs in Wes Anderson films; reinforcement learning to create a rock-paper-scissors player that can beat you...
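    The three types mentioned above can each be shown in miniature. This is a hypothetical sketch with invented toy data, not code from the article: supervised learning uses labelled examples (here, 1-nearest-neighbour), unsupervised learning finds structure without labels (here, a crude split at the mean), and reinforcement learning improves from reward (here, a two-armed bandit).

```python
# Supervised: learn to predict labels from labelled examples.
labelled = [(1.0, "short"), (2.0, "short"), (8.0, "long"), (9.0, "long")]

def predict(x):
    # 1-nearest-neighbour: copy the label of the closest training example.
    return min(labelled, key=lambda ex: abs(ex[0] - x))[1]

# Unsupervised: find structure in unlabelled data (split at the mean).
points = [1.0, 2.0, 8.0, 9.0]
mean = sum(points) / len(points)
clusters = [[p for p in points if p < mean], [p for p in points if p >= mean]]

# Reinforcement: learn which action pays off by trying and observing reward.
values, counts = [0.0, 0.0], [0, 0]

def reward(arm):
    return 1.0 if arm == 1 else 0.2   # arm 1 secretly pays more

for t in range(100):
    arm = t % 2 if t < 2 else values.index(max(values))  # try both, then greedy
    counts[arm] += 1
    values[arm] += (reward(arm) - values[arm]) / counts[arm]  # running average

print(predict(1.5))                  # labelled by its nearest neighbour
print(clusters)                      # two groups found without labels
print(values.index(max(values)))     # the arm the agent learned to prefer
```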
Simon Knight

Design of Hiring Algorithms Impacts Diversity | IndustryWeek - 0 views

  •  
    The use of historical data to train the AI gives 'a leg-up to people from groups who have traditionally been successful and grants fewer opportunities to minorities and women'.
Simon Knight

How marketers use algorithms to (try to) read your mind - 0 views

  •  
    Have you ever looked for a product online and then been recommended the exact thing you need to complement it? Or have you been thinking about a particular purchase, only to receive an email with that product on sale? All of this may give you a slightly spooky feeling, but what you're really experiencing is the result of complex algorithms used to predict, and in some cases even influence, your behaviour.
Simon Knight

Do computers make better bank managers than humans? - 0 views

  •  
    Algorithms are increasingly making decisions that affect ordinary people's lives. One example of this is so-called "algorithmic lending", with some companies claiming to have reduced the time it takes to approve a home loan to mere minutes. But can computers become better judges of financial risk than human bank tellers? Some computer scientists and data analysts certainly think so.
Simon Knight

What happens when misinformation is corrected? Understanding the labeling of content - 0 views

  •  
    What happens once misinformation is corrected? Is it effective at all? A major problem for social media platforms lies in the difficulty of reducing the spread of misinformation. In response, measures such as the labelling of false content and related articles have been introduced to correct users' perceptions and accuracy assessments. Although this may seem a clever initiative from social media platforms, helping users understand which information can be trusted, such measures also raise pivotal questions. What happens to posts that are false but display no tag flagging their untruthfulness? Will we be able to discern them?
Simon Knight

Opinion | The Legislation That Targets the Racist Impacts of Tech - The New York Times - 1 views

  •  
    When creating a machine-learning algorithm, designers have to make many choices: what data to train it on, what specific questions to ask, how to use predictions that the algorithm produces. These choices leave room for discrimination, particularly against people who have been discriminated against in the past. For example, training an algorithm to select potential medical students on a data set that reflects longtime biases against women and people of color may make these groups less likely to be admitted. In computing, the phrase "garbage in, garbage out" describes how poor-quality input leads to poor-quality output. In this case we might say, "White male doctors in, white male doctors out."
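    The "garbage in, garbage out" point above can be made concrete. This is a hypothetical toy sketch with invented records, not the article's example: a model trained on biased historical decisions simply learns to reproduce those decisions for each group.

```python
from collections import Counter

# Invented historical decisions: group_b applicants were mostly rejected,
# regardless of merit. The bias lives in the labels, not the algorithm.
history = [
    ("group_a", "admit"), ("group_a", "admit"), ("group_a", "reject"),
    ("group_b", "reject"), ("group_b", "reject"), ("group_b", "reject"),
]

def train(records):
    by_group = {}
    for group, outcome in records:
        by_group.setdefault(group, Counter())[outcome] += 1
    # "Learn" the majority historical outcome for each group.
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

model = train(history)
print(model)  # {'group_a': 'admit', 'group_b': 'reject'}
```

    Nothing in the code mentions protected attributes as such; the discrimination comes entirely from the historical data it was fitted to - biased decisions in, biased decisions out.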