
Future of Museums: Group items tagged "machine learning"

Elizabeth Merritt

Are we witnessing the dawn of post-theory science? | Artificial intelligence (AI) | The Guardian

  • we’ve realised that artificial intelligences (AIs), particularly a form of machine learning called neural networks, which learn from data without having to be fed explicit instructions, are themselves fallible (a minimal code sketch of this kind of learning follows this list).
  • The second is that humans turn out to be deeply uncomfortable with theory-free science.
  • there may still be plenty of theory of the traditional kind – that is, graspable by humans – that usefully explains much but has yet to be uncovered.
  • The theories that make sense when you have huge amounts of data look quite different from those that make sense when you have small amounts
  • The bigger the dataset, the more inconsistencies the AI learns. The end result is not a theory in the traditional sense of a precise claim about how people make decisions, but a set of claims that is subject to certain constraints.
  • theory-free predictive engines embodied by Facebook or AlphaFold.
  • “Explainable AI”, which addresses how to bridge the interpretability gap, has become a hot topic. But that gap is only set to widen and we might instead be faced with a trade-off: how much predictability are we willing to give up for interpretability?
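
The first annotation above mentions neural networks that learn from data without being fed explicit instructions. As a rough illustration of what that means, the sketch below (not from the article; all names and values are illustrative) trains a tiny two-layer network to reproduce XOR purely from four input/output examples. No rule for XOR appears anywhere in the code; the behaviour emerges from gradient descent on the data.

```python
# Minimal sketch: a tiny neural network that learns XOR from examples alone.
# Assumes only numpy; everything here is illustrative, not the article's code.
import numpy as np

rng = np.random.default_rng(0)

# The complete "dataset": four examples of XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units, randomly initialised; biases start at zero.
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass: no XOR rule anywhere, just weighted sums and squashing.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of the squared error, pushed through each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(0, keepdims=True)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(0, keepdims=True)

print(out.round(2))  # typically close to [[0], [1], [1], [0]]
```

Run it with different seeds and the learned weights differ each time while the behaviour converges, which hints at the interpretability gap the last annotation raises: the network's "theory" of XOR lives in opaque weights, not in a human-readable claim.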
Elizabeth Merritt

Who Is Working to End the Threat of AI-Generated Deepfakes

  • data poisoning techniques essentially disturb pixels within an image to create invisible noise, effectively making AI art generators incapable of generating realistic deepfakes based on the photos they’re fed (see the sketch after this list)
  • Higher resolution images work even better, he said, since they include more pixels that can be minutely disturbed.
  • Google is creating its own AI image generator called Imagen, though few people have been able to put their system through its paces. The company is also working on a generative AI video system.
  • Salman said he could imagine a future where companies, even the ones who generate the AI models, could certify that uploaded images are immunized against AI models. Of course, that isn’t much good news for the millions of images already uploaded to open source libraries like LAION, but it could potentially make a difference for any image uploaded in the future.
  • there are some AI systems that can detect deepfake videos, and there are ways to train people to detect the small inconsistencies that show a video is being faked. The question is: will there come a time when neither human nor machine can discern if a photo or video has been manipulated?
  • Back in September, OpenAI announced users could once again upload human faces to their system, but claimed they had built in ways to stop users from showing faces in violent or sexual contexts. It also asked users not to upload images of people without their consent
  • Noah asked Murati if there was a way to make sure AI programs don’t lead us to a world “where nothing is real, and everything that’s real, isn’t?”
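
The first annotation in this list describes "immunizing" photos by disturbing pixels within an invisible noise budget. The sketch below shows only the bounded-perturbation mechanics of that idea; it is an assumption-laden illustration, not the researchers' actual method, which computes the noise adversarially against a specific model rather than at random. File names are hypothetical; assumes numpy and Pillow are installed.

```python
# Hedged sketch of pixel-budget "immunization": perturb each channel by at
# most EPSILON so the change is invisible to people. A real immunizer would
# optimize this noise against a target model; random signs only demonstrate
# the bounded-perturbation mechanics.
import numpy as np
from PIL import Image

EPSILON = 4  # max change per channel, out of 255 -- imperceptible to the eye

def immunize(path_in: str, path_out: str, seed: int = 0) -> None:
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    rng = np.random.default_rng(seed)
    # Illustrative stand-in for an adversarially optimized perturbation.
    noise = rng.integers(-EPSILON, EPSILON + 1, size=img.shape, dtype=np.int16)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(path_out)

immunize("portrait.png", "portrait_immunized.png")  # hypothetical file names
```

The second annotation's point about resolution follows directly from this setup: a higher-resolution image has more pixels, so the same per-pixel budget gives the perturbation more degrees of freedom to disrupt a model.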