Future of Museums: group items tagged 'learning data'

Elizabeth Merritt

Who Is Working to End the Threat of AI-Generated Deepfakes

  • Data poisoning techniques essentially disturb pixels within an image to create invisible noise, effectively making AI art generators incapable of generating realistic deepfakes based on the photos they're fed. (A toy sketch of this idea follows the list below.)
  • Higher resolution images work even better, he said, since they include more pixels that can be minutely disturbed.
  • Google is creating its own AI image generator called Imagen, though few people have been able to put the system through its paces. The company is also working on a generative AI video system.
  • Salman said he could imagine a future where companies, even the ones who generate the AI models, could certify that uploaded images are immunized against AI models. Of course, that isn't much good news for the millions of images already uploaded to open-source libraries like LAION, but it could potentially make a difference for any image uploaded in the future.
  • There are some AI systems that can detect deepfake videos, and there are ways to train people to detect the small inconsistencies that show a video is being faked. The question is: will there come a time when neither human nor machine can discern if a photo or video has been manipulated?
  • Back in September, OpenAI announced users could once again upload human faces to their system, but claimed they had built in ways to stop users from showing faces in violent or sexual contexts. It also asked users not to upload images of people without their consent.
  • Noah asked Murati if there was a way to make sure AI programs don’t lead us to a world “where nothing is real, and everything that’s real, isn’t?”
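
A minimal sketch of the pixel-perturbation idea from the first annotation above, in Python. This is a toy illustration, not the researchers' actual method: real immunization tools compute the noise adversarially against a specific image generator, while this version adds bounded random noise just to show why shifting each pixel by a few intensity levels is invisible to the eye. The function name, file names, and epsilon value are all hypothetical.

    import numpy as np
    from PIL import Image

    def immunize(path_in, path_out, epsilon=4.0, seed=0):
        """Add a visually imperceptible perturbation to every pixel.

        Toy stand-in: real data-poisoning tools optimize this noise
        against a target model rather than drawing it at random.
        """
        rng = np.random.default_rng(seed)
        img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float32)
        # Each channel value moves by at most +/- epsilon out of 255,
        # far below what the eye can notice; per the annotation above,
        # higher-resolution images offer more pixels to perturb.
        noise = rng.uniform(-epsilon, epsilon, size=img.shape)
        out = np.clip(img + noise, 0, 255).astype(np.uint8)
        Image.fromarray(out).save(path_out)

    # Usage (assumes portrait.png exists):
    immunize("portrait.png", "portrait_immunized.png")
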
Elizabeth Merritt

Are we witnessing the dawn of post-theory science? | Artificial intelligence (AI) | The Guardian

  • We've realised that artificial intelligences (AIs), particularly a form of machine learning called neural networks, which learn from data without having to be fed explicit instructions, are themselves fallible.
  • The second is that humans turn out to be deeply uncomfortable with theory-free science.
  • There may still be plenty of theory of the traditional kind – that is, graspable by humans – that usefully explains much but has yet to be uncovered.
  • The theories that make sense when you have huge amounts of data look quite different from those that make sense when you have small amounts.
  • The bigger the dataset, the more inconsistencies the AI learns. The end result is not a theory in the traditional sense of a precise claim about how people make decisions, but a set of claims that is subject to certain constraints.
  • Theory-free predictive engines embodied by Facebook or AlphaFold.
  • “Explainable AI”, which addresses how to bridge the interpretability gap, has become a hot topic. But that gap is only set to widen and we might instead be faced with a trade-off: how much predictability are we willing to give up for interpretability?
Ruth Cuadra

Google Announces An Online Data Interpretation Class For The General Public

  • Businesses-as-schools has the potential to (further) disrupt the higher education and adult learning market. As companies edge into a role as teacher, how will they balance their own interests and the social goals of mass education?
Ruth Cuadra

Algorithms Rule the World - [INFOgraphic] | Futurist Foresight

  • Whatever the purpose, algorithms will continue to shake up the status quo.
  • I think our opportunity is to learn how they can personalize the museum experience. Remember the data value chain graph: descriptive to predictive to prescriptive. If we need to learn from another sector, adaptive learning platforms like Knewton and LearnSmart (McGraw-Hill) are a place to start. What are the analogs for guiding museum-goers? (A toy sketch of the chain follows below.)
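
A toy Python sketch of that descriptive-to-predictive-to-prescriptive chain, using made-up visitor records. The field names and the stand-in prediction rule are hypothetical; a real system would fit a model to historical data. It only shows how each stage builds on the one before it.

    from statistics import mean

    # Hypothetical per-visitor records a museum might collect.
    visits = [
        {"age": 34, "prior_visits": 5, "minutes_in_exhibit": 42},
        {"age": 19, "prior_visits": 0, "minutes_in_exhibit": 7},
        {"age": 52, "prior_visits": 12, "minutes_in_exhibit": 65},
    ]

    # Descriptive: what happened?
    avg = mean(v["minutes_in_exhibit"] for v in visits)
    print(f"Average dwell time: {avg:.1f} minutes")

    # Predictive: what is likely to happen? (Stand-in rule; a real
    # system would learn this relationship from past data.)
    def predicted_dwell(visitor):
        return 10 + 4.5 * visitor["prior_visits"]

    # Prescriptive: what should we do about it?
    def recommend(visitor):
        if predicted_dwell(visitor) > 30:
            return "offer the 90-minute guided tour"
        return "suggest the 15-minute highlights trail"

    for v in visits:
        print(recommend(v))
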
Elizabeth Merritt

Mastodon Isn't Just A Replacement For Twitter

  • We need to learn how to become more like engaged democratic citizens in the life of our networks.
  • The challenge and the opportunity of spaces like the fediverse is that it is up to us which rules we want to follow and how we make rules for ourselves.
  • We believe that it is time to embrace the old idea of subsidiarity, which dates back to early Calvinist theology and Catholic social teaching. The European Union’s founding documents use the term, too. It means that in a large and interconnected system, people in a local community should have the power to address their own problems. Some decisions are made at higher levels, but only when necessary. Subsidiarity is about achieving the right balance between local units and the larger systems.
  • On Social.coop, we don’t just post and comment about what’s on our minds; we also decide on our moderation practices and enact them through committees. The Community Working Group handles conflict resolution through accountability processes. Its members are paid with funds from our sliding-scale member dues. The Tech Working Group maintains our servers, while the Finance Working Group keeps an eye on our budget. Any member can propose new activities and policies, and we can all vote on them according to the bylaws. We adjust Mastodon’s moderation settings as we see fit.
  • A number of servers organized to collectively ban those that harbored white supremacists, like Gab, from the rest of the fediverse. Even if Gab remained active on the network, most people using Mastodon would never see its users' posts.