
Literacy with ICT / Group items tagged: deepfakes


John Evans

Deepfakes are coming. Is Big Tech ready? - 3 views

  •  
    "The word "deepfakes" refers to using deep learning, a type of machine learning, to add anyone's face and voice to video. It has been mostly found on the internet's dark corners, where some people have used it to insert ex-girlfriends and celebrities into pornography. But BuzzFeed provided a glimpse of a possible future in April when it created a video that supposedly showed Obama mocking Trump, but in reality, Obama's face was superimposed onto footage of Hollywood filmmaker Jordan Peele using deepfake technology. Deepfakes could pose a greater threat than the fake news and Photoshopped memes that littered the 2016 presidential election because they can be hard to spot and because people are -- for now -- inclined to believe that video is real. But it's not just about individual videos that will spread misinformation: it's also the possibility that videos like these will convince people that they simply can't trust anything they read, hear or see unless it supports the opinions they already hold. Experts say fake videos that will be all but impossible to identify as such are as little as 12 months away."
John Evans

Deepfake videos: Inside the Pentagon's race against disinformation - 0 views

  •  
    "When seeing is no longer believing Inside the Pentagon's race against deepfake videos Advances in artificial intelligence could soon make creating convincing fake audio and video - known as "deepfakes" - relatively easy. Making a person appear to say or do something they did not has the potential to take the war of disinformation to a whole new level. Scroll down for more on deepfakes and what the US government is doing to combat them."
John Evans

Deepfakes are getting better - but they're still easy to spot | Ars Technica - 0 views

  •  
    "Last week, Mona Lisa smiled. A big, wide smile, followed by what appeared to be a laugh and the silent mouthing of words that could only be an answer to the mystery that had beguiled her viewers for centuries. A great many people were unnerved. Ars Technica Join Ars Technica and Get Our Best Tech Stories DELIVERED STRAIGHT TO YOUR INBOX. SIGN ME UP Will be used in accordance with our Privacy Policy Mona's "living portrait," along with likenesses of Marilyn Monroe, Salvador Dali, and others, demonstrated the latest technology in deepfakes-seemingly realistic video or audio generated using machine learning. Developed by researchers at Samsung's AI lab in Moscow, the portraits display a new method to create credible videos from a single image. With just a few photographs of real faces, the results improve dramatically, producing what the authors describe as "photorealistic talking heads." The researchers (creepily) call the result "puppeteering," a reference to how invisible strings seem to manipulate the targeted face. And yes, it could, in theory, be used to animate your Facebook profile photo. But don't freak out about having strings maliciously pulling your visage anytime soon. "Nothing suggests to me that you'll just turnkey use this for generating deepfakes at home. Not in the short-term, medium-term, or even the long-term," says Tim Hwang, director of the Harvard-MIT Ethics and Governance of AI Initiative. The reasons have to do with the high costs and technical know-how of creating quality fakes-barriers that aren't going away anytime soon."
John Evans

New AI fake text generator may be too dangerous to release, say creators | Technology |... - 2 views

  •  
    "The creators of a revolutionary AI system that can write news stories and works of fiction - dubbed "deepfakes for text" - have taken the unusual step of not releasing their research publicly, for fear of potential misuse. OpenAI, an nonprofit research company backed by Elon Musk, Reid Hoffman, Sam Altman, and others, says its new AI model, called GPT2 is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough."
John Evans

From fake news to fabricated video, can we preserve our shared reality? - CSMonitor.com - 1 views

  •  
    "FEBRUARY 22, 2018 -From the instant replay that decides a game to the bodycam footage that clinches a conviction, people tend to trust video evidence as an arbiter of truth. But that faith could soon become quaint, as machine learning is enabling ordinary users to create fabricated videos of just about anyone doing just about anything. Earlier this month, the popular online forum Reddit shut down r/deepfakes, a subreddit discussion board devoted to using open-source machine-learning tools to insert famous faces into pornographic videos. Observers say this episode represents just one of the many ways that the this technology could fuel social problems, particularly in an age of political polarization. Combating the negative effects of fabricated video will require a shift among both news outlets and news consumers, say experts.  "Misinformation has been prevalent in our politics historically," says Brendan Nyhan, a political scientist at Dartmouth College in Hanover, N.H., who specializes in political misperceptions. "But it is true that technology can facilitate new forms of rumors and other kinds of misinformation and help them spread more rapidly than ever before." So-called fake news has been around long before Macedonian teenagers began enriching themselves by feeding false stories to social media users. In 1782, Benjamin Franklin printed a falsified supplement to the Boston Independent Chronicle maligning Seneca Indians in an attempt to influence public opinion during peace negotiations with Britain."