
History Readings: Group items tagged "videos"


Javier E

'Social Order Could Collapse' in AI Era, Two Top Japan Companies Say - WSJ

  • Japan’s largest telecommunications company and the country’s biggest newspaper called for speedy legislation to restrain generative artificial intelligence, saying democracy and social order could collapse if AI is left unchecked.
  • The manifesto points to rising concern among American allies about the AI programs that U.S.-based companies have been at the forefront of developing.
  • The Japanese companies’ manifesto, while pointing to the potential benefits of generative AI in improving productivity, took a generally skeptical view of the technology.
  • Without giving specifics, it said AI tools have already begun to damage human dignity because the tools are sometimes designed to seize users’ attention without regard to morals or accuracy.
  • Unless AI is restrained, “in the worst-case scenario, democracy and social order could collapse, resulting in wars,” the manifesto said.
  • It said Japan should take measures immediately in response, including laws to protect elections and national security from abuse of generative AI.
  • The Biden administration is also stepping up oversight, invoking emergency federal powers last October to compel major AI companies to notify the government when developing systems that pose a serious risk to national security. The U.S., U.K. and Japan have each set up government-led AI safety institutes to help develop AI guidelines.
  • NTT and Yomiuri said their manifesto was motivated by concern over public discourse. The two companies are among Japan’s most influential in policy. The government still owns about one-third of NTT, formerly the state-controlled phone monopoly.
  • Yomiuri Shimbun, which has a morning circulation of about six million copies according to industry figures, is Japan’s most widely read newspaper. Under the late Prime Minister Shinzo Abe and his successors, the newspaper’s conservative editorial line has been influential in pushing the ruling Liberal Democratic Party to expand military spending and deepen the nation’s alliance with the U.S.
  • The Yomiuri’s news pages and editorials frequently highlight concerns about artificial intelligence. An editorial in December, noting the rush of new AI products coming from U.S. tech companies, said “AI models could teach people how to make weapons or spread discriminatory ideas.” It cited risks from sophisticated fake videos purporting to show politicians speaking.
  • NTT is active in AI research, and its units offer generative AI products to business customers. In March, it started offering these customers a large language model it calls “tsuzumi,” which is akin to OpenAI’s ChatGPT but is designed to use less computing power and work better in Japanese-language contexts.
Javier E

I tried out an Apple Vision Pro. It frightened me | Arwa Mahdawi | The Guardian

  • Despite all the marketed use cases, the most impressive aspect of it is the immersive video.
  • Watching a movie, however, feels like you’ve been transported into the content.
  • That raises serious questions about how we perceive the world and what we consider reality. Big tech companies are desperate to rush this technology out, but it’s not clear how much they’ve been worrying about the consequences.
  • It is clear that its widespread adoption is a matter of when, not if. There is no debate that we are moving towards a world where “real life” and digital technology seamlessly blur.
  • Over the years there have been multiple reports of people being harassed and even “raped” in the metaverse: an experience that feels scarily real because of how immersive virtual reality is. As the lines between real life and the digital world blur to a point that they are almost indistinguishable, will there be a meaningful difference between online assault and an attack in real life?
  • More broadly, spatial computing is going to alter what we consider reality.
  • Researchers from Stanford and Michigan University recently undertook a study on the Vision Pro and other “passthrough” headsets (that’s the technical term for the feature that uses the headset’s cameras to show your real-world surroundings alongside virtual content, so you can see what’s around you while using the device) and emerged with some stark warnings about how this tech might rewire our brains and “interfere with social connection”.
  • These headsets essentially give us all our private worlds and rewrite the idea of a shared reality. The cameras through which you see the world can edit your environment – you can walk to the shops wearing one, for example, and it might delete all the homeless people from your view and make the sky brighter.
  • “What we’re about to experience is, using these headsets in public, common ground disappears,”
  • “People will be in the same physical place, experiencing simultaneous, visually different versions of the world. We’re going to lose common ground.”
  • It’s not just the fact that our perception of reality might be altered that’s scary: it’s the fact that a small number of companies will have so much control over how we see the world. Think about how much influence big tech already has when it comes to content we see, and then multiply that a million times over. You think deepfakes are scary? Wait until they seem even more realistic.
  • We’re seeing a global rise of authoritarianism. If we’re not careful this sort of technology is going to massively accelerate it.
  • Being able to suck people into an alternate universe, numb them with entertainment, and dictate how they see reality? That’s an authoritarian’s dream. We’re entering an age where people can be mollified and manipulated like never before.
Javier E

Neal Stephenson's Most Stunning Prediction - The Atlantic

  • Think about any concept that we might want to teach somebody—for instance, the Pythagorean theorem. There must be thousands of old and new explanations of the Pythagorean theorem online. The real thing we need is to understand each child’s learning style so we can immediately connect them to the one out of those thousands that is the best fit for how they learn. That to me sounds like an AI kind of project, but it’s a different kind of AI application from DALL-E or large language models.
  • Right now a lot of generative AI is free, but the technology is also very expensive to run. How do you think access to generative AI might play out?
  • Stephenson: There was a bit of early internet utopianism in the book, which was written during that era in the mid-’90s when the internet was coming online. There was a tendency to assume that when all the world’s knowledge comes online, everyone will flock to it.
  • It turns out that if you give everyone access to the Library of Congress, what they do is watch videos on TikTok.
  • A chatbot is not an oracle; it’s a statistics engine that creates sentences that sound accurate. Right now my sense is that it’s like we’ve just invented transistors. We’ve got a couple of consumer products that people are starting to adopt, like the transistor radio, but we don’t yet know how the transistor will transform society.
  • We’re in the transistor-radio stage of AI. I think a lot of the ferment that’s happening right now in the industry is venture capitalists putting money into business plans, and teams that are rapidly evaluating a whole lot of different things that could be done well. I’m sure that some things are going to emerge that I wouldn’t dare try to predict, because the results of the creative frenzy of millions of people are always more interesting than what a single person can think of.
Javier E

OpenAI Just Gave Away the Entire Game - The Atlantic

  • If you’re looking to understand the philosophy that underpins Silicon Valley’s latest gold rush, look no further than OpenAI’s Scarlett Johansson debacle.
  • The situation is also a tidy microcosm of the raw deal at the center of generative AI, a technology that is built off data scraped from the internet, generally without the consent of creators or copyright owners. Multiple artists and publishers, including The New York Times, have sued AI companies for this reason, but the tech firms remain unchastened, prevaricating when asked point-blank about the provenance of their training data.
  • At the core of these deflections is an implication: The hypothetical superintelligence they are building is too big, too world-changing, too important for prosaic concerns such as copyright and attribution. The Johansson scandal is merely a reminder of AI’s manifest-destiny philosophy: This is happening, whether you like it or not.
  • Altman and OpenAI have been candid on this front. The end goal of OpenAI has always been to build a so-called artificial general intelligence, or AGI, that would, in their imagining, alter the course of human history forever, ushering in an unthinkable revolution of productivity and prosperity—a utopian world where jobs disappear, replaced by some form of universal basic income, and humanity experiences quantum leaps in science and medicine. (Or, the machines cause life on Earth as we know it to end.) The stakes, in this hypothetical, are unimaginably high—all the more reason for OpenAI to accelerate progress by any means necessary.
  • As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • In response to one question about AGI rendering jobs obsolete, Jeff Wu, an engineer for the company, confessed, “It’s kind of deeply unfair that, you know, a group of people can just build AI and take everyone’s jobs away, and in some sense, there’s nothing you can do to stop them right now.” He added, “I don’t know. Raise awareness, get governments to care, get other people to care. Yeah. Or join us and have one of the few remaining jobs. I don’t know; it’s rough.”
  • Part of Altman’s reasoning, he told Andersen, is that AI development is a geopolitical race against autocracies like China. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than that of “authoritarian governments,” he said. He noted that, in an ideal world, AI should be a product of nations. But in this world, Altman seems to view his company as akin to its own nation-state.
  • Wu’s colleague Daniel Kokotajlo jumped in with the justification. “To add to that,” he said, “AGI is going to create tremendous wealth. And if that wealth is distributed—even if it’s not equitably distributed, but the closer it is to equitable distribution, it’s going to make everyone incredibly wealthy.”
  • This is the unvarnished logic of OpenAI. It is cold, rationalist, and paternalistic. That such a small group of people should be anointed to build a civilization-changing technology is inherently unfair, they note. And yet they will carry on because they have both a vision for the future and the means to try to bring it to fruition.
  • Wu’s proposition, which he offers with a resigned shrug in the video, is telling: You can try to fight this, but you can’t stop it. Your best bet is to get on board.