
Javier E

Silicon Valley's Trillion-Dollar Leap of Faith - The Atlantic

  • Tech companies like to make two grand pronouncements about the future of artificial intelligence. First, the technology is going to usher in a revolution akin to the advent of fire, nuclear weapons, and the internet.
  • And second, it is going to cost almost unfathomable sums of money.
  • Silicon Valley has already triggered tens or even hundreds of billions of dollars of spending on AI, and companies only want to spend more.
  • Their reasoning is straightforward: These companies have decided that the best way to make generative AI better is to build bigger AI models. And that is really, really expensive, requiring resources on the scale of moon missions and the interstate-highway system to fund the data centers and related infrastructure that generative AI depends on.
  • “If we’re going to justify a trillion or more dollars of investment, [AI] needs to solve complex problems and enable us to do things we haven’t been able to do before.” Today’s flagship AI models, he said, largely cannot.
  • Now a number of voices in the finance world are beginning to ask whether all of this investment can pay off. OpenAI, for its part, may lose up to $5 billion this year, almost 10 times more than what the company lost in 2022.
  • Over the past few weeks, analysts and investors at some of the world’s most influential financial institutions—including Goldman Sachs, Sequoia Capital, Moody’s, and Barclays—have issued reports that raise doubts about whether the enormous investments in generative AI will be profitable.
  • Dario Amodei, the CEO of the rival start-up Anthropic, has predicted that a single AI model (such as, say, GPT-6) could cost $100 billion to train by 2027. The global data-center buildup over the next few years could require trillions of dollars from tech companies, utilities, and other industries, according to a July report from Moody’s Ratings.
  • Generative AI has already done extraordinary things, of course: advancing drug development, solving challenging math problems, generating stunning video clips. But exactly what uses of the technology can actually make money remains unclear.
  • At present, AI is generally good at doing existing tasks—writing blog posts, coding, translating—faster and cheaper than humans can. But efficiency gains can provide only so much value, boosting the current economy but not creating a new one.
  • Right now, Silicon Valley might just functionally be replacing some jobs, such as customer service and form-processing work, with historically expensive software, which is not a recipe for widespread economic transformation.
  • McKinsey has estimated that generative AI could eventually add almost $8 trillion to the global economy every year.
  • Tony Kim, the head of technology investment at BlackRock, the world’s largest money manager, told me he believes that AI will trigger one of the most significant technological upheavals ever. “Prior industrial revolutions were never about intelligence,” he said. “Here, we can manufacture intelligence.”
  • This future is not guaranteed. Many of the productivity gains expected from AI could be both greatly overestimated and very premature, Daron Acemoglu, an economist at MIT, has found.
  • AI products’ key flaws, such as a tendency to invent false information, could make them unusable, or deployable only under strict human oversight, in certain settings: courts, hospitals, government agencies, schools.
  • Rather than proving to be a truly epoch-shifting technology, AI may well be more akin to blockchain, a very expensive tool destined to fall short of promises to fundamentally transform society and the economy.
  • Researchers at Barclays recently calculated that tech companies are collectively paying for enough AI-computing infrastructure to eventually power 12,000 different ChatGPTs. Silicon Valley could very well produce a whole host of hit generative-AI products like ChatGPT, “but probably not 12,000 of them.”
  • Even if it did, there would be nowhere near enough demand to use all those apps and actually turn a profit.
  • Some of the largest tech companies’ current spending on AI data centers will require roughly $600 billion of annual revenue to break even, of which they are currently about $500 billion short.
  • Tech proponents have responded to the criticism that the industry is spending too much, too fast, with something like religious dogma. “I don’t care” how much we spend, Altman has said. “I genuinely don’t.”
  • The industry is asking the world to engage in something like a trillion-dollar tautology: AI’s world-transformative potential justifies spending any amount of resources, because its evangelists will spend any amount to make AI transform the world.
  • In the AI era in particular, a lack of clear evidence for a healthy return on investment may not even matter. Unlike the companies that went bust in the dot-com bubble in the early 2000s, Big Tech can spend exorbitant sums of money and be largely fine.
  • Perhaps even more important in Silicon Valley than a messianic belief in AI is a terrible fear of missing out. “In the tech industry, what drives part of this is nobody wants to be left behind. Nobody wants to be seen as lagging.”
  • Go all in on AI, the thinking goes, or someone else will. Their actions evince “a sense of desperation,” the Sequoia Capital partner David Cahn writes. “If you do not move now, you will never get another chance.” Enormous sums of money are likely to continue flowing into AI for the foreseeable future, driven by a mix of unshakable confidence and all-consuming fear.
Javier E

Where Facebook's AI Slop Comes From

  • For months, I have been documenting the incredible virality of bizarre AI-generated image spam on Facebook, now commonly referred to as “AI slop,” and Meta’s seemingly complete apathy toward moderating this type of spam.
  • My investigation reveals that the AI images we see on Facebook are an evolution of a Facebook spam economy that has existed for years, driven by social media influencers, guides, services, and businesses in places like India, Pakistan, Indonesia, Thailand, and Vietnam, where the payouts generated by this content, which seem marginal by U.S. standards, go further.
  • The spam ranges from people manually creating images on their phones using off-the-shelf tools like Microsoft’s AI Image Creator to larger operations that use automated software to spam the platform. I also know that their methods work because I used them to flood Facebook with AI slop myself as a test.
  • Want to make images of giant Qurans and Bibles? There’s a guide for that. Optical illusion AI Jesus? Poor children? Poor people making intricate things out of plastic bottles? There are guides for them. Tricking people into clicking offsite? Avoiding bans? Getting an account unbanned? Posting automatically? There’s always a guide that explains every single phenomenon that I have seen while wading through AI-generated Facebook slop.
  • These influencers are teaching people to use Facebook as a job. They are essentially penetration-testing Facebook, finding ever-changing vulnerabilities in its content moderation systems and in its recommendation algorithms and then exploiting them and instructing others how to do so at scale.
  • Meta does not make its payment rates public, even to people in the program. But YouTube influencers regularly show their payment dashboards. Payments for single images that I have seen vary wildly, from a few cents per photo to hundreds of dollars per photo if it goes megaviral. The “$100 for 1,000 likes” that influencer Gyan Abishek mentioned in one of his videos seems to be greatly exaggerated, based on the various payment portals that I have seen.
  • “If you can figure out how to post content at scale, that means you can figure out how to exploit weaknesses at scale,” the former Meta employee said.
  • A Meta spokesperson told me that the company has 40,000 employees working globally on security and trust and safety today, compared to 20,000 in 2018.
  • The most popular way to make money spamming Facebook is by being paid directly by Facebook to do so via its Creator Bonus Program, which pays people who post viral content. This means that the viral “shrimp Jesus” AI and many of the bizarre things that have become a hallmark of Zombie Facebook have become popular because Meta is directly incentivizing people to post this content.  
  • One former Meta employee with direct knowledge of its content moderation and ad approval systems, who spoke on the condition of anonymity because they signed an NDA at Meta, told me that Facebook is often aware of these loopholes but layoffs have left its content moderation teams so spread thin that they cannot actually keep up with how quickly people are exploiting them.
  • Much like similar programs at TikTok and Twitter, Facebook’s Creator Bonus Program makes direct payments to people who successfully go viral on Meta platforms, and is meant to incentivize influencers and content creators to post high-quality content on Facebook, Instagram, and Threads. Meta’s bonus program is “invite only,” but countless instructional videos I saw show that consistent posting over time will eventually get an account or page invited to the program.