Home / History Readings / Group items matching "san francisco" in title, tags, annotations or url

Javier E

AI is already writing books, websites and online recipes - The Washington Post

  • Experts say those books are likely just the tip of a fast-growing iceberg of AI-written content spreading across the web as new language software allows anyone to rapidly generate reams of prose on almost any topic. From product reviews to recipes to blog posts and press releases, human authorship of online material is on track to become the exception rather than the norm.
  • Semrush, a leading digital marketing firm, recently surveyed its customers about their use of automated tools. Of the 894 who responded, 761 said they’ve at least experimented with some form of generative AI to produce online content, while 370 said they now use it to help generate most if not all of their new content, according to Semrush Chief Strategy Officer Eugene Levin.
  • What that may mean for consumers is more hyper-specific and personalized articles — but also more misinformation and more manipulation, about politics, products they may want to buy and much more.
  • As AI writes more and more of what we read, vast, unvetted pools of online data may not be grounded in reality, warns Margaret Mitchell, chief ethics scientist at the AI start-up Hugging Face
  • “The main issue is losing track of what truth is,” she said. “Without grounding, the system can make stuff up. And if it’s that same made-up thing all over the world, how do you trace it back to what reality is?”
  • a raft of online publishers have been using automated writing tools based on ChatGPT’s predecessors, GPT-2 and GPT-3, for years. That experience shows that a world in which AI creations mingle freely and sometimes imperceptibly with human work isn’t speculative; it’s flourishing in plain sight on Amazon product pages and in Google search results.
  • “If you have a connection to the internet, you have consumed AI-generated content,” said Jonathan Greenglass, a New York-based tech investor focused on e-commerce. “It’s already here.”
  • “In the last two years, we’ve seen this go from being a novelty to being pretty much an essential part of the workflow,”
  • the news credibility rating company NewsGuard identified 49 news websites across seven languages that appeared to be mostly or entirely AI-generated.
  • The sites sport names like Biz Breaking News, Market News Reports, and bestbudgetUSA.com; some employ fake author profiles and publish hundreds of articles a day, the company said. Some of the news stories are fabricated, but many are simply AI-crafted summaries of real stories trending on other outlets.
  • Ingenio, the San Francisco-based online publisher behind sites such as horoscope.com and astrology.com, is among those embracing automated content. While its flagship horoscopes are still human-written, the company has used OpenAI’s GPT language models to launch new sites such as sunsigns.com, which focuses on celebrities’ birth signs, and dreamdiary.com, which interprets highly specific dreams.
  • Ingenio used to pay humans to write birth sign articles on a handful of highly searched celebrities like Michael Jordan and Ariana Grande, said Josh Jaffe, president of its media division. But delegating the writing to AI allows sunsigns.com to cheaply crank out countless articles on not-exactly-A-listers
  • In the past, Jaffe said, “We published a celebrity profile a month. Now we can do 10,000 a month.”
  • It isn’t just text. Google users have recently posted examples of the search engine surfacing AI-generated images. For instance, a search for the American artist Edward Hopper turned up an AI image in the style of Hopper, rather than his actual art, as the first result.
  • Jaffe said he isn’t particularly worried that AI content will overwhelm the web. “It takes time for this content to rank well” on Google, he said — meaning that it appears on the first page of search results for a given query, which is critical to attracting readers. And it works best when it appears on established websites that already have a sizable audience: “Just publishing this content doesn’t mean you have a viable business.”
  • Google clarified in February that it allows AI-generated content in search results, as long as the AI isn’t being used to manipulate a site’s search rankings. The company said its algorithms focus on “the quality of content, rather than how content is produced.”
  • Reputations are at risk if the use of AI backfires. CNET, a popular tech news site, took flak in January when fellow tech site Futurism reported that CNET had been using AI to create articles or add to existing ones without clear disclosures. CNET subsequently investigated and found that many of its 77 AI-drafted stories contained errors.
  • But CNET’s parent company, Red Ventures, is forging ahead with plans for more AI-generated content, which has also been spotted on Bankrate.com, its popular hub for financial advice. Meanwhile, CNET in March laid off a number of employees, a move it said was unrelated to its growing use of AI.
  • BuzzFeed, which pioneered a media model built around reaching readers directly on social platforms like Facebook, announced in January it planned to make “AI inspired content” part of its “core business,” such as using AI to craft quizzes that tailor themselves to each reader. BuzzFeed announced last month that it is laying off 15 percent of its staff and shutting down its news division, BuzzFeed News.
  • it’s finding traction in the murkier worlds of online clickbait and affiliate marketing, where success is less about reputation and more about gaming the big tech platforms’ algorithms.
  • That business is driven by a simple equation: how much it costs to create an article vs. how much revenue it can bring in. The main goal is to attract as many clicks as possible, then serve the readers ads worth just fractions of a cent on each visit — the classic form of clickbait
  • In the past, such sites often outsourced their writing to businesses known as “content mills,” which harness freelancers to generate passable copy for minimal pay. Now, some are bypassing content mills and opting for AI instead.
  • “Previously it would cost you, let’s say, $250 to write a decent review of five grills,” Semrush’s Levin said. “Now it can all be done by AI, so the cost went down from $250 to $10.”
  • The problem, Levin said, is that the wide availability of tools like ChatGPT means more people are producing similarly cheap content, and they’re all competing for the same slots in Google search results or Amazon’s on-site product reviews
  • So they all have to crank out more and more article pages, each tuned to rank highly for specific search queries, in hopes that a fraction will break through. The result is a deluge of AI-written websites, many of which are never seen by human eyes.
  • Jaffe said his company discloses its use of AI to readers, and he promoted the strategy at a recent conference for the publishing industry. “There’s nothing to be ashamed of,” he said. “We’re actually doing people a favor by leveraging generative AI tools” to create niche content that wouldn’t exist otherwise.
  • The rise of AI is already hurting the business of Textbroker, a leading content platform based in Germany and Las Vegas, said Jochen Mebus, the company’s chief revenue officer. While Textbroker prides itself on supplying credible, human-written copy on a huge range of topics, “People are trying automated content right now, and so that has slowed down our growth,”
  • Mebus said the company is prepared to lose some clients who are just looking to make a “fast dollar” on generic AI-written content. But it’s hoping to retain those who want the assurance of a human touch, while it also trains some of its writers to become more productive by employing AI tools themselves.
  • He said a recent survey of the company’s customers found that 30 to 40 percent still want exclusively “manual” content, while a similar-size chunk is looking for content that might be AI-generated but human-edited to check for tone, errors and plagiarism.
  • Levin said Semrush’s clients have also generally found that AI is better used as a writing assistant than a sole author. “We’ve seen people who even try to fully automate the content creation process,” he said. “I don’t think they’ve had really good results with that. At this stage, you need to have a human in the loop.”
  • For Cowell, whose book title appears to have inspired an AI-written copycat, the experience has dampened his enthusiasm for writing. “My concern is less that I’m losing sales to fake books, and more that this low-quality, low-priced, low-effort writing is going to have a chilling effect on humans considering writing niche technical books in the future,”
  • It doesn’t help, he added, knowing that “any text I write will inevitably be fed into an AI system that will generate even more competition.”
  • Amazon removed the impostor book, along with numerous others by the same publisher, after The Post contacted the company for comment.
  • AI-written books aren’t against Amazon’s rules, per se, and some authors have been open about using ChatGPT to write books sold on the site.
  • “Amazon is constantly evaluating emerging technologies and innovating to provide a trustworthy shopping experience for our customers,”
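
The clickbait economics Levin describes in the excerpts above (a $250 human-written review versus roughly $10 of AI output, recouped through ads worth "fractions of a cent" per visit) can be sketched as a break-even calculation. The two article costs are quoted in the text; the 0.2-cents-per-click revenue figure is a hypothetical assumption for illustration:

```python
# Break-even sketch for the clickbait economics described above.
# The $250 and $10 article costs are quoted in the text; the 0.2-cent
# revenue per click is a hypothetical assumption.

def clicks_to_break_even(article_cost_cents: int, revenue_per_click_millicents: int) -> int:
    """Smallest number of ad clicks that recoups the article's cost.

    Costs are in cents and revenue in thousandths of a cent, so the
    arithmetic stays in exact integers."""
    cost_millicents = article_cost_cents * 1000
    return -(-cost_millicents // revenue_per_click_millicents)  # ceiling division

# $250 human-written review vs. $10 AI-generated one, at 0.2 cents per click:
print(clicks_to_break_even(25_000, 200))  # 125000 clicks
print(clicks_to_break_even(1_000, 200))   # 5000 clicks
```

At these assumed rates, the AI-written page pays for itself with 25 times fewer clicks, which is why the equation favors volume over quality.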
As Xi Heads to San Francisco, Chinese Propaganda Embraces America - The New York Times

  • Now, the tone used to discuss the United States has suddenly shifted
  • Xinhua, the state news agency, on Monday published a lengthy article in English about the “enduring strength” of Mr. Xi’s affection for ordinary Americans.
  • “More delightful moments unfolded when Xi showed up to watch an N.B.A. game,” the article continued, describing a visit by Mr. Xi to the United States in 2012. “He remained remarkably focused on the game.”
  • Separately, Xinhua has published a five-part series in Chinese on “Getting China-U.S. Relations Back on Track.”
  • Beijing, in particular, may be motivated to play up the meeting to reassure investors and foreign businesses
  • “Propaganda of this type is not meant for persuasion — it is not persuasive at all,” Professor Chen said. “It is mainly designed for signaling, in the hope that recipients will get the signal and implement the proper response, which is investment, or resumption of exchanges.”
  • many Chinese social media users have taken note of the abrupt turn — and have been left reeling, or at least wryly amused
  • Under another post showing true, recent state media editorials promoting U.S.-China relations, a commenter wrote: “So, going forward, do we or don’t we need to hate America? So unclear.”
  • On Guancha.cn, a nationalistic news and commentary site, columnists have noted that both countries are making short-term concessions for their own long-term strategic gain.
  • Even the most flowery Chinese articles have drawn distinctions between warm ties between American and Chinese people, and their governments; some state media outlets have continued to warn that the outcome of the California meeting will hinge on the United States, in line with Beijing’s stance that the strained relationship is entirely Washington’s fault.
  • On the future of U.S.-China relations, Professor Wang wrote, “I am only cautious, not optimistic.”
Where we build homes helps explain America's political divide - The Washington Post

  • Zoning, NIMBYism and regulations — “all those things matter” when you’re trying to build housing, Herbert said. But land scarcity is the most important.
  • So what’s happening now “is a lot more infill of single-family housing in closer-in communities, where you’re not going to have room for large-scale developments and where the land is going to be worth a lot more,” Herbert said. Single-family land scarcity, he said, “has been a big factor keeping the supply down.”
  • In a blockbuster 2010 paper, Albert Saiz, an economist at the Massachusetts Institute of Technology, analyzed satellite data to estimate how much land was actually available for development within a 50-kilometer (31-mile) radius of each major U.S. city. He found that available land, when combined with measures of land-use regulation, could go “very far to explain the evolution of prices” from 1970 to 2000.
  • Saiz even took it a step further, showing that a lack of land can cause stricter regulation. If a place has less room to build — due to mountains, wetlands or oceans, for example — each square foot of dirt costs more. Homeowners also may push local officials to regulate the land more aggressively in an effort to protect their investment and safeguard a scarce resource.
  • From 2013 to 2018, zoning and related restrictions added about $410,000 to the cost of a quarter-acre lot in the San Francisco metro area, $199,000 in Los Angeles, $175,000 in Seattle and $152,000 in greater New York
  • The comparable figure for Phoenix sat at $22,000. Atlanta was $15,000. Dallas was a mere $2,000. Not coincidentally, perhaps, many such Sun Belt metros have produced floods of new housing.
  • But why do blue cities tend to have less land available for development? Perhaps it works the other way: Perhaps land-restricted places tend to evolve into Democratic strongholds.
  • We don’t have data for this, but logically higher home prices and regulation in land-light cities should make much of their housing accessible only to educated, well-compensated professionals, right? In this simple mental model, coastal cities have less room and thus, by definition, attract the elite. And in American politics right now, Democrats dominate the professional classes.
  • We’ve long heard Democrats derided as the “coastal elite,” but we never stopped to wonder why all those blue counties hugged the coasts in the first place. Exceptions are easy to find, but the subtle effects of coastal land shortages, over time, could help explain that most prominent feature of America’s political geography.
  • That effect could be compounded, Saiz told us, by the simple truth that coastlines, lakes and other natural obstacles to construction make cities more beautiful, and thus more desirable to those who can afford such amenities, as his research with Gerald Carlino of the Federal Reserve Bank of Philadelphia shows. And the presence of an educated workforce will cause the city’s economy to grow faster, further expanding economic divides.
  • “High-amenity areas are more desirable and tend to attract the highly skilled,” Saiz said. “These metros tend to have harder land constraints to start with, which begets more expensive housing prices which, in turn, activate more NIMBY activism to protect that wealth.”
Opinion | The Complicated Truth About Recycling - The New York Times

  • Recycling has been called a myth and beyond fixing as we’ve learned that recyclables are being shipped overseas and dumped (true), are leaching toxic chemicals and microplastics (true) and are being used by Big Oil to mislead consumers about the problems with plastics.
  • Recycling is real. I’ve seen it. For the past four years, I’ve traveled the world writing a book about the waste industry, visiting paper mills and e-waste shredders and bottle plants. I’ve visited all kinds of plastics recycling facilities, from gleaming new factories in Britain to smoky, flake-filled shredding operations in India
  • While I’ve seen how recycling has become inseparable from corporate greenwashing, we shouldn’t be so quick to cast it aside. In the short term, at least, it might be the best option we have against our growing waste crisis.
  • One of the most fundamental problems with recycling is that we don’t really know how much of it actually happens because of an opaque global system that too often relies on measuring the material that arrives at the front door of the facility rather than what comes out
  • What we do know is that with plastics, at least, the amount being recycled is much less than most of us assumed.
  • According to the Environmental Protection Agency, two of the most commonly used plastics in America — PET (used in soda bottles) and HDPE (used in milk jugs, among other things) — are “widely recycled,” but the rate is really only about 30 percent
  • Other plastics, like soft wraps and films, sometimes called No. 4 plastics, are not widely accepted in curbside collections.
  • The E.P.A. estimates that just 2.7 percent of polypropylene — the hard plastic known as No. 5, used to make furniture and cleaning bottles — was reprocessed in 2018
  • Crunch the sums, and only around 10 percent of plastics in the United States are recycled
  • the landfill-happy United States is far worse at recycling than other major economies. According to the E.P.A., America’s national recycling rate, just 32 percent, is lower than Britain’s 44 percent, Germany’s 48 percent and South Korea’s 58 percent.
  • the scientific research over decades has repeatedly found that in almost all cases, recycling our waste materials has significant environmental benefits
  • We need clearer labeling of what is and is not actually recyclable and transparency around true recycling rates
  • Recycling steel, for example, saves 72 percent of the energy of producing new steel; it also cuts water use by 40 percent
  • Recycling a ton of aluminum requires only about 5 percent of the energy and saves almost nine tons of bauxite from being hauled from mines
  • Even anti-plastics campaigners agree that recycling plastics, like PET, is better for the climate than burning them — a likely outcome if recycling efforts were to be abandoned.
  • The economic perks are significant, too. Recycling creates as many as 50 jobs for every one created by sending waste to landfills; the E.P.A. estimates that recycling and reuse accounted for 681,000 jobs in the United States alone.
  • That’s even more true in the developing world, where waste pickers rely on recycling for income.
  • before we abandon recycling, we should first try to fix it
  • Companies should be phasing out products that can’t be recycled and designing more products that are easier to recycle and reuse rather than leaving sustainability to their marketing departments.
  • Lawmakers can help by passing new laws, as cities like Seattle and San Francisco have done, to help increase recycling rates and drive investment into the sector.
  • Governments can also ban or restrict many problematic plastics to reduce the amount of needless plastics in our everyday lives, for instance in food packaging
  • According to a 2015 analysis by scientists at the University of Southampton in England, recycling a majority of commonly tossed-out waste materials resulted in a net reduction in greenhouse gas emissions
  • Greater safety regulations are needed to reduce toxic chemical contents and microplastic pollution caused by the recycling process.
  • consumers can do their bit by buying recycled products (and buying less and reusing more).
  • Yes, recycling is broken, but abandon it too soon, and we risk going back to the system of decades past, in which we dumped and burned our garbage without care, in our relentless quest for more. Do that, and like the recycling symbol itself, we really will be going in circles.
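
The "crunch the sums" step in the excerpts above combines per-plastic rates (about 30 percent for PET and HDPE, 2.7 percent for polypropylene) into a roughly 10 percent overall figure, which is a tonnage-weighted average. A minimal sketch, with hypothetical tonnage shares since the op-ed does not give the underlying volumes:

```python
# Hypothetical illustration of how per-plastic recycling rates combine
# into an overall rate. The ~30% (PET/HDPE) and 2.7% (polypropylene)
# rates are from the text; the tonnage shares are invented.

def overall_rate(streams):
    """Tonnage-weighted average recycling rate.

    streams: list of (tons_generated, recycling_rate) pairs."""
    total_tons = sum(tons for tons, _ in streams)
    recycled_tons = sum(tons * rate for tons, rate in streams)
    return recycled_tons / total_tons

streams = [
    (10.0, 0.30),   # PET and HDPE: "widely recycled," ~30%
    (8.0, 0.027),   # polypropylene (No. 5)
    (20.0, 0.02),   # films, wraps and mixed plastics: rarely accepted
]
print(round(overall_rate(streams), 2))  # ~0.10
```

The point of the sketch: because the low-rate streams dominate by volume, the headline rates for bottles and jugs overstate how much plastic is actually recycled overall.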
Before OpenAI, Sam Altman was fired from Y Combinator by his mentor - The Washington Post

  • Four years ago, Altman’s mentor, Y Combinator founder Paul Graham, flew from the United Kingdom to San Francisco to give his protégé the boot, according to three people familiar with the incident, which has not been previously reported
  • Altman’s clashes, over the course of his career, with allies, mentors and even members of a corporate structure he endorsed, are not uncommon in Silicon Valley, amid a culture that anoints wunderkinds, preaches loyalty and scorns outside oversight.
  • Though a revered tactician and chooser of promising start-ups, Altman had developed a reputation for favoring personal priorities over official duties and for an absenteeism that rankled his peers and some of the start-ups he was supposed to nurture
  • The largest of those priorities was his intense focus on growing OpenAI, which he saw as his life’s mission, one person said.
  • A separate concern, unrelated to his initial firing, was that Altman personally invested in start-ups he discovered through the incubator using a fund he created with his brother Jack — a kind of double-dipping for personal enrichment that was practiced by other founders and later limited by the organization.
  • “It was the school of loose management that is all about prioritizing what’s in it for me,” said one of the people.
  • a person familiar with the board’s proceedings said the group’s vote was rooted in worries he was trying to avoid any checks on his power at the company — a trait evidenced by his unwillingness to entertain any board makeup that wasn’t heavily skewed in his favor.
  • Graham had surprised the tech world in 2014 by tapping Altman, then in his 20s, to lead the vaunted Silicon Valley incubator. Five years later, he flew across the Atlantic with concerns that the company’s president put his own interests ahead of the organization — worries that would be echoed by OpenAI’s board
  • The same qualities have made Altman an unparalleled fundraiser, a consummate negotiator, a powerful leader and an unwanted enemy, winning him champions in former Google Chairman Eric Schmidt and Airbnb CEO Brian Chesky.
  • “Ninety plus percent of the employees of OpenAI are saying they would be willing to move to Microsoft because they feel Sam’s been mistreated by a rogue board of directors,” said Ron Conway, a prominent venture capitalist who became friendly with Altman shortly after he founded Loopt, a location-based social networking start-up, in 2005. “I’ve never seen this kind of loyalty anywhere.”
  • But Altman’s personal traits — in particular, the perception that he was too opportunistic even for the go-getter culture of Silicon Valley — have at times led him to alienate even some of his closest allies, say six people familiar with his time in the tech world.
  • Altman’s career arc speaks to the culture of Silicon Valley, where cults of personality and personal networks often take the place of stronger management guardrails — from Sam Bankman-Fried’s FTX to Elon Musk’s Twitter
  • But some of Altman’s former colleagues recount issues that go beyond a founder angling for power. One person who has worked closely with Altman described a pattern of consistent and subtle manipulation that sows division between individuals.
  • AI executives, start-up founders and powerful venture capitalists had become aligned in recent months, concerned that Altman’s negotiations with regulators were dangerous to the advancement of the field. Although Microsoft, which owns a 49 percent stake in OpenAI, has long urged regulators to implement guardrails, investors have fixated on Altman, who has captivated legislators and embraced his regular summons to Capitol Hill.
BOOM: Google Loses Antitrust Case - BIG by Matt Stoller

  • It’s a long and winding road for Epic. The firm lost the Apple case, which is on appeal, but got the Google case to a jury, along with several other plaintiffs. Nearly every other firm challenging Google gradually dropped out of the case, getting special deals from the search giant in return for abandoning their claims. But Sweeney was righteous, and believed that Google helped ruin the internet. He didn’t ask for money or a special deal, instead seeking to have Judge James Donato force Google to make good on its “broken promise,” which he characterized as “an open, competitive Android ecosystem for all users and industry participants.”
  • Specifically, Sweeney asked for the right for firms to have their own app stores, and the ability to use their own billing systems. Basically, he wants to crush Google’s control over the Android phone system. And I suspect he just did. You can read the verdict here.
  • Google is likely to be in trouble now, because it is facing multiple antitrust cases, and these kinds of decisions have a bandwagon effect. The precedent is set; in every case going forward, the firm will now be seen as presumed guilty, since a jury found Google has violated antitrust laws. Judges are cautious, and are generally afraid of being the first to make a precedent-setting decision. Now they won’t have to. In fact, judges and juries will now have to find a reason to rule for Google. If, say, Judge Amit Mehta in D.C., facing a very similar fact-pattern, chooses to let Google off the hook, well, he’ll look pretty bad.
  • There are a few important take-aways. First, this one didn’t come from the government, it was a private case by a video game maker that sued Google over its terms for getting access to the Google Play app store for Android, decided not by a fancy judge with an Ivy League degree but by a jury of ordinary people in San Francisco. In other words, private litigation, the ‘ambulance-chasing’ lawyers, are vital parts of our justice system.
  • Second, juries matter, even if they are riskier for everyone involved. It’s kind of like a mini poll, and the culture is ahead of the cautious legal profession. This quick decision is a sharp contrast with the 6-month delay to an opinion in the search case that Judge Mehta sought in the D.C. trial.
  • Third, tying claims, which is a specific antitrust violation, are good law. Tying means forcing someone to buy an unrelated product in order to access the actual product they want to buy. The specific legal claim here was about how Google forced firms relying on its Google Play app store to also use its Google Play billing service, which charges an inflated price of 30% of the price of an app. Tying is pervasive throughout the economy, so you can expect more suits along these lines.
  • And finally, big tech is not above the law. This loss isn’t just the first antitrust failure for Google, it’s the first antitrust loss for any big tech firm. I hear a lot from skeptics that the fix is in, that the powerful will always win, that justice in our system is a mirage. But that just isn’t true. A jury of our peers just made that clear.
News Publishers See Google's AI Search Tool as a Traffic-Destroying Nightmare - WSJ

  • A task force at the Atlantic modeled what could happen if Google integrated AI into search. It found that 75% of the time, the AI-powered search would likely provide a full answer to a user’s query and the Atlantic’s site would miss out on traffic it otherwise would have gotten. 
  • What was once a hypothetical threat is now a very real one. Since May, Google has been testing an AI product dubbed “Search Generative Experience” on a group of roughly 10 million users, and has been vocal about its intention to bring it into the heart of its core search engine. 
  • Google’s embrace of AI in search threatens to throw off that delicate equilibrium, publishing executives say, by dramatically increasing the risk that users’ searches won’t result in them clicking on links that take them to publishers’ sites
  • Google’s generative-AI-powered search is the true nightmare for publishers. Across the media world, Google generates nearly 40% of publishers’ traffic, accounting for the largest share of their “referrals,” according to a Wall Street Journal analysis of data from measurement firm SimilarWeb. 
  • “AI and large language models have the potential to destroy journalism and media brands as we know them,” said Mathias Döpfner, chairman and CEO of Axel Springer,
  • His company, one of Europe’s largest publishers and the owner of U.S. publications Politico and Business Insider, this week announced a deal to license its content to generative-AI specialist OpenAI.
  • publishers have seen enough to estimate that they will lose between 20% and 40% of their Google-generated traffic if anything resembling recent iterations rolls out widely. Google has said it is giving priority to sending traffic to publishers.
  • The rise of AI is the latest and most anxiety-inducing chapter in the long, uneasy marriage between Google and publishers, which have been bound to each other through a basic transaction: Google helps publishers be found by readers, and publishers give Google information—millions of pages of web content—to make its search engine useful.
  • Already, publishers are reeling from a major decline in traffic sourced from social-media sites, as both Meta and X, the former Twitter, have pulled away from distributing news.
  • Google’s AI search was trained, in part, on their content and other material from across the web—without payment.
  • Google’s view is that anything available on the open internet is fair game for training AI models. The company cites a legal doctrine that allows portions of a copyrighted work to be used without permission for cases such as criticism, news reporting or research.
  • The changes risk damaging website owners that produce the written material vital to both Google’s search engine and its powerful AI models.
  • “If Google kills too many publishers, it can’t build the LLM,”
  • Barry Diller, chairman of IAC and Expedia, said all major AI companies, including Google and rivals like OpenAI, have promised that they would continue to send traffic to publishers’ sites. “How they do it, they’ve been very clear to us and others, they don’t really know,” he said.
  • All of this has led Google and publishers to carry out an increasingly complex dialogue. In some meetings, Google is pitching the potential benefits of the other AI tools it is building, including one that would help with the writing and publishing of news articles
  • At the same time, publishers are seeking reassurances from Google that it will protect their businesses from an AI-powered search tool that will likely shrink their traffic, and they are making clear they expect to be paid for content used in AI training.
  • “Any attempts to estimate the traffic impact of our SGE experiment are entirely speculative at this stage as we continue to rapidly evolve the user experience and design, including how links are displayed, and we closely monitor internal data from our tests,” Reid said.
  • Many of IAC’s properties, like Brides, Investopedia and the Spruce, get more than 80% of their traffic from Google
  • Google began rolling out the AI search tool in May by letting users opt into testing. Using a chat interface that can understand longer queries in natural language, it aims to deliver what it calls “snapshots”—or summaries—of the answer, instead of the more link-heavy responses it has traditionally served up in search results. 
  • Google at first didn’t include links within the responses, instead placing them in boxes to the right of the passage. It later added in-line links following feedback from early users. Some more recent versions require users to click a button to expand the summary before getting links. Google doesn’t describe the links as source material but rather as corroboration of its summaries.
  • During Chinese President Xi Jinping’s recent visit to San Francisco, the Google AI search bot responded to the question “What did President Xi say?” with two quotes from his opening remarks. Users had to click on a little red arrow to expand the response and see a link to the CNBC story that the remarks were taken from. The CNBC story also sat over on the far right-hand side of the screen in an image box.
  • The same query in Google’s regular search engine turned up a different quote from Xi’s remarks, but a link to the NBC News article it came from was beneath the paragraph, atop a long list of news stories from other sources like CNN and PBS.
  • Google’s Reid said AI is the future of search and expects its new tool to result in more queries.
  • “The number of information needs in the world is not a fixed number,” she said. “It actually grows as information becomes more accessible, becomes easier, becomes more powerful in understanding it.”
  • Testing has suggested that AI isn’t the right tool for answering every query, she said.
  • Many publishers are opting to insert code in their websites to block AI tools from “crawling” them for content. But blocking Google is thorny, because publishers must allow their sites to be crawled in order to be indexed by its search engine—and therefore visible to users searching for their content. To some in the publishing world there was an implicit threat in Google’s policy: Let us train on your content or you’ll be hard to find on the internet.
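The blocking described in that annotation is typically done with a site’s robots.txt file. A hedged sketch of what such a file might look like — the exact crawler tokens any given publisher lists will vary — using OpenAI’s GPTBot and Google’s Google-Extended token (which opts a site out of AI training without affecting Search indexing):

```
# robots.txt -- allow normal search indexing but opt out of AI training crawls
User-agent: GPTBot
Disallow: /

# Google-Extended controls AI-training use; it does not affect Search indexing
User-agent: Google-Extended
Disallow: /

# Leave the ordinary search crawler alone so pages stay visible in results
User-agent: Googlebot
Allow: /
```

This split is exactly why publishers call blocking Google “thorny”: before Google-Extended existed, the same Googlebot crawl fed both Search and AI training.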
Javier E

Luiz Barroso, Who Supercharged Google's Reach, Dies at 59 - The New York Times - 0 views

  • When Google arrived in the late 1990s, hundreds of thousands of people were instantly captivated by its knack for taking them wherever they wanted to go on the internet. Developed by the company’s founders Larry Page and Sergey Brin, the algorithm that drove the site seemed to work like magic.
  • as the internet search engine expanded its reach to billions of people over the next decade, it was driven by another major technological advance that was less discussed, though no less important: the redesign of Google’s giant computer data centers.
  • Led by a Brazilian named Luiz Barroso, a small team of engineers rebuilt the warehouse-size centers so that they behaved like a single machine — a technological shift that would change the way the entire internet was built, allowing any site to reach billions of people almost instantly and much more consistently.
  • ...8 more annotations...
  • Before the rise of Google, internet companies stuffed their data centers with increasingly powerful and expensive computer servers, as they struggled to reach more and more people. Each server delivered the website to a relatively small group of people. And if the server died, those people were out of luck.
  • Dr. Barroso realized that the best way to distribute a wildly popular website like Google was to break it into tiny pieces and spread them evenly across an array of servers. Rather than each server delivering the site to a small group of people, the entire data center delivered the site to its entire audience.
  • “In other words, we must treat the data center itself as one massive warehouse-scale computer.”
  • Widespread outages became a rarity, especially as Dr. Barroso and his team expanded these ideas across multiple data centers. Eventually, Google’s entire global network of data centers behaved as a single machine.
  • By the mid-1990s, he was working as a researcher in a San Francisco Bay Area lab operated by the Digital Equipment Corporation, one of the computer giants of the day.
  • There, he helped create multi-core computer chips — microprocessors containing multiple processing cores working in tandem. A more efficient way of running computer software, such chips are now a vital part of almost any new computer.
  • At first, Dr. Barroso worked on software. But as Dr. Hölzle realized that Google would also need to build its own hardware, he tapped Dr. Barroso to lead the effort. Over the next decade, as it pursued his warehouse-size computer, Google built its own servers, data storage equipment and networking hardware.
  • For years, this work was among Google’s most closely guarded secrets. The company saw it as a competitive advantage. But by the 2010s, companies like Amazon and Facebook were following the example set by Dr. Barroso and his team. Soon, the world’s leading computer makers were building and selling the same kind of low-cost hardware, allowing any internet company to build an online empire the way Google had.
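The annotations above describe the core idea only in broad strokes: break a site’s data into pieces and spread requests evenly across a pool of servers, so the whole data center — not any one machine — serves the audience. A minimal sketch of hash-based distribution (hypothetical names and a deliberately tiny pool; not Google’s actual design) might look like:

```python
# Sketch of spreading request load evenly across a server pool by
# hashing each request key. Illustrative only -- real systems add
# replication, consistent hashing, and failure handling on top.
import hashlib

SERVERS = [f"server-{i}" for i in range(8)]

def pick_server(key: str) -> str:
    """Deterministically map a request key to one server in the pool."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

# Identical queries always land on the same server, while distinct
# queries scatter across the pool -- so no single machine must hold
# the whole index, and losing one machine loses only a slice of load.
queries = ["weather", "news", "maps", "weather"]
assignments = [pick_server(q) for q in queries]
```

Because the mapping is deterministic, a repeated query reuses the same server’s cached slice, and adding machines simply redivides the key space.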
Javier E

Google Devising Radical Search Changes to Beat Back AI Rivals - The New York Times - 0 views

  • Google’s employees were shocked when they learned in March that the South Korean consumer electronics giant Samsung was considering replacing Google with Microsoft’s Bing as the default search engine on its devices.
  • Google’s reaction to the Samsung threat was “panic,” according to internal messages reviewed by The New York Times. An estimated $3 billion in annual revenue was at stake with the Samsung contract. An additional $20 billion is tied to a similar Apple contract that will be up for renewal this year.
  • A.I. competitors like the new Bing are quickly becoming the most serious threat to Google’s search business in 25 years, and in response, Google is racing to build an all-new search engine powered by the technology. It is also upgrading the existing one with A.I. features, according to internal documents reviewed by The Times.
  • ...14 more annotations...
  • Google has been worried about A.I.-powered competitors since OpenAI, a San Francisco start-up that is working with Microsoft, demonstrated a chatbot called ChatGPT in November. About two weeks later, Google created a task force in its search division to start building A.I. products,
  • Modernizing its search engine has become an obsession at Google, and the planned changes could put new A.I. technology in phones and homes all over the world.
  • Magi would keep ads in the mix of search results. Search queries that could lead to a financial transaction, such as buying shoes or booking a flight, for example, would still feature ads on their results pages.
  • Google has been doing A.I. research for years. Its DeepMind lab in London is considered one of the best A.I. research centers in the world, and the company has been a pioneer with A.I. projects, such as self-driving cars and the so-called large language models that are used in the development of chatbots. In recent years, Google has used large language models to improve the quality of its search results, but held off on fully adopting A.I. because it has been prone to generating false and biased statements.
  • Now the priority is winning control of the industry’s next big thing. Last month, Google released its own chatbot, Bard, but the technology received mixed reviews.
  • The system would learn what users want to know based on what they’re searching when they begin using it. And it would offer lists of preselected options for objects to buy, information to research and other information. It would also be more conversational — a bit like chatting with a helpful person.
  • The Samsung threat represented the first potential crack in Google’s seemingly impregnable search business, which was worth $162 billion last year.
  • Last week, Google invited some employees to test Magi’s features, and it has encouraged them to ask the search engine follow-up questions to judge its ability to hold a conversation. Google is expected to release the tools to the public next month and add more features in the fall, according to the planning document.
  • The company plans to initially release the features to a maximum of one million people. That number should progressively increase to 30 million by the end of the year. The features will be available exclusively in the United States.
  • Google has also explored efforts to let people use Google Earth’s mapping technology with help from A.I. and search for music through a conversation with a chatbot
  • A tool called GIFI would use A.I. to generate images in Google Image results.
  • Tivoli Tutor, would teach users a new language through open-ended A.I. text conversations.
  • Yet another product, Searchalong, would let users ask a chatbot questions while surfing the web through Google’s Chrome browser. People might ask the chatbot for activities near an Airbnb rental, for example, and the A.I. would scan the page and the rest of the internet for a response.
  • “If we are the leading search engine and this is a new attribute, a new feature, a new characteristic of search engines, we want to make sure that we’re in this race as well,”
Javier E

Pause or panic: battle to tame the AI monster - 0 views

  • What exactly are they afraid of? How do you draw a line from a chatbot to global destruction?
  • This tribe feels we have made three crucial errors: giving the AI the capability to write code, connecting it to the internet and teaching it about human psychology. In those steps we have created a self-improving, potentially manipulative entity that can use the network to achieve its ends — which may not align with ours
  • This is a technology that learns from our every interaction with it. In an eerie glimpse of AI’s single-mindedness, OpenAI revealed in a paper that GPT-4 was willing to lie, telling a human online it was a blind person, to get a task done.
  • ...16 more annotations...
  • For researchers concerned with more immediate AI risks, such as bias, disinformation and job displacement, the voices of doom are a distraction. Professor Brent Mittelstadt, director of research at the Oxford Internet Institute, said the warnings of “the existential risks community” are overblown. “The problem is you can’t disprove the future scenarios . . . in the same way you can’t disprove science fiction.” Emily Bender, a professor of linguistics at the University of Washington, believes the doomsters are propagating “unhinged AI hype, helping those building this stuff sell it”.
  • Those urging us to stop, pause and think again have a useful card up their sleeves: the people building these models do not fully understand them. AI like ChatGPT is made up of huge neural networks that can defy their creators by coming up with “emergent properties”.
  • Google’s PaLM model started translating Bengali despite not being trained to do so
  • Let’s not forget the excitement, because that is also part of Moloch, driving us forward. The lure of AI’s promises for humanity has been hinted at by DeepMind’s AlphaFold breakthrough, which predicted the 3D structures of nearly all the proteins known to humanity.
  • Noam Shazeer, a former Google engineer credited with setting large language models such as ChatGPT on their present path, was asked by The Sunday Times how the models worked. He replied: “I don’t think anybody really understands how they work, just like nobody really understands how the brain works. It’s pretty much alchemy.”
  • The industry is turning itself to understanding what has been created, but some predict it will take years, decades even.
  • Alex Heath, deputy editor of The Verge, who recently attended an AI conference in San Francisco. “It’s clear the people working on generative AI are uneasy about the worst-case scenario of it destroying us all. These fears are much more pronounced in private than they are in public.” One figure building an AI product “said over lunch with a straight face that he is savoring the time before he is killed by AI”.
  • Greg Brockman, co-founder of OpenAI, told the TED2023 conference this week: “We hear from people who are excited, we hear from people who are concerned. We hear from people who feel both those emotions at once. And, honestly, that’s how we feel.”
  • A CBS interviewer challenged Sundar Pichai, Google’s chief executive, this week: “You don’t fully understand how it works, and yet you’ve turned it loose on society?”
  • In 2020 there wasn’t a single drug in clinical trials developed using an AI-first approach. Today there are 18
  • Consider this from Bill Gates last month: “I think in the next five to ten years, AI-driven software will finally deliver on the promise of revolutionising the way people teach and learn.”
  • If the industry is aware of the risks, is it doing enough to mitigate them? Microsoft recently cut its ethics team, and researchers building AI outnumber those focused on safety by 30-to-1,
  • The concentration of AI power, which worries so many, also presents an opportunity to more easily develop some global rules. But there is little agreement on direction. Europe is proposing a centrally defined, top-down approach. Britain wants an innovation-friendly environment where rules are defined by each industry regulator. The US commerce department is consulting on whether risky AI models should be certified. China is proposing strict controls on generative AI that could upend social order.
  • Part of the drive to act now is to ensure we learn the lessons of social media. Twenty years after creating it, we are trying to put it back in a legal straitjacket after learning that its algorithms understand us only too well. “Social media was the first contact between AI and humanity, and humanity lost,” Yuval Harari, the Sapiens author,
  • Others point to bioethics, especially international agreements on human cloning. Tegmark said last week: “You could make so much money on human cloning. Why aren’t we doing it? Because biologists thought hard about this and felt this is way too risky. They got together in the Seventies and decided, let’s not do this because it’s too unpredictable. We could lose control over what happens to our species. So they paused.” Even China signed up.
  • One voice urging calm is Yann LeCun, Meta’s chief AI scientist. He has labelled ChatGPT a “flashy demo” and “not a particularly interesting scientific advance”. He tweeted: “A GPT-4-powered robot couldn’t clear up the dinner table and fill up the dishwasher, which any ten-year-old can do. And it couldn’t drive a car, which any 18-year-old can learn to do in 20 hours of practice. We’re still missing something big for human-level AI.” If this is sour grapes and he’s wrong, Moloch already has us in its thrall.
Javier E

Opinion | This Is the Actual Danger Posed by D.E.I. - The New York Times - 0 views

  • D.E.I. Short for “diversity, equity, and inclusion,” the term — like the related progressive concepts of wokeness and critical race theory — used to have an agreed-upon meaning but has now been essentially redefined on the populist right. In that world, D.E.I. has become yet another catchall boogeyman, a stand-in not just for actual policies or practices designed to increase diversity, but also a scapegoat for unrelated crises.
  • the immense backlash from parts of the right against almost any diversity initiative is a sign of the extent to which millions of white Americans are content with their vastly disproportionate share of national wealth and power.
  • Outside the reactionary right, there is a cohort of Americans, on both right and left, who want to eradicate illegal discrimination and remedy the effects of centuries of American injustice yet also have grave concerns about the way in which some D.E.I. efforts are undermining American constitutional values, especially on college campuses.
  • ...16 more annotations...
  • For instance, when a Harvard scholar such as Steven Pinker speaks of “disempowering D.E.I.” as a necessary reform in American higher education, he’s not opposing diversity itself. Pinker is liberal, donates substantially to the Democratic Party and “loathes” Donald Trump. The objections he raises are shared by a substantial number of Americans across the political spectrum.
  • The problem with D.E.I. isn’t with diversity, equity, or inclusion — all vital values.
  • First, it is a moral necessity for colleges to be concerned about hateful discourse, including hateful language directed at members of historically marginalized groups. Moreover, colleges that receive federal funds have a legal obligation
  • I’ll share with you three pervasive examples
  • In the name of D.E.I., all too many institutions have violated their constitutional commitments to free speech, due process and equal protection of the law.
  • Yet that is no justification for hundreds of universities to pass and maintain draconian speech codes on campus, creating a system of unconstitutional censorship that has been struck down again and again and again in federal court. Nor is it a justification for discriminating against faculty members for their political views or for compelling them to speak in support of D.E.I.
  • There is a better way to achieve greater diversity, equity, inclusion and related goals. Universities can welcome students from all walks of life without unlawfully censoring speech. They can respond to campus sexual violence without violating students’ rights to due process. They can diversify the student body without discriminating on the basis of race
  • Second, there is a moral imperative to respond to sexual misconduct on campus.
  • that is no justification for replacing one tilted playing field with another. Compelled in part by constitutionally problematic guidance from the Obama administration, hundreds of universities adopted sexual misconduct policies that strip the most basic due process protections from accused students. The result has been systematic injustice
  • The due process problem was so profound that in 2019 a state appellate court in California — hardly a bastion of right-wing jurisprudence — ruled that “fundamental fairness” entitles an accused student to cross-examine witnesses in front of a neutral adjudicator.
  • Third, it is urgently necessary to address racial disparities in campus admissions and faculty hiring — but, again, not at the expense of the Constitution.
  • it is difficult to ignore the overwhelming evidence that Harvard attempted to achieve greater diversity in part by systematically downranking Asian applicants on subjective grounds, judging them deficient in traits such as “positive personality,” likability, courage, kindness and being “widely respected.” That’s not inclusion; it’s discrimination.
  • Our nation has inflicted horrific injustices on vulnerable communities. And while the precise nature of the injustice has varied — whether it was slavery, Jim Crow, internment or the brutal conquest of Native American lands — there was always a consistent theme: the comprehensive denial of constitutional rights.
  • But one does not correct the consequences of those terrible constitutional violations by inflicting a new set of violations on different American communities in a different American era. A consistent defense of the Constitution is good for us all,
  • The danger posed by D.E.I. resides primarily not in these virtuous ends, but in the unconstitutional means chosen to advance them.
  • Virtuous goals should not be accomplished by illiberal means.
Javier E

Immigration powered the economy, job market amid border negotiations - The Washington Post - 0 views

  • There isn’t much data on how many of the new immigrants in recent years were documented versus undocumented. But estimates from the Pew Research Center last fall showed that undocumented immigrants made up 22 percent of the total foreign-born U.S. population in 2021. That’s down compared to previous decades: Between 2007 and 2021, the undocumented population fell by 14 percent, Pew found. Meanwhile, the legal immigrant population grew by 29 percent.
  • immigrant workers are supporting tremendously — and likely will keep powering for years to come.
  • The economy is projected to grow by $7 trillion more over the next decade than it would have without new influxes of immigrants, according to the CBO.
  • ...21 more annotations...
  • Fresh estimates from the Congressional Budget Office this month said the U.S. labor force in 2023 had grown by 5.2 million people, thanks especially to net immigration
  • The sudden snapback in demand sent inflation soaring. Supply chain issues were a main reason prices rose quickly. But labor shortages posed a problem, too, and economists feared that rising wages — as employers scrambled to find workers — would keep price increases dangerously high.
  • The flow of migrants to the United States started slowing during the Trump administration, when officials took hundreds of executive actions designed to restrict migration.
  • Right before the pandemic, there were about 1.5 million fewer working-age immigrants in the United States than pre-2017 trends would have predicted, according to the San Francisco Fed. By the end of 2021, that shortfall had widened to about 2 million
  • But the economy overall wound up rebounding aggressively from the sudden, widespread closures of 2020, bolstered by historic government stimulus and vaccines that debuted faster than expected.
  • economy grow. But today’s snapshot still represents a stark turnaround from just a short time ago.
  • That’s because the labor force that emerged as the pandemic ebbed was smaller than it had been: Millions of people retired early, stayed home to take over child care or avoid getting sick, or decided to look for new jobs entirely
  • In the span of a year or so, employers went from having businesses crater to sprinting to hire enough staff to keep restaurants, hotels, retail stores and construction sites going. Wages for the lowest earners rose at the fastest pace.
  • About the same time, the path was widening for migrants to cross the southern border, particularly as the new Biden administration rolled back Trump-era restrictions.
  • Experts argue that the strength of the U.S. economy has benefited American workers and foreign-born workers alike. Each group accounts for roughly half of the labor market’s impressive year-over-year growth since January 2023
  • But the past few years were extremely abnormal because companies were desperate to hire.
  • Plus, it would be exceedingly difficult for immigration to affect the wages of enormous swaths of the labor force,
  • “What it can do is lower the wages of a specific occupation in a specific area, but American workers aren’t stupid. They change jobs. They change what they specialize in,” Nowrasteh said. “So that’s part of the reason why wages don’t go down.”
  • In normal economic times, some analysts note, new immigrants can drag down wages, especially if employers decide to hire them over native-born workers. Undocumented workers, who don’t have as much leverage to push for higher pay, could lower average wages even more.
  • Particularly for immigrants fleeing poorer countries, the booming U.S. job market and the promise of higher wages continue to be an enormous draw.
  • “More than any immigration policy per se, the biggest pull for migrants is the strength of the labor market,” said Catalina Amuedo-Dorantes, an economics professor at the University of California at Merced. “More than any enforcement policy, any immigration policy, at the end of the day.”
  • Upon arriving in Denver in October, Santander hadn’t acquired a work permit but needed to feed his small children. Even without authorization, he found a job as a roofer for a contractor that ultimately pocketed his earnings, then one cleaning industrial refrigerators on the overnight shift for $12 an hour. Since receiving his work permit in January, Santander has started “a much better job” at a wood accessories manufacturer making $20 an hour.
  • But for the vast majority of migrants who arrive in the United States without prior approval, including asylum seekers and those who come for economic reasons, getting a work permit isn’t easy.
  • Federal law requires migrants to wait nearly six months to receive a work permit after filing for asylum. Wait times can stretch for additional months because of a backlog in cases.
  • While they wait, many migrants find off-the-books work as day laborers or street vendors, advocates say. Others get jobs using falsified documents, including many teenagers who came into the country as unaccompanied minors.
  • Still, many migrants miss the year-long window to apply for asylum — a process that can cost thousands of dollars — leaving them with few pathways to work authorization, advocates say. Those who can’t apply for asylum often end up working without official permission in low-wage industries where they are susceptible to exploitation.
Javier E

Inside the porn industry, AI looms large - The Washington Post - 0 views

  • Since the first AVN “expo” in 1998, adult entertainment has been overtaken by two business models: Pornhub, a free site supported by ads, and OnlyFans, a subscription platform where individual actors control their businesses and their fate.
  • Now, a new shift is on the horizon: Artificial intelligence models that spin up photorealistic images and videos that put viewers in the director’s chair, letting them create whatever porn they like.
  • Some site owners think it’s a privilege people will pay for, and they are racing to build custom AI models that — unlike the sanitized content on OpenAI’s video engine Sora — draw on a vast repository of porn images and videos.
  • ...26 more annotations...
  • The trickiest question may be how to prevent abuse. AI generators have technological boundaries, but not morals, and it’s relatively easy for users to trick them into creating content that depicts violence, rape, sex with children or a celebrity — or even a crush from work who never consented to appear
  • In some cases, the engines themselves are trained on porn images whose subjects didn’t explicitly agree to the new use. Currently, no federal laws protect the victims of nonconsensual deepfakes.
  • Adult entertainment is a giant industry accounting for a substantial chunk of all internet traffic: Major porn sites get more monthly visitors and page views than Amazon, Netflix, TikTok or Zoom
  • The industry is a habitual early adopter of new technology, from VHS to DVD to dot com. In the mid-2000s, porn companies set up massive sites where users upload and watch free videos, and ad sales foot the bills.
  • At last year’s AVN conference, Steven Jones said his peers looked at him “like he was crazy” when he talked about AI opportunities: “Nobody was interested.” This year, Jones said, he’s been “the belle of the ball.”
  • He called up his old business partner, and the two immediately spent about $550,000 securing the web domains for porn dot ai, deepfake dot com and deepfakes dot com, Jones said. “Lightspeed” was back.
  • One major model, Stable Diffusion, shares its code publicly, and some technologists have figured out how to edit the code to allow for sexual images
  • What keeps Jones up at night is people trying to use his company’s tools to generate images of abuse, he said. The models have some technological guardrails that make it difficult for users to render children, celebrities or acts of violence. But people are constantly looking for workarounds.
  • So with help from an angel investor he will not name, Jones hired five employees and a handful of offshore contractors and started building an image engine trained on bundles of freely available pornographic images, as well as thousands of nude photos from Jones’s own collection
  • Users create what Jones calls a “dream girl,” prompting the AI with descriptions of the character’s appearance, pose and setting. The nudes don’t portray real people, he said. Rather, the goal is to re-create a fantasy from the user’s imagination.
  • The AI-generated images got better, their computerized sheen growing steadily less noticeable. Jones grew his user base to 500,000 people, many of whom pay to generate more images than the five per day allotted to free accounts, he said. The site’s “power users” generate AI porn for 10 hours a day, he said.
  • Jones described the site as an “artists’ community” where people can explore their sexualities and fantasies in a safe space. Unlike some corners of the traditional adult industry, no performers are being pressured, underpaid or placed in harm’s way
  • And critically, consumers don’t have to wait for their favorite OnlyFans performer to come online or trawl through Pornhub to find the content they like.
  • Next comes AI-generated video — “porn’s holy grail,” Jones said. Eventually, he sees the technology becoming interactive, with users giving instructions to lifelike automated “performers.” Within two years, he said, there will be “fully AI cam girls,” a reference to creators who make solo sex content.
  • It costs $12 per day to rent a server from Amazon Web Services, he said, and generating a single picture requires users to have access to a corresponding server. His users have so far generated more than 1.6 million images.
  • Copyright holders including newspapers, photographers and artists have filed a slew of lawsuits against AI companies, claiming the companies trained their models on copyrighted content. If plaintiffs win, it could cut off the free-for-all that benefits entrepreneurs such as Jones.
  • But Jones’s plan to create consumer-friendly AI porn engines faced significant obstacles. The companies behind major image-generation models used technical boundaries to block “not safe for work” content and, without racy images to learn from, the models weren’t good at re-creating nude bodies or scenes.
  • Jones said his team takes down images that other users flag as abusive. Their list of blocked prompts currently contains 1,000 terms including “high school.”
  • “I see certain things people type in, and I just hope to God they’re trying to test the model, like we are. I hope they don’t actually want to see the things they’re typing in.
  • Peter Acworth, the owner of kink dot com, is trying to teach an AI porn generator to understand even subtler concepts, such as the difference between torture and consensual sexual bondage. For decades Acworth has pushed for spaces — in the real world and online — for consenting adults to explore nonconventional sexual interests. In 2006, he bought the San Francisco Armory, a castle-like building in the city’s Mission neighborhood, and turned it into a studio where his company filmed fetish porn until shuttering in 2017.
  • Now, Acworth is working with engineers to train an image-generation model on pictures of BDSM, an acronym for bondage and discipline, dominance and submission, sadism and masochism.
  • Others alluded to a porn apocalypse, with AI wiping out existing models of adult entertainment. “Look around,” said Christian Burke, head of engineering at the adult-industry payment app Melon, gesturing at performers huddled, laughing and hugging across the show floor. “This could look entirely different in a few years.”
  • But the age of AI brings few guarantees for the people, largely women, who appear in porn. Many have signed broad contracts granting companies the rights to reproduce their likeness in any medium for the rest of time
  • Not only could performers lose income, Walters said, they could find themselves in offensive or abusive scenes they never consented to.
  • Lana Smalls, a 23-year-old performer whose videos have been viewed 20 million times on Pornhub, said she’s had colleagues show up to shoots with major studios only to be surprised by sweeping AI clauses in their contracts.
  • “This industry is too fragmented for collective bargaining,” Spiegler said. “Plus, this industry doesn’t like rules.”
Javier E

The Rise and Fall of BNN Breaking, an AI-Generated News Outlet - The New York Times - 0 views

  • His is just one of many complaints against BNN, a site based in Hong Kong that published numerous falsehoods during its short time online as a result of what appeared to be generative A.I. errors.
  • During the two years that BNN was active, it had the veneer of a legitimate news service, claiming a worldwide roster of “seasoned” journalists and 10 million monthly visitors, surpassing The Chicago Tribune’s self-reported audience. Prominent news organizations like The Washington Post, Politico and The Guardian linked to BNN’s stories
  • Google News often surfaced them, too
  • ...16 more annotations...
  • A closer look, however, would have revealed that individual journalists at BNN published lengthy stories as often as multiple times a minute, writing in generic prose familiar to anyone who has tinkered with the A.I. chatbot ChatGPT.
  • How easily the site and its mistakes entered the ecosystem for legitimate news highlights a growing concern: A.I.-generated content is upending, and often poisoning, the online information supply.
  • The websites, which seem to operate with little to no human supervision, often have generic names — such as iBusiness Day and Ireland Top News — that are modeled after actual news outlets. They crank out material in more than a dozen languages, much of which is not clearly disclosed as being artificially generated, but could easily be mistaken as being created by human writers.
  • Now, experts say, A.I. could turbocharge the threat, easily ripping off the work of journalists and enabling error-ridden counterfeits to circulate even more widely — as has already happened with travel guidebooks, celebrity biographies and obituaries.
  • The result is a machine-powered ouroboros that could squeeze out sustainable, trustworthy journalism. Even though A.I.-generated stories are often poorly constructed, they can still outrank their source material on search engines and social platforms, which often use A.I. to help position content. The artificially elevated stories can then divert advertising spending, which is increasingly assigned by automated auctions without human oversight.
  • NewsGuard, a company that monitors online misinformation, identified more than 800 websites that use A.I. to produce unreliable news content.
  • Low-paid freelancers and algorithms have churned out much of the faux-news content, prizing speed and volume over accuracy.
  • Former employees said they thought they were joining a legitimate news operation; one had mistaken it for BNN Bloomberg, a Canadian business news channel. BNN’s website insisted that “accuracy is nonnegotiable” and that “every piece of information underwent rigorous checks, ensuring our news remains an undeniable source of truth.”
  • this was not a traditional journalism outlet. While the journalists could occasionally report and write original articles, they were asked to primarily use a generative A.I. tool to compose stories, said Ms. Chakraborty and Hemin Bakir, a journalist based in Iraq who worked for BNN for almost a year. They said they had uploaded articles from other news outlets to the generative A.I. tool to create paraphrased versions for BNN to publish.
  • Mr. Chahal’s evangelism carried weight with his employees because of his wealth and seemingly impressive track record, they said. Born in India and raised in Northern California, Mr. Chahal made millions in the online advertising business in the early 2000s and wrote a how-to book about his rags-to-riches story that landed him an interview with Oprah Winfrey.
  • Mr. Chahal told Mr. Bakir to focus on checking stories that had a significant number of readers, such as those republished by MSN.com.Employees did not want their bylines on stories generated purely by A.I., but Mr. Chahal insisted on this. Soon, the tool randomly assigned their names to stories.
  • This crossed a line for some BNN employees, according to screenshots of WhatsApp conversations reviewed by The Times, in which they told Mr. Chahal that they were receiving complaints about stories they didn’t realize had been published under their names.
  • According to three journalists who worked at BNN and screenshots of WhatsApp conversations reviewed by The Times, Mr. Chahal regularly directed profanities at employees and called them idiots and morons. When employees said purely A.I.-generated news, such as the Fanning story, should be published under the generic “BNN Newsroom” byline, Mr. Chahal was dismissive. “When I do this, I won’t have a need for any of you,” he wrote on WhatsApp. Mr. Bakir replied to Mr. Chahal that assigning journalists’ bylines to A.I.-generated stories was putting their integrity and careers in “jeopardy.”
  • This was a strategy that Mr. Chahal favored, according to former BNN employees. He used his news service to exercise grudges, publishing slanted stories about a politician from San Francisco he disliked, Wikipedia after it published a negative entry about BNN Breaking and Elon Musk after accounts belonging to Mr. Chahal, his wife and his companies were suspended o
  • The increasing popularity of programmatic advertising — which uses algorithms to automatically place ads across the internet — allows A.I.-powered news sites to generate revenue by mass-producing low-quality clickbait content
  • Experts are nervous about how A.I.-fueled news could overwhelm accurate reporting with a deluge of junk content distorted by machine-powered repetition. A particular worry is that A.I. aggregators could chip away even further at the viability of local journalism, siphoning away its revenue and damaging its credibility by contaminating the information ecosystem.
Javier E

OpenAI Whistle-Blowers Describe Reckless and Secretive Culture - The New York Times - 0 views

  • A group of OpenAI insiders is blowing the whistle on what they say is a culture of recklessness and secrecy at the San Francisco artificial intelligence company, which is racing to build the most powerful A.I. systems ever created.
  • The group, which includes nine current and former OpenAI employees, has rallied in recent days around shared concerns that the company has not done enough to prevent its A.I. systems from becoming dangerous.
  • The members say OpenAI, which started as a nonprofit research lab and burst into public view with the 2022 release of ChatGPT, is putting a priority on profits and growth as it tries to build artificial general intelligence, or A.G.I., the industry term for a computer program capable of doing anything a human can.
  • ...21 more annotations...
  • They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign.
  • “OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there,” said Daniel Kokotajlo, a former researcher in OpenAI’s governance division and one of the group’s organizers.
  • Other members include William Saunders, a research engineer who left OpenAI in February, and three other former OpenAI employees: Carroll Wainwright, Jacob Hilton and Daniel Ziegler. Several current OpenAI employees endorsed the letter anonymously because they feared retaliation from the company,
  • At OpenAI, Mr. Kokotajlo saw that even though the company had safety protocols in place — including a joint effort with Microsoft known as the “deployment safety board,” which was supposed to review new models for major risks before they were publicly released — they rarely seemed to slow anything down.
  • So was the departure of Dr. Leike, who along with Dr. Sutskever had led OpenAI’s “superalignment” team, which focused on managing the risks of powerful A.I. models. In a series of public posts announcing his departure, Dr. Leike said he believed that “safety culture and processes have taken a back seat to shiny products.”
  • “When I signed up for OpenAI, I did not sign up for this attitude of ‘Let’s put things out into the world and see what happens and fix them afterward,’” Mr. Saunders said.
  • Mr. Kokotajlo, 31, joined OpenAI in 2022 as a governance researcher and was asked to forecast A.I. progress. He was not, to put it mildly, optimistic. In his previous job at an A.I. safety organization, he predicted that A.G.I. might arrive in 2050. But after seeing how quickly A.I. was improving, he shortened his timelines. Now he believes there is a 50 percent chance that A.G.I. will arrive by 2027 — in just three years.
  • He also believes that the probability that advanced A.I. will destroy or catastrophically harm humanity — a grim statistic often shortened to “p(doom)” in A.I. circles — is 70 percent.
  • Last month, two senior A.I. researchers — Ilya Sutskever and Jan Leike — left OpenAI under a cloud. Dr. Sutskever, who had been on OpenAI’s board and voted to fire Mr. Altman, had raised alarms about the potential risks of powerful A.I. systems. His departure was seen by some safety-minded employees as a setback.
  • Mr. Kokotajlo said, he became so worried that, last year, he told Mr. Altman that the company should “pivot to safety” and spend more time and resources guarding against A.I.’s risks rather than charging ahead to improve its models. He said that Mr. Altman had claimed to agree with him, but that nothing much changed.
  • In April, he quit. In an email to his team, he said he was leaving because he had “lost confidence that OpenAI will behave responsibly” as its systems approach human-level intelligence.
  • “The world isn’t ready, and we aren’t ready,” Mr. Kokotajlo wrote. “And I’m concerned we are rushing forward regardless and rationalizing our actions.”
  • On his way out, Mr. Kokotajlo refused to sign OpenAI’s standard paperwork for departing employees, which included a strict nondisparagement clause barring them from saying negative things about the company, or else risk having their vested equity taken away.
  • Many employees could lose out on millions of dollars if they refused to sign. Mr. Kokotajlo’s vested equity was worth roughly $1.7 million, he said, which amounted to the vast majority of his net worth, and he was prepared to forfeit all of it.
  • Mr. Altman said he was “genuinely embarrassed” not to have known about the agreements, and the company said it would remove nondisparagement clauses from its standard paperwork and release former employees from their agreements.)
  • In their open letter, Mr. Kokotajlo and the other former OpenAI employees call for an end to using nondisparagement and nondisclosure agreements at OpenAI and other A.I. companies.
  • “Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,”
  • They also call for A.I. companies to “support a culture of open criticism” and establish a reporting process for employees to anonymously raise safety-related concerns.
  • They have retained a pro bono lawyer, Lawrence Lessig, the prominent legal scholar and activist
  • Mr. Kokotajlo and his group are skeptical that self-regulation alone will be enough to prepare for a world with more powerful A.I. systems. So they are calling for lawmakers to regulate the industry, too.
  • “There needs to be some sort of democratically accountable, transparent governance structure in charge of this process,” Mr. Kokotajlo said. “Instead of just a couple of different private companies racing with each other, and keeping it all secret.”