History Readings: Group items tagged "position"

Opinion | Belgium Shows What Europe Has Become - The New York Times

  • In Brussels, the seat of the European Union, rising crime, pollution and decaying infrastructure symbolize a continent in decline. With unusual clarity, Belgium shows what Europe has become in the 21st century: a continent subject to history rather than driving it.
  • For a long time, Belgian politicians and citizens hoped that European integration would release them from their own tribal squabbles. Who needed intricate federal coalitions if the behemoth in Brussels would soon take over? Except for the army and the national museums, all other levers of policy could comfortably be transferred.
  • The upward absorption has not come to pass. The European Union remains a halfway house between national government and continental superstate. There is no E.U. army or capacious fiscal apparatus. Consequently, Belgium has been put in an awkward position. Unable to collapse itself into Europe, it is stuck with a ramshackle federal state.
  • As the ideological glue that allows Belgians to cohabit has come unstuck, the traditional parties of government have found it difficult to retain public backing. Amid a wider fracturing of the vote, Flemish and Walloon voters are now lured by adventurers on right and left.
  • Belgium serves as a stern reminder that there are few bulwarks against the trends that ail European nations. The country is no Italy or Netherlands, where the far right is already in government, and party democracy and its postwar prosperity survive only as faint memories.
  • Yet even with Belgium’s lower inequality rates, higher union membership and comparatively stronger party infrastructure, the march of the far right has also proved eerily unstoppable.

The Rise and Fall of BNN Breaking, an AI-Generated News Outlet - The New York Times

  • His is just one of many complaints against BNN, a site based in Hong Kong that published numerous falsehoods during its short time online as a result of what appeared to be generative A.I. errors.
  • During the two years that BNN was active, it had the veneer of a legitimate news service, claiming a worldwide roster of “seasoned” journalists and 10 million monthly visitors, surpassing The Chicago Tribune’s self-reported audience. Prominent news organizations like The Washington Post, Politico and The Guardian linked to BNN’s stories.
  • Google News often surfaced them, too.
  • A closer look, however, would have revealed that individual journalists at BNN published lengthy stories as often as multiple times a minute, writing in generic prose familiar to anyone who has tinkered with the A.I. chatbot ChatGPT.
  • How easily the site and its mistakes entered the ecosystem for legitimate news highlights a growing concern: A.I.-generated content is upending, and often poisoning, the online information supply.
  • The websites, which seem to operate with little to no human supervision, often have generic names — such as iBusiness Day and Ireland Top News — that are modeled after actual news outlets. They crank out material in more than a dozen languages, much of which is not clearly disclosed as being artificially generated, but could easily be mistaken as being created by human writers.
  • Now, experts say, A.I. could turbocharge the threat, easily ripping off the work of journalists and enabling error-ridden counterfeits to circulate even more widely — as has already happened with travel guidebooks, celebrity biographies and obituaries.
  • The result is a machine-powered ouroboros that could squeeze out sustainable, trustworthy journalism. Even though A.I.-generated stories are often poorly constructed, they can still outrank their source material on search engines and social platforms, which often use A.I. to help position content. The artificially elevated stories can then divert advertising spending, which is increasingly assigned by automated auctions without human oversight.
  • NewsGuard, a company that monitors online misinformation, identified more than 800 websites that use A.I. to produce unreliable news content.
  • Low-paid freelancers and algorithms have churned out much of the faux-news content, prizing speed and volume over accuracy.
  • Former employees said they thought they were joining a legitimate news operation; one had mistaken it for BNN Bloomberg, a Canadian business news channel. BNN’s website insisted that “accuracy is nonnegotiable” and that “every piece of information underwent rigorous checks, ensuring our news remains an undeniable source of truth.”
  • This was not a traditional journalism outlet. While the journalists could occasionally report and write original articles, they were asked to primarily use a generative A.I. tool to compose stories, said Ms. Chakraborty and Hemin Bakir, a journalist based in Iraq who worked for BNN for almost a year. They said they had uploaded articles from other news outlets to the generative A.I. tool to create paraphrased versions for BNN to publish.
  • Mr. Chahal’s evangelism carried weight with his employees because of his wealth and seemingly impressive track record, they said. Born in India and raised in Northern California, Mr. Chahal made millions in the online advertising business in the early 2000s and wrote a how-to book about his rags-to-riches story that landed him an interview with Oprah Winfrey.
  • Mr. Chahal told Mr. Bakir to focus on checking stories that had a significant number of readers, such as those republished by MSN.com. Employees did not want their bylines on stories generated purely by A.I., but Mr. Chahal insisted on this. Soon, the tool randomly assigned their names to stories.
  • This crossed a line for some BNN employees, according to screenshots of WhatsApp conversations reviewed by The Times, in which they told Mr. Chahal that they were receiving complaints about stories they didn’t realize had been published under their names.
  • According to three journalists who worked at BNN and screenshots of WhatsApp conversations reviewed by The Times, Mr. Chahal regularly directed profanities at employees and called them idiots and morons. When employees said purely A.I.-generated news, such as the Fanning story, should be published under the generic “BNN Newsroom” byline, Mr. Chahal was dismissive. “When I do this, I won’t have a need for any of you,” he wrote on WhatsApp. Mr. Bakir replied to Mr. Chahal that assigning journalists’ bylines to A.I.-generated stories was putting their integrity and careers in “jeopardy.”
  • This was a strategy that Mr. Chahal favored, according to former BNN employees. He used his news service to exercise grudges, publishing slanted stories about a politician from San Francisco he disliked, Wikipedia after it published a negative entry about BNN Breaking and Elon Musk after accounts belonging to Mr. Chahal, his wife and his companies were suspended on X.
  • The increasing popularity of programmatic advertising — which uses algorithms to automatically place ads across the internet — allows A.I.-powered news sites to generate revenue by mass-producing low-quality clickbait content (a minimal sketch of such an automated ad auction follows this list).
  • Experts are nervous about how A.I.-fueled news could overwhelm accurate reporting with a deluge of junk content distorted by machine-powered repetition. A particular worry is that A.I. aggregators could chip away even further at the viability of local journalism, siphoning away its revenue and damaging its credibility by contaminating the information ecosystem.
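
To make the advertising mechanism described in the last two annotations concrete, here is a minimal sketch of a sealed-bid second-price auction, the basic building block behind programmatic ad placement. The advertiser names and bid values are hypothetical, and real exchanges layer targeting, floor prices and fraud checks on top of this; the point is only that allocation and pricing are settled entirely by an algorithm, with no human judging the page that serves the ad.

```python
# Minimal sketch of a sealed-bid second-price auction, the mechanism most
# programmatic ad exchanges are built around. All names and numbers are
# hypothetical illustrations, not any real exchange's API.
from dataclasses import dataclass


@dataclass
class Bid:
    advertiser: str
    cpm: float  # bid price per thousand impressions, in dollars


def run_auction(bids: list[Bid]) -> tuple[Bid, float] | None:
    """Return the winning bid and the price it actually pays.

    The highest bidder wins but pays the second-highest bid; allocation
    and pricing are settled in milliseconds, with no human reviewing
    where the ad lands.
    """
    if not bids:
        return None
    ranked = sorted(bids, key=lambda b: b.cpm, reverse=True)
    winner = ranked[0]
    clearing_price = ranked[1].cpm if len(ranked) > 1 else winner.cpm
    return winner, clearing_price


if __name__ == "__main__":
    # The impression being sold might sit on an AI-generated clickbait page
    # or on a local news site; the auction sees only the bids, so the page
    # owner collects the winning advertiser's money either way.
    bids = [Bid("brand_a", 2.40), Bid("brand_b", 1.95), Bid("brand_c", 3.10)]
    result = run_auction(bids)
    if result:
        winner, price = result
        print(f"{winner.advertiser} wins the impression at ${price:.2f} CPM")
```

Because the auction sees only the bids, the winning advertiser’s budget flows to whichever page served the impression, whether that page is staffed by reporters or by a text generator.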

The Infantile Style in American Politics - The Atlantic

  • Too many on the left wing of American politics have become inured to the effect of their overheated rhetoric and histrionic displays of fealty to in-group norms.
  • Bowman’s supporters sought easy explanations for his defeat, including that redistricting had shifted his district northward out of much of the Bronx and into Westchester. But Bowman in fact had fared well in the predominantly Democratic suburban county in his 2020 and 2022 campaigns.
  • Claims that the massive spending of pro-Israel groups was responsible for Bowman’s defeat warrant skepticism.
  • Bowman also indulged a penchant—again shared broadly on the anti-Israel, anti-Zionist left—for performative and self-righteous politics.
  • For weeks, prominent left-wing organizers on social media slammed Latimer, a centrist liberal, as a reactionary white man backed by billionaires. The New York City chapter of the Democratic Socialists of America, which endorsed Bowman, decried Latimer as an AIPAC-picked, MAGA-bought racist.
  • Using the term genocide has become de rigueur for candidates seeking an endorsement from DSA and Justice Democrats. Bowman obliged, repeatedly.
  • Bowman’s rhetoric was undisciplined and incendiary, while Latimer was a popular local politician whose internal polls showed him leading by double digits before AIPAC spent anything.
  • Richard Hofstadter famously described “The Paranoid Style in American Politics.” A modern left-wing update might be titled “The Infantile Style in American Politics,” as the conspiratorial mixes with obstinacy and braggadocio.
  • The Bronx rally offered a glimpse, too, of the sectarianism that routinely afflicts the left. Pro-Palestine protesters from Within Our Lifetime showed up and beat drums and chanted throughout the rally, doing their best to disrupt the proceedings. They denounced Bowman, Ocasio-Cortez, and Sanders as “Zionists” who backed “Genocide Joe” for president.
  • Victories coexist with a growing shrillness and insistence by many on the left upon political purity. So longtime liberal Democratic politicians find themselves denounced as pro-genocide for supporting Israel and Biden’s position on the Gaza conflict.
  • Many mainstream Democrats seem less and less patient with the activist left. Hakeem Jeffries, the House minority leader and possible future speaker, would have none of the Bowman camp’s talk of martyrdom.
  • Jeffries took a noticeably removed and dispassionate view of that loss. “The results speak for themselves. The voters have spoken,” he said, sounding less than distraught. A senior Jeffries adviser later noted on social media that the minority leader has now supported six candidates challenged by DSA, and his candidates have won all six races.

AI Has Become a Technology of Faith - The Atlantic

  • Altman told me that his decision to join Huffington stemmed partly from hearing from people who use ChatGPT to self-diagnose medical problems—a notion I found potentially alarming, given the technology’s propensity to return hallucinated information. (If physicians are frustrated by patients who rely on Google or Reddit, consider how they might feel about patients showing up in their offices stuck on made-up advice from a language model.)
  • I noted that it seemed unlikely to me that anyone besides ChatGPT power users would trust a chatbot in this way, and that it was hard to imagine people sharing all their most intimate information with a computer program, potentially to be stored in perpetuity.
  • “I and many others in the field have been positively surprised about how willing people are to share very personal details with an LLM,” Altman told me. He said he’d recently been on Reddit reading testimonies of people who’d found success by confessing uncomfortable things to LLMs. “They knew it wasn’t a real person,” he said, “and they were willing to have this hard conversation that they couldn’t even talk to a friend about.”
  • That willingness is not reassuring. For example, it is not far-fetched to imagine insurers wanting to get their hands on this type of medical information in order to hike premiums. Data brokers of all kinds will be similarly keen to obtain people’s real-time health-chat records. Altman made a point to say that this theoretical product would not trick people into sharing information.
  • Neither Altman nor Huffington had an answer to my most basic question—What would the product actually look like? Would it be a smartwatch app, a chatbot? A Siri-like audio assistant?—but Huffington suggested that Thrive’s AI platform would be “available through every possible mode,” that “it could be through your workplace, like Microsoft Teams or Slack.”
  • This led me to propose a hypothetical scenario in which a company collects this information and stores it inappropriately or uses it against employees. What safeguards might the company apply then? Altman’s rebuttal was philosophical. “Maybe society will decide there’s some version of AI privilege,” he said. “When you talk to a doctor or a lawyer, there’s medical privileges, legal privileges. There’s no current concept of that when you talk to an AI, but maybe there should be.”
  • So much seems to come down to: How much do you want to believe in a future mediated by intelligent machines that act like humans? And: Do you trust these people?
  • A fundamental question has loomed over the world of AI since the concept cohered in the 1950s: How do you talk about a technology whose most consequential effects are always just on the horizon, never in the present? Whatever is built today is judged partially on its own merits, but also—perhaps even more important—on what it might presage about what is coming next.
  • The models “just want to learn”—a quote attributed to the OpenAI co-founder Ilya Sutskever that means, essentially, that if you throw enough money, computing power, and raw data into these networks, the models will become capable of making ever more impressive inferences. True believers argue that this is a path toward creating actual intelligence (many others strongly disagree). In this framework, the AI people become something like evangelists for a technology rooted in faith: Judge us not by what you see, but by what we imagine.
  • I found it outlandish to invoke America’s expensive, inequitable, and inarguably broken health-care infrastructure when hyping a for-profit product that is so nonexistent that its founders could not tell me whether it would be an app or not.
  • Thrive AI Health is profoundly emblematic of this AI moment precisely because it is nothing, yet it demands that we entertain it as something profound.
  • You don’t have to get apocalyptic to see the way that AI’s potential is always muddying people’s ability to evaluate its present. For the past two years, shortcomings in generative-AI products—hallucinations; slow, wonky interfaces; stilted prose; images that showed too many teeth or couldn’t render fingers; chatbots going rogue—have been dismissed by AI companies as kinks that will eventually be worked out.
  • Faith is not a bad thing. We need faith as a powerful motivating force for progress and a way to expand our vision of what is possible. But faith, in the wrong context, is dangerous, especially when it is blind. An industry powered by blind faith seems particularly troubling.
  • The greatest trick of a faith-based industry is that it effortlessly and constantly moves the goal posts, resisting evaluation and sidestepping criticism. The promise of something glorious, just out of reach, continues to string unwitting people along. All while half-baked visions promise salvation that may never come.

Defeated by A.I., a Legend in the Board Game Go Warns: Get Ready for What’s Next - The New York Times

  • Lee Saedol was the finest Go player of his generation when he suffered a decisive loss, defeated not by a human opponent but by artificial intelligence.
  • The stunning upset, in 2016, made headlines around the world and looked like a clear sign that artificial intelligence was entering a new, profoundly unsettling era.
  • By besting Mr. Lee, an 18-time world champion revered for his intuitive and creative style of play, AlphaGo had solved one of computer science’s greatest challenges: teaching itself the abstract strategy needed to win at Go, widely considered the world’s most complex board game.
  • AlphaGo’s victory demonstrated the unbridled potential of A.I. to achieve superhuman mastery of skills once considered too complicated for machines.
  • Mr. Lee, now 41, retired three years later, convinced that humans could no longer compete with computers at Go. Artificial intelligence, he said, had changed the very nature of a game that originated in China more than 2,500 years ago.
  • As society wrestles with what A.I. holds for humanity’s future, Mr. Lee is now urging others to avoid being caught unprepared, as he was, and to become familiar with the technology now. He delivers lectures about A.I., trying to give others the advance notice he wishes he had received before his match.
  • “I faced the issues of A.I. early, but it will happen for others,” Mr. Lee said recently at a community education fair in Seoul to a crowd of students and parents. “It may not be a happy ending.”
  • Mr. Lee is not a doomsayer. In his view, A.I. may replace some jobs, but it may create some, too. When considering A.I.’s grasp of Go, he said it was important to remember that humans both created the game and designed the A.I. system that mastered it.
  • What he worries about is that A.I. may change what humans value.
  • His immense talent was apparent from the start. He quickly became the best player of his age not only locally but across all of South Korea, Japan and China. He turned pro at 12.
  • “People used to be in awe of creativity, originality and innovation,” he said. “But since A.I. came, a lot of that has disappeared.”
  • By the time he was 20, Mr. Lee had reached 9-dan, the highest level of mastery in Go. Soon, he was among the best players in the world, described by some as the Roger Federer of the game.
  • Go posed a tantalizing challenge for A.I. researchers. The game is exponentially more complicated than chess; it is often said that there are more possible positions on a Go board (a 10 followed by more than 100 zeros, by many mathematical estimates) than there are atoms in the universe.
  • The breakthrough came from DeepMind, which built AlphaGo using so-called neural networks: mathematical systems that can learn skills by analyzing enormous amounts of data. It started by feeding the network 30 million moves from high-level players. Then the program played game after game against itself until it learned which moves were successful and developed new strategies. (A toy sketch of this two-stage recipe, imitation followed by self-play, appears after this list.)
  • Mr. Lee said not having a true human opponent was disconcerting. AlphaGo played a style he had never seen, and it felt odd not to try to decipher what his opponent was thinking and feeling. The world watched in awe as AlphaGo pushed Mr. Lee into corners and made moves unthinkable to a human player. “I couldn’t get used to it,” he said. “I thought that A.I. would beat humans someday. I just didn’t think it was here yet.”
  • AlphaGo’s victory “was a watershed moment in the history of A.I.,” said Demis Hassabis, DeepMind’s chief executive, in a written statement. It showed what computers that learn on their own from data “were really capable of,” he said.
  • Mr. Lee had a hard time accepting the defeat. What he regarded as an art form, an extension of a player’s own personality and style, was now cast aside for an algorithm’s ruthless efficiency.
  • His 17-year-old daughter is in her final year of high school. When they discuss what she should study at university, they often consider a future shaped by A.I. “We often talk about choosing a job that won’t be easily replaceable by A.I. or less impacted by A.I.,” he said. “It’s only a matter of time before A.I. is present everywhere.”
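
To accompany the annotation above on DeepMind’s approach, here is a deliberately tiny, tabular sketch of the same two-stage recipe: imitate expert moves first, then improve through self-play. It uses tic-tac-toe and a lookup-table policy purely for illustration; the single "expert" example is invented, and nothing here resembles AlphaGo’s actual neural networks or search.

```python
# Toy two-stage training loop: (1) imitate expert moves, (2) self-play.
# Tabular tic-tac-toe stand-in for the AlphaGo recipe the article describes;
# the expert data below is a single invented example.
import random
from collections import defaultdict

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]


def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None


def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == " "]


# The "policy" is just a table: board state -> preference weight per move.
policy = defaultdict(lambda: defaultdict(float))


def choose(board, explore=0.1):
    """Pick the highest-weighted move, with a little random exploration."""
    moves = legal_moves(board)
    prefs = policy[tuple(board)]
    if random.random() < explore or not prefs:
        return random.choice(moves)
    return max(moves, key=lambda m: prefs[m])


# Stage 1: supervised imitation of (state, move) pairs from "expert" games.
# One invented example: experts tend to open in the center, square 4.
expert_data = [((" ",) * 9, 4)]
for state, move in expert_data:
    policy[state][move] += 1.0

# Stage 2: self-play. Play the policy against itself and reinforce the
# moves made by whichever side ends up winning.
for _ in range(5000):
    board, history, player = [" "] * 9, [], "X"
    while winner(board) is None and legal_moves(board):
        move = choose(board)
        history.append((tuple(board), move, player))
        board[move] = player
        player = "O" if player == "X" else "X"
    result = winner(board)
    if result:
        for state, move, side in history:
            policy[state][move] += 1.0 if side == result else -0.5

opening = max(policy[(" ",) * 9], key=policy[(" ",) * 9].get)
print("learned opening preference: square", opening)
```

Even at this toy scale the shape of the pipeline matches the article’s description: a policy seeded from human games, then sharpened by playing against itself and reinforcing whichever moves ended up on the winning side.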