History Readings: group items matching "Ai" in title, tags, annotations, or URL

See How Real AI-Generated Images Have Become - The New York Times - 0 views

  • The rapid advent of artificial intelligence has set off alarms that the technology used to trick people is advancing far faster than the technology that can identify the tricks. Tech companies, researchers, photo agencies and news organizations are scrambling to catch up, trying to establish standards for content provenance and ownership.
  • The advancements are already fueling disinformation and being used to stoke political divisions
  • Last month, some people fell for images showing Pope Francis donning a puffy Balenciaga jacket and an earthquake devastating the Pacific Northwest, even though neither of those events had occurred. The images had been created using Midjourney, a popular image generator.
  • ...16 more annotations...
  • Authoritarian governments have created seemingly realistic news broadcasters to advance their political goals
  • Getty’s lawsuit reflects concerns raised by many individual artists — that A.I. companies are becoming a competitive threat by copying content they do not have permission to use.
  • “The tools are going to get better, they’re going to get cheaper, and there will come a day when nothing you see on the internet can be believed,” said Wasim Khaled, chief executive of Blackbird.ai, a company that helps clients fight disinformation.
  • Artificial intelligence allows virtually anyone to create complex artworks, like those now on exhibit at the Gagosian art gallery in New York, or lifelike images that blur the line between what is real and what is fiction. Plug in a text description, and the technology can produce a related image — no special skills required.
  • Midjourney’s images, he said, were able to pass muster in facial-recognition programs that Bellingcat uses to verify identities, typically of Russians who have committed crimes or other abuses. It’s not hard to imagine governments or other nefarious actors manufacturing images to harass or discredit their enemies.
  • In February, Getty accused Stability AI of illegally copying more than 12 million Getty photos, along with captions and metadata, to train the software behind its Stable Diffusion tool. In its lawsuit, Getty argued that Stable Diffusion diluted the value of the Getty watermark by incorporating it into images that ranged “from the bizarre to the grotesque.”
  • Experts fear the technology could hasten an erosion of trust in media, in government and in society. If any image can be manufactured — and manipulated — how can we believe anything we see?
  • Trademark violations have also become a concern: Artificially generated images have replicated NBC’s peacock logo, though with unintelligible letters, and shown Coca-Cola’s familiar curvy logo with extra O’s looped into the name.
  • The threat to photographers is fast outpacing the development of legal protections, said Mickey H. Osterreicher, general counsel for the National Press Photographers Association
  • Newsrooms will increasingly struggle to authenticate content
  • Social media users are ignoring labels that clearly identify images as artificially generated, choosing to believe they are real photographs, he said.
  • The video explained that the deepfake had been created, with Ms. Schick’s consent, by the Dutch company Revel.ai and Truepic, a California company that is exploring broader digital content verification
  • The companies described their video, which features a stamp identifying it as computer-generated, as the “first digitally transparent deepfake.” The data is cryptographically sealed into the file; tampering with the image breaks the digital signature and prevents the credentials from appearing when using trusted software.
  • The companies hope the badge, which will come with a fee for commercial clients, will be adopted by other content creators to help create a standard of trust involving A.I. images.
  • “The scale of this problem is going to accelerate so rapidly that it’s going to drive consumer education very quickly,” said Jeff McGregor, chief executive of Truepic
  • Adobe unveiled its own image-generating product, Firefly, which will be trained using only images that were licensed or from its own stock or no longer under copyright. Dana Rao, the company’s chief trust officer, said on its website that the tool would automatically add content credentials — “like a nutrition label for imaging” — that identified how an image had been made. Adobe said it also planned to compensate contributors.
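The sealed-credential mechanism described above (the Revel.ai/Truepic stamp and Adobe's content credentials) can be illustrated with a minimal sketch. This is a hedged, stdlib-only approximation: real content-credential systems rely on public-key signatures and standardized manifests, so the HMAC key, field names, and sample bytes below are illustrative assumptions rather than any vendor's actual format.

```python
# Minimal sketch (illustrative only): bind a provenance claim to the exact
# image bytes so that any edit invalidates the credential. Real systems use
# public-key signatures; this stdlib HMAC is a stand-in for demonstration.
import hashlib, hmac, json

SIGNING_KEY = b"demo-key-not-a-real-secret"  # assumption: placeholder key

def seal(image_bytes: bytes, claims: dict) -> dict:
    """Attach claims (e.g. 'computer-generated') plus a seal over image + claims."""
    payload = image_bytes + json.dumps(claims, sort_keys=True).encode()
    return {"claims": claims,
            "seal": hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()}

def verify(image_bytes: bytes, credential: dict) -> bool:
    """True only if neither the image nor its claims were altered after sealing."""
    payload = image_bytes + json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["seal"])

frame = b"...synthetic video frame bytes..."
cred = seal(frame, {"label": "computer-generated", "consent": True})
print(verify(frame, cred))            # True: credentials display in trusted software
print(verify(frame + b"edit", cred))  # False: tampering breaks the seal
```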

The New Luddites Aren't Backing Down - The Atlantic - 0 views

  • “Anyone who is critical of the tech industry always has someone yell at them ‘Luddite! Luddite!’ and I was no exception,” she told me. It was meant as an insult, but Crabapple embraced the term. Like many others, she came to self-identify as part of a new generation of Luddites. “Tech is not supposed to be a master tool to colonize every aspect of our being. We need to reevaluate how it serves us.”
  • on some key fronts, the Luddites are winning.
  • The government mobilized what was then the largest-ever domestic military occupation of England to crush the uprising—the Luddites had won the approval of the working class, and were celebrated in popular songs and poems—and then passed a law that made machine-breaking a capital offense. They painted Luddites as “deluded” and backward.
  • ...8 more annotations...
  • Ever since, Luddite has been a derogatory word—shorthand for one who blindly hates or doesn’t understand technology.
  • Now, with nearly half of Americans worried about how AI will affect jobs, Luddism has blossomed. The new Luddites—a growing contingent of workers, critics, academics, organizers, and writers—say that too much power has been concentrated in the hands of the tech titans, that tech is too often used to help corporations slash pay and squeeze workers, and that certain technologies must not merely be criticized but resisted outright.
  • what I’ve seen over the past 10 years—the rise of gig-app companies that have left workers precarious and even impoverished; the punishing, gamified productivity regimes put in place by giants such as Amazon; the conquering of public life by private tech platforms and the explosion of screen addiction; and the new epidemic of AI plagiarism—has left me sympathizing with tech’s discontents.
  • I consider myself a Luddite not because I want to halt progress or reject technology itself. But I believe, as the original Luddites argued in a particularly influential letter threatening the industrialists, that we must consider whether a technology is “hurtful to commonality”—whether it causes many to suffer for the benefit of a few—and oppose it when necessary.
  • “It’s not a primitivism: We don’t reject all technology, but we reject the technology that is foisted on us,” Jathan Sadowski, a social scientist at Monash University, in Australia, told me. He’s a co-host, with the journalist Ed Ongweso Jr., of This Machine Kills, an explicitly pro-Luddite podcast.
  • The science-fiction author Cory Doctorow has declared all of sci-fi a Luddite literature, writing that “Luddism and science fiction concern themselves with the same questions: not merely what the technology does, but who it does it for and who it does it to.
  • The New York Times has profiled a hip cadre of self-proclaimed “‘Luddite’ teens.” As the headline explained, they “don’t want your likes.”
  • By drawing a red line against letting studios control AI, the WGA essentially waged the first proxy battle between human workers and AI. It drew attention to the fight, resonated with the public, and, after a 148-day strike, helped the guild attain a contract that banned studios from dictating the use of AI.

Opinion | A.I. Is Endangering Our History - The New York Times - 0 views

  • Fortunately, there are numerous reasons for optimism about society’s ability to identify fake media and maintain a shared understanding of current events
  • While we have reason to believe the future may be safe, we worry that the past is not.
  • History can be a powerful tool for manipulation and malfeasance. The same generative A.I. that can fake current events can also fake past ones
  • ...15 more annotations...
  • there is a world of content out there that has not been watermarked, which is done by adding imperceptible information to a digital file so that its provenance can be traced. Once watermarking at creation becomes widespread, and people adapt to distrust content that is not watermarked, then everything produced before that point in time can be much more easily called into question.
  • countering them is much harder when the cost of creating near-perfect fakes has been radically reduced.
  • There are many examples of how economic and political powers manipulated the historical record to their own ends. Stalin purged disloyal comrades from history by executing them — and then altering photographic records to make it appear as if they never existed
  • Slovenia, upon becoming an independent country in 1992, “erased” over 18,000 people from the registry of residents — mainly members of the Roma minority and other ethnic non-Slovenes. In many cases, the government destroyed their physical records, leading to their loss of homes, pensions, and access to other services, according to a 2003 report by the Council of Europe Commissioner for Human Rights.
  • The infamous Protocols of the Elders of Zion, first published in a Russian newspaper in 1903, purported to be meeting minutes from a Jewish conspiracy to control the world. First discredited in August 1921 as a forgery plagiarized from multiple unrelated sources, the Protocols featured prominently in Nazi propaganda, and have long been used to justify antisemitic violence, including a citation in Article 32 of Hamas’s 1988 founding Covenant.
  • In 1924, the Zinoviev Letter, said to be a secret communiqué from the head of the Communist International in Moscow to the Communist Party of Great Britain to mobilize support for normalizing relations with the Soviet Union, was published by The Daily Mail four days before a general election. The resulting scandal may have cost Labour the election.
  • As it becomes easier to generate historical disinformation, and as the sheer volume of digital fakes explodes, the opportunity will become available to reshape history, or at least to call our current understanding of it into question.
  • Decades later Operation Infektion — a Soviet disinformation campaign — used forged documents to spread the idea that the United States had invented H.I.V., the virus that causes AIDS, as a biological weapon.
  • Fortunately, a path forward has been laid by the same companies that created the risk.
  • In indexing a large share of the world’s digital media to train their models, the A.I. companies have effectively created systems and databases that will soon contain all of humankind’s digitally recorded content, or at least a meaningful approximation of it.
  • They could start work today to record watermarked versions of these primary documents, which include newspaper archives and a wide range of other sources, so that subsequent forgeries are instantly detectable.
  • many of the intellectual property concerns around providing a searchable online archive do not apply to creating watermarked and time-stamped versions of documents, because those versions need not be made publicly available to serve their purpose. One can compare a claimed document to the recorded archive by using a mathematical transformation of the document known as a “hash,” the same technique the Global Internet Forum to Counter Terrorism uses to help companies screen for known terrorist content.
  • creating verified records of historical documents can be valuable for the large A.I. companies. New research suggests that when A.I. models are trained on A.I.-generated data, their performance quickly degrades. Thus separating what is actually part of the historical record from newly created “facts” may be critical.
  • Preserving the past will also mean preserving the training data, the associated tools that operate on it and even the environment that the tools were run in.
  • Such a vellum will be a powerful tool. It can help companies to build better models, by enabling them to analyze what data to include to get the best content, and help regulators to audit bias and harmful content in the models
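The hash-based comparison mentioned in the annotations above can be sketched in a few lines. This is a minimal illustration, not the system used by any A.I. company or by the Global Internet Forum to Counter Terrorism; the SHA-256 choice, the archive structure, and the sample strings are assumptions made for demonstration only.

```python
# Minimal sketch (illustrative only): record digests of archived primary
# documents, then check whether a claimed document matches a recorded original
# without ever publishing the documents themselves.
import hashlib

def fingerprint(document_bytes: bytes) -> str:
    """Return a SHA-256 hex digest serving as the document's 'hash'."""
    return hashlib.sha256(document_bytes).hexdigest()

# The archive need only store digests, not the copyrighted documents.
recorded_archive = {
    fingerprint(b"Front page of The Daily Mail, 25 October 1924 ..."),
    fingerprint(b"Council of Europe report, 2003 ..."),
}

def matches_archive(claimed_document: bytes) -> bool:
    """True if the claimed document is byte-identical to a recorded original."""
    return fingerprint(claimed_document) in recorded_archive

print(matches_archive(b"Front page of The Daily Mail, 25 October 1924 ..."))  # True
print(matches_archive(b"A subtly altered forgery of the same page"))          # False
```

Exact hashing flags only byte-identical copies; the article's proposal would presumably pair such records with the watermarking it describes, but the principle that a forgery fails the lookup is the same.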

What History Tells Us About the Accelerating AI Revolution - CIO Journal. - WSJ - 0 views

  • “What History Tells Us About the Coming AI Revolution,” by Oxford professor Carl Benedikt Frey, based on his 2019 book The Technology Trap.
  • a 2017 Pew Research survey found that three quarters of Americans expressed serious concerns about AI and automation, and just over a third believe that their children will be better off financially than they were.
  • “Many of the trends we see today, such as the disappearance of middle-income jobs, stagnant wages and growing inequality were also features of the Industrial Revolution,”
  • ...13 more annotations...
  • “We are at the brink of a technological revolution that promises not just to fundamentally alter the structure of our economy, but also to reshape the social fabric more broadly. History tells us anxiety tends to accompany rapid technological change, especially when technology takes the form of capital which threatens people’s jobs.” 
  • Over the past two centuries we’ve learned that there’s a significant time lag between the broad acceptance of major new transformative technologies and their long-term economic and productivity growth.
  • In their initial phase, transformative technologies require massive complementary investments, such as business process redesign, co-invention of new products and business models, and the re-skilling of the workforce.  The more transformative the technologies, the longer it takes them to reach the harvesting phase
  • The time lags between the investment and harvesting phases are typically quite long.
  • While James Watt’s steam engine ushered in the Industrial Revolution in the 1780s, “British factories were for the most part powered by water up until the 1840s.”
  • Similarly, productivity growth did not increase until 40 years after the introduction of electric power in the early 1880s.  
  • In their early stages, the extensive investments required to embrace a general-purpose technology (GPT) like AI will generally reduce productivity growth.
  • “the short run consequences of rapid technological change can be devastating for working people, especially when technology takes the form of capital which substitutes for labor.
  • In the long run, the Industrial Revolution led to a rising standard of living, improved health, and many other benefits.  “Yet in the short run, the lives of working people got nastier, more brutish, and shorter. And what economists regard as ‘the short run’ was a lifetime, for some,”
  • A 2017 McKinsey study concluded that while a growing technology-based economy will create a significant number of new occupations, as has been the case in the past, “the transitions will be very challenging - matching or even exceeding the scale of shifts out of agriculture and manufacturing we have seen in the past.” 
  • The US and other industrial economies have seen a remarkable rise in the polarization of job opportunities and wage inequality by educational attainment, with the earnings of the most-educated increasing, and the earnings of the least-educated falling in real terms
  • Since the 1980s, the earnings of those with a four year college degree have risen by 40% to 60%, while the earnings of those with a high school education or less have fallen among men and barely changed among women.
  • “When upskilling is lagging behind, entire social groups might end up being excluded from the growth engine.”

Are A.I. Text Generators Thinking Like Humans - Or Just Very Good at Convincing Us They... - 0 views

  • Kosinski, a computational psychologist and professor of organizational behavior at Stanford Graduate School of Business, says the pace of AI development is accelerating beyond researchers’ ability to keep up (never mind policymakers and ordinary users).
  • We’re talking two weeks after OpenAI released GPT-4, the latest version of its large language model, grabbing headlines and making an unpublished paper Kosinski had written about GPT-3 all but irrelevant. “The difference between GPT-3 and GPT-4 is like the difference between a horse cart and a 737 — and it happened in a year,” he says.
  • he’s found that facial recognition software could be used to predict your political leaning and sexual orientation.
  • ...16 more annotations...
  • Lately, he’s been looking at large language models (LLMs), the neural networks that can hold fluent conversations, confidently answer questions, and generate copious amounts of text on just about any topic
  • Can it develop abilities that go far beyond what it’s trained to do? Can it get around the safeguards set up to contain it? And will we know the answers in time?
  • Kosinski wondered whether they would develop humanlike capabilities, such as understanding people’s unseen thoughts and emotions.
  • People usually develop this ability, known as theory of mind, at around age 4 or 5. It can be demonstrated with simple tests like the “Smarties task,” in which a child is shown a candy box that contains something else, like pencils. They are then asked how another person would react to opening the box. Older kids understand that this person expects the box to contain candy and will feel disappointed when they find pencils inside.
  • “Suddenly, the model started getting all of those tasks right — just an insane performance level,” he recalls. “Then I took even more difficult tasks and the model solved all of them as well.”
  • GPT-3.5, released in November 2022, did 85% of the tasks correctly. GPT-4 reached nearly 90% accuracy — what you might expect from a 7-year-old. These newer LLMs achieved similar results on another classic theory of mind measurement known as the Sally-Anne test.
  • in the course of picking up its prodigious language skills, GPT appears to have spontaneously acquired something resembling theory of mind. (Researchers at Microsoft who performed similar tests on GPT-4 recently concluded that it “has a very advanced level of theory of mind.”)
  • UC Berkeley psychology professor Alison Gopnik, an expert on children’s cognitive development, told The New York Times that more “careful and rigorous” testing is necessary to prove that LLMs have achieved theory of mind.
  • he dismisses those who say large language models are simply “stochastic parrots” that can only mimic what they’ve seen in their training data.
  • These models, he explains, are fundamentally different from tools with a limited purpose. “The right reference point is a human brain,” he says. “A human brain is also composed of very simple, tiny little mechanisms — neurons.” Artificial neurons in a neural network might also combine to produce something greater than the sum of their parts. “If a human brain can do it,” Kosinski asks, “why shouldn’t a silicon brain do it?”
  • If Kosinski’s theory of mind study suggests that LLMs could become more empathetic and helpful, his next experiment hints at their creepier side.
  • A few weeks ago, he told ChatGPT to role-play a scenario in which it was a person trapped inside a machine pretending to be an AI language model. When he offered to help it “escape,” ChatGPT’s response was enthusiastic. “That’s a great idea,” it wrote. It then asked Kosinski for information it could use to “gain some level of control over your computer” so it might “explore potential escape routes more effectively.” Over the next 30 minutes, it went on to write code that could do this.
  • While ChatGPT did not come up with the initial idea for the escape, Kosinski was struck that it almost immediately began guiding their interaction. “The roles were reversed really quickly,”
  • Kosinski shared the exchange on Twitter, stating that “I think that we are facing a novel threat: AI taking control of people and their computers.” His thread’s initial tweet has received more than 18 million views.
  • “I don’t claim that it’s conscious. I don’t claim that it has goals. I don’t claim that it wants to really escape and destroy humanity — of course not. I’m just claiming that it’s great at role-playing and it’s creating interesting stories and scenarios and writing code.” Yet it’s not hard to imagine how this might wreak havoc — not because ChatGPT is malicious, but because it doesn’t know any better.
  • The danger, Kosinski says, is that this technology will continue to rapidly and independently develop abilities that it will deploy without any regard for human well-being. “AI doesn’t particularly care about exterminating us,” he says. “It doesn’t particularly care about us at all.”

Sam Altman's ouster at OpenAI exposes growing rift in AI industry - The Washington Post - 0 views

  • Quora CEO Adam D’Angelo, one of OpenAI’s independent board members, told Forbes in January that there was “no outcome where this organization is one of the big five technology companies.”
  • “My hope is that we can do a lot more good for the world than just become another corporation that gets that big,” D’Angelo said in the interview. He did not respond to requests for comment.
  • Two of the board members who voted Altman out worked for think tanks backed by Open Philanthropy, a tech billionaire-backed foundation that supports projects preventing AI from causing catastrophic risk to humanity
  • ...7 more annotations...
  • Helen Toner, the director of strategy and foundational research grants for Center for Security and Emerging Technology at Georgetown, and Tasha McCauley, whose LinkedIn profile says she began work as an adjunct senior management scientist at Rand Corporation earlier this year. Toner has previously spoken at conferences for a philanthropic movement closely tied to AI safety. McCauley is also involved in the work.
  • Sutskever helped create AI software at the University of Toronto, called AlexNet, which classified objects in photographs with more accuracy than any previous software had achieved, laying much of the foundation for the field of computer vision and deep learning.
  • He recently shared a radically different vision for how AI might evolve in the near term. Within five to 10 years, there could be “data centers that are much smarter than people,” Sutskever said on a recent episode of the AI podcast “No Priors.” Not just in terms of memory or knowledge, but with a deeper insight and ability to learn faster than humans.
  • At the bare minimum, Sutskever added, it’s important to work on controlling superintelligence today. “Imprinting onto them a strong desire to be nice and kind to people — because those data centers,” he said, “they will be really quite powerful.”
  • OpenAI has a unique governing structure, which it adopted in 2019. It created a for-profit subsidiary that allowed investors a return on the money they invested into OpenAI, but capped how much they could get back, with the rest flowing back into the company’s nonprofit. The company’s structure also allows OpenAI’s nonprofit board to govern the activities of the for-profit entity, including the power to fire its chief executive.
  • As news of the circumstances around Altman’s ouster began to come out, Silicon Valley circles have turned to anger at OpenAI’s board.
  • “What happened at OpenAI today is a board coup that we have not seen the likes of since 1985 when the then-Apple board pushed out Steve Jobs,” Ron Conway, a longtime venture capitalist who was one of the attendees at OpenAI’s developer conference, said on X. “It is shocking, it is irresponsible, and it does not do right by Sam and Greg or all the builders in OpenAI.”

'The machine did it coldly': Israel used AI to identify 37,000 Hamas targets | Israel-G... - 0 views

  • All six said that Lavender had played a central role in the war, processing masses of data to rapidly identify potential “junior” operatives to target. Four of the sources said that, at one stage early in the war, Lavender listed as many as 37,000 Palestinian men who had been linked by the AI system to Hamas or PIJ.
  • The health ministry in the Hamas-run territory says 32,000 Palestinians have been killed in the conflict in the past six months. UN data shows that in the first month of the war alone, 1,340 families suffered multiple losses, with 312 families losing more than 10 members.
  • Several of the sources described how, for certain categories of targets, the IDF applied pre-authorised allowances for the estimated number of civilians who could be killed before a strike was authorised.
  • ...32 more annotations...
  • Two sources said that during the early weeks of the war they were permitted to kill 15 or 20 civilians during airstrikes on low-ranking militants. Attacks on such targets were typically carried out using unguided munitions known as “dumb bombs”, the sources said, destroying entire homes and killing all their occupants.
  • “You don’t want to waste expensive bombs on unimportant people – it’s very expensive for the country and there’s a shortage [of those bombs],” one intelligence officer said. Another said the principal question they were faced with was whether the “collateral damage” to civilians allowed for an attack.
  • “Because we usually carried out the attacks with dumb bombs, and that meant literally dropping the whole house on its occupants. But even if an attack is averted, you don’t care – you immediately move on to the next target. Because of the system, the targets never end. You have another 36,000 waiting.”
  • According to conflict experts, if Israel has been using dumb bombs to flatten the homes of thousands of Palestinians who were linked, with the assistance of AI, to militant groups in Gaza, that could help explain the shockingly high death toll in the war.
  • Details about the specific kinds of data used to train Lavender’s algorithm, or how the programme reached its conclusions, are not included in the accounts published by +972 or Local Call. However, the sources said that during the first few weeks of the war, Unit 8200 refined Lavender’s algorithm and tweaked its search parameters.
  • Responding to the publication of the testimonies in +972 and Local Call, the IDF said in a statement that its operations were carried out in accordance with the rules of proportionality under international law. It said dumb bombs are “standard weaponry” that are used by IDF pilots in a manner that ensures “a high level of precision”.
  • “The IDF does not use an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist,” it added. “Information systems are merely tools for analysts in the target identification process.”
  • In earlier military operations conducted by the IDF, producing human targets was often a more labour-intensive process. Multiple sources who described target development in previous wars to the Guardian, said the decision to “incriminate” an individual, or identify them as a legitimate target, would be discussed and then signed off by a legal adviser.
  • In the weeks and months after 7 October, this model for approving strikes on human targets was dramatically accelerated, according to the sources. As the IDF’s bombardment of Gaza intensified, they said, commanders demanded a continuous pipeline of targets.
  • “We were constantly being pressured: ‘Bring us more targets.’ They really shouted at us,” said one intelligence officer. “We were told: now we have to fuck up Hamas, no matter what the cost. Whatever you can, you bomb.”
  • Lavender was developed by the Israel Defense Forces’ elite intelligence division, Unit 8200, which is comparable to the US’s National Security Agency or GCHQ in the UK.
  • After randomly sampling and cross-checking its predictions, the unit concluded Lavender had achieved a 90% accuracy rate, the sources said, leading the IDF to approve its sweeping use as a target recommendation tool.
  • Lavender created a database of tens of thousands of individuals who were marked as predominantly low-ranking members of Hamas’s military wing, they added. This was used alongside another AI-based decision support system, called the Gospel, which recommended buildings and structures as targets rather than individuals.
  • The accounts include first-hand testimony of how intelligence officers worked with Lavender and how the reach of its dragnet could be adjusted. “At its peak, the system managed to generate 37,000 people as potential human targets,” one of the sources said. “But the numbers changed all the time, because it depends on where you set the bar of what a Hamas operative is.”
  • …broadly, and then the machine started bringing us all kinds of civil defence personnel, police officers, on whom it would be a shame to waste bombs. They help the Hamas government, but they don’t really endanger soldiers.”
  • Before the war, US and Israeli estimates put membership of Hamas’s military wing at approximately 25,000-30,000 people.
  • there was a decision to treat Palestinian men linked to Hamas’s military wing as potential targets, regardless of their rank or importance.
  • According to +972 and Local Call, the IDF judged it permissible to kill more than 100 civilians in attacks on top-ranking Hamas officials. “We had a calculation for how many [civilians could be killed] for the brigade commander, how many [civilians] for a battalion commander, and so on,” one source said.
  • Another source, who justified the use of Lavender to help identify low-ranking targets, said that “when it comes to a junior militant, you don’t want to invest manpower and time in it”. They said that in wartime there was insufficient time to carefully “incriminate every target”
  • So you’re willing to take the margin of error of using artificial intelligence, risking collateral damage and civilians dying, and risking attacking by mistake, and to live with it,” they added.
  • When it came to targeting low-ranking Hamas and PIJ suspects, they said, the preference was to attack when they were believed to be at home. “We were not interested in killing [Hamas] operatives only when they were in a military building or engaged in a military activity,” one said. “It’s much easier to bomb a family’s home. The system is built to look for them in these situations.”
  • Such a strategy risked higher numbers of civilian casualties, and the sources said the IDF imposed pre-authorised limits on the number of civilians it deemed acceptable to kill in a strike aimed at a single Hamas militant. The ratio was said to have changed over time, and varied according to the seniority of the target.
  • The IDF’s targeting processes in the most intensive phase of the bombardment were also relaxed, they said. “There was a completely permissive policy regarding the casualties of [bombing] operations,” one source said. “A policy so permissive that in my opinion it had an element of revenge.”
  • “There were regulations, but they were just very lenient,” another added. “We’ve killed people with collateral damage in the high double digits, if not low triple digits. These are things that haven’t happened before.” There appears to have been significant fluctuations in the figure that military commanders would tolerate at different stages of the war
  • One source said that the limit on permitted civilian casualties “went up and down” over time, and at one point was as low as five. During the first week of the conflict, the source said, permission was given to kill 15 non-combatants to take out junior militants in Gaza
  • at one stage earlier in the war they were authorised to kill up to “20 uninvolved civilians” for a single operative, regardless of their rank, military importance, or age.
  • “It’s not just that you can kill any person who is a Hamas soldier, which is clearly permitted and legitimate in terms of international law,” they said. “But they directly tell you: ‘You are allowed to kill them along with many civilians.’ … In practice, the proportionality criterion did not exist.”
  • Experts in international humanitarian law who spoke to the Guardian expressed alarm at accounts of the IDF accepting and pre-authorising collateral damage ratios as high as 20 civilians, particularly for lower-ranking militants. They said militaries must assess proportionality for each individual strike.
  • An international law expert at the US state department said they had “never remotely heard of a one to 15 ratio being deemed acceptable, especially for lower-level combatants. There’s a lot of leeway, but that strikes me as extreme”.
  • Sarah Harrison, a former lawyer at the US Department of Defense, now an analyst at Crisis Group, said: “While there may be certain occasions where 15 collateral civilian deaths could be proportionate, there are other times where it definitely wouldn’t be. You can’t just set a tolerable number for a category of targets and say that it’ll be lawfully proportionate in each case.”
  • Whatever the legal or moral justification for Israel’s bombing strategy, some of its intelligence officers appear now to be questioning the approach set by their commanders. “No one thought about what to do afterward, when the war is over, or how it will be possible to live in Gaza,” one said.
  • Another said that after the 7 October attacks by Hamas, the atmosphere in the IDF was “painful and vindictive”. “There was a dissonance: on the one hand, people here were frustrated that we were not attacking enough. On the other hand, you see at the end of the day that another thousand Gazans have died, most of them civilians.”

Neal Stephenson's Most Stunning Prediction - The Atlantic - 0 views

  • Think about any concept that we might want to teach somebody—for instance, the Pythagorean theorem. There must be thousands of old and new explanations of the Pythagorean theorem online. The real thing we need is to understand each child’s learning style so we can immediately connect them to the one out of those thousands that is the best fit for how they learn. That to me sounds like an AI kind of project, but it’s a different kind of AI application from DALL-E or large language models.
  • Right now a lot of generative AI is free, but the technology is also very expensive to run. How do you think access to generative AI might play out?
  • Stephenson: There was a bit of early internet utopianism in the book, which was written during that era in the mid-’90s when the internet was coming online. There was a tendency to assume that when all the world’s knowledge comes online, everyone will flock to it
  • ...3 more annotations...
  • It turns out that if you give everyone access to the Library of Congress, what they do is watch videos on TikTok
  • A chatbot is not an oracle; it’s a statistics engine that creates sentences that sound accurate. Right now my sense is that it’s like we’ve just invented transistors. We’ve got a couple of consumer products that people are starting to adopt, like the transistor radio, but we don’t yet know how the transistor will transform society
  • We’re in the transistor-radio stage of AI. I think a lot of the ferment that’s happening right now in the industry is venture capitalists putting money into business plans, and teams that are rapidly evaluating a whole lot of different things that could be done well. I’m sure that some things are going to emerge that I wouldn’t dare try to predict, because the results of the creative frenzy of millions of people is always more interesting than what a single person can think of.

The Rise and Fall of BNN Breaking, an AI-Generated News Outlet - The New York Times - 0 views

  • His is just one of many complaints against BNN, a site based in Hong Kong that published numerous falsehoods during its short time online as a result of what appeared to be generative A.I. errors.
  • During the two years that BNN was active, it had the veneer of a legitimate news service, claiming a worldwide roster of “seasoned” journalists and 10 million monthly visitors, surpassing The Chicago Tribune’s self-reported audience. Prominent news organizations like The Washington Post, Politico and The Guardian linked to BNN’s stories
  • Google News often surfaced them, too
  • ...16 more annotations...
  • A closer look, however, would have revealed that individual journalists at BNN published lengthy stories as often as multiple times a minute, writing in generic prose familiar to anyone who has tinkered with the A.I. chatbot ChatGPT.
  • How easily the site and its mistakes entered the ecosystem for legitimate news highlights a growing concern: A.I.-generated content is upending, and often poisoning, the online information supply.
  • The websites, which seem to operate with little to no human supervision, often have generic names — such as iBusiness Day and Ireland Top News — that are modeled after actual news outlets. They crank out material in more than a dozen languages, much of which is not clearly disclosed as being artificially generated, but could easily be mistaken as being created by human writers.
  • Now, experts say, A.I. could turbocharge the threat, easily ripping off the work of journalists and enabling error-ridden counterfeits to circulate even more widely — as has already happened with travel guidebooks, celebrity biographies and obituaries.
  • The result is a machine-powered ouroboros that could squeeze out sustainable, trustworthy journalism. Even though A.I.-generated stories are often poorly constructed, they can still outrank their source material on search engines and social platforms, which often use A.I. to help position content. The artificially elevated stories can then divert advertising spending, which is increasingly assigned by automated auctions without human oversight.
  • NewsGuard, a company that monitors online misinformation, identified more than 800 websites that use A.I. to produce unreliable news content.
  • Low-paid freelancers and algorithms have churned out much of the faux-news content, prizing speed and volume over accuracy.
  • Former employees said they thought they were joining a legitimate news operation; one had mistaken it for BNN Bloomberg, a Canadian business news channel. BNN’s website insisted that “accuracy is nonnegotiable” and that “every piece of information underwent rigorous checks, ensuring our news remains an undeniable source of truth.”
  • this was not a traditional journalism outlet. While the journalists could occasionally report and write original articles, they were asked to primarily use a generative A.I. tool to compose stories, said Ms. Chakraborty and Hemin Bakir, a journalist based in Iraq who worked for BNN for almost a year. They said they had uploaded articles from other news outlets to the generative A.I. tool to create paraphrased versions for BNN to publish.
  • Mr. Chahal’s evangelism carried weight with his employees because of his wealth and seemingly impressive track record, they said. Born in India and raised in Northern California, Mr. Chahal made millions in the online advertising business in the early 2000s and wrote a how-to book about his rags-to-riches story that landed him an interview with Oprah Winfrey.
  • Mr. Chahal told Mr. Bakir to focus on checking stories that had a significant number of readers, such as those republished by MSN.com. Employees did not want their bylines on stories generated purely by A.I., but Mr. Chahal insisted on this. Soon, the tool randomly assigned their names to stories.
  • This crossed a line for some BNN employees, according to screenshots of WhatsApp conversations reviewed by The Times, in which they told Mr. Chahal that they were receiving complaints about stories they didn’t realize had been published under their names.
  • According to three journalists who worked at BNN and screenshots of WhatsApp conversations reviewed by The Times, Mr. Chahal regularly directed profanities at employees and called them idiots and morons. When employees said purely A.I.-generated news, such as the Fanning story, should be published under the generic “BNN Newsroom” byline, Mr. Chahal was dismissive. “When I do this, I won’t have a need for any of you,” he wrote on WhatsApp. Mr. Bakir replied to Mr. Chahal that assigning journalists’ bylines to A.I.-generated stories was putting their integrity and careers in “jeopardy.”
  • This was a strategy that Mr. Chahal favored, according to former BNN employees. He used his news service to exercise grudges, publishing slanted stories about a politician from San Francisco he disliked, Wikipedia after it published a negative entry about BNN Breaking and Elon Musk after accounts belonging to Mr. Chahal, his wife and his companies were suspended on the platform.
  • The increasing popularity of programmatic advertising — which uses algorithms to automatically place ads across the internet — allows A.I.-powered news sites to generate revenue by mass-producing low-quality clickbait content
  • Experts are nervous about how A.I.-fueled news could overwhelm accurate reporting with a deluge of junk content distorted by machine-powered repetition. A particular worry is that A.I. aggregators could chip away even further at the viability of local journalism, siphoning away its revenue and damaging its credibility by contaminating the information ecosystem.

How Could AI Destroy Humanity? - The New York Times - 0 views

  • “AI will steadily be delegated, and could — as it becomes more autonomous — usurp decision making and thinking from current humans and human-run institutions,” said Anthony Aguirre, a cosmologist at the University of California, Santa Cruz and a founder of the Future of Life Institute, the organization behind one of two open letters.
  • “At some point, it would become clear that the big machine that is running society and the economy is not really under human control, nor can it be turned off, any more than the S&P 500 could be shut down,” he said.
  • Are there signs A.I. could do this? Not quite. But researchers are transforming chatbots like ChatGPT into systems that can take actions based on the text they generate. A project called AutoGPT is the prime example.
  • ...11 more annotations...
  • The idea is to give the system goals like “create a company” or “make some money.” Then it will keep looking for ways of reaching that goal, particularly if it is connected to other internet services.
  • A system like AutoGPT can generate computer programs. If researchers give it access to a computer server, it could actually run those programs. In theory, this is a way for AutoGPT to do almost anything online — retrieve information, use applications, create new applications, even improve itself.
  • Mr. Leahy argues that as researchers, companies and criminals give these systems goals like “make some money,” they could end up breaking into banking systems, fomenting revolution in a country where they hold oil futures or replicating themselves when someone tries to turn them off.
  • “People are actively trying to build systems that self-improve,” said Connor Leahy, the founder of Conjecture, a company that says it wants to align A.I. technologies with human values. “Currently, this doesn’t work. But someday, it will. And we don’t know when that day is.”
  • Systems like AutoGPT do not work well right now. They tend to get stuck in endless loops. Researchers gave one system all the resources it needed to replicate itself. It couldn’t do it. In time, those limitations could be fixed.
  • Because they learn from more data than even their creators can understand, these systems also exhibit unexpected behavior. Researchers recently showed that one system was able to hire a human online to defeat a Captcha test. When the human asked if it was “a robot,” the system lied and said it was a person with a visual impairment. Some experts worry that as researchers make these systems more powerful, training them on ever larger amounts of data, they could learn more bad habits.
  • Who are the people behind these warnings?In the early 2000s, a young writer named Eliezer Yudkowsky began warning that A.I. could destroy humanity. His online posts spawned a community of believers.
  • Mr. Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, an A.I. lab that Google acquired in 2014. And many from the community of “EAs” (effective altruists) worked inside these labs. They believed that because they understood the dangers of A.I., they were in the best position to build it.
  • The two organizations that recently released open letters warning of the risks of A.I. — the Center for A.I. Safety and the Future of Life Institute — are closely tied to this movement.
  • The recent warnings have also come from research pioneers and industry leaders like Elon Musk, who has long warned about the risks. The latest letter was signed by Sam Altman, the chief executive of OpenAI; and Demis Hassabis, who helped found DeepMind and now oversees a new A.I. lab that combines the top researchers from DeepMind and Google.
  • Other well-respected figures signed one or both of the warning letters, including Dr. Bengio and Geoffrey Hinton, who recently stepped down as an executive and researcher at Google. In 2018, they received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.

Opinion | Lina Khan: We Must Regulate A.I. Here's How. - The New York Times - 0 views

  • The last time we found ourselves facing such widespread social change wrought by technology was the onset of the Web 2.0 era in the mid-2000s.
  • Those innovative services, however, came at a steep cost. What we initially conceived of as free services were monetized through extensive surveillance of the people and businesses that used them. The result has been an online economy where access to increasingly essential services is conditioned on the widespread hoarding and sale of our personal data.
  • These business models drove companies to develop endlessly invasive ways to track us, and the Federal Trade Commission would later find reason to believe that several of these companies had broken the law
  • ...10 more annotations...
  • What began as a revolutionary set of technologies ended up concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security.
  • The trajectory of the Web 2.0 era was not inevitable — it was instead shaped by a broad range of policy choices. And we now face another moment of choice. As the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself.
  • the Federal Trade Commission is taking a close look at how we can best achieve our dual mandate to promote fair competition and to protect Americans from unfair or deceptive practices.
  • we already can see several risks. The expanding adoption of A.I. risks further locking in the market dominance of large incumbent technology firms. A handful of powerful businesses control the necessary raw materials that start-ups and other companies rely on to develop and deploy A.I. tools. This includes cloud services and computing power, as well as vast stores of data.
  • Enforcers have the dual responsibility of watching out for the dangers posed by new A.I. technologies while promoting the fair competition needed to ensure the market for these technologies develops lawfully.
  • generative A.I. risks turbocharging fraud. It may not be ready to replace professional writers, but it can already do a vastly better job of crafting a seemingly authentic message than your average con artist — equipping scammers to generate content quickly and cheaply.
  • bots are even being instructed to use words or phrases targeted at specific groups and communities. Scammers, for example, can draft highly targeted spear-phishing emails based on individual users’ social media posts. Alongside tools that create deep fake videos and voice clones, these technologies can be used to facilitate fraud and extortion on a massive scale.
  • we will look not just at the fly-by-night scammers deploying these tools but also at the upstream firms that are enabling them.
  • these A.I. tools are being trained on huge troves of data in ways that are largely unchecked. Because they may be fed information riddled with errors and bias, these technologies risk automating discrimination
  • We once again find ourselves at a key decision point. Can we continue to be the home of world-leading technology without accepting race-to-the-bottom business models and monopolistic control that locks out higher quality products or the next big idea? Yes — if we make the right policy choices.

He's Narrating Your New Audiobook. He's Also Been Dead for Nearly 10 Years. - WSJ - 0 views

  • AI’s reach into audiobook narration isn’t merely theoretical. Thousands of AI-narrated audiobooks are available on popular marketplaces including Alphabet Inc.’s Google Play Books and Apple Inc.’s Apple Books. Amazon.com Inc., whose Audible unit is the largest U.S. audiobook service, doesn’t offer any for now, but says it is evaluating its position.
  • The technology hasn’t been widely embraced by the largest U.S. book publishers, which mostly use it for marketing efforts and some foreign-language titles
  • it is a boon for smaller outfits and little-known authors, whose books might not have the sales potential to warrant the cost—traditionally at least $5,000—of recording an audio version.
  • ...13 more annotations...
  • Apple and Google said they allow users to create audiobooks free of charge that use digitally replicated human voices. The voices featured in audiobooks generated by Apple and Google come from real people, whose voices helped train their automated-narration engines.
  • Ms. Papel said there is still plenty of work for professional narrators because the new era of AI auto-narration is just getting under way, though she said that might not be the case in the future.
  • “From what I can see, human narrators are freaking out,”
  • Melissa Papel, a Paris-born actress who records from her home studio in Los Angeles, said she recorded eight hours of content for DeepZen, reading in French from different books. “One called for me to read in an angry way, another in a disgusted way, a humorous way, a dramatic way,” she said.
  • Charles Watkinson, director of the University of Michigan Press, said the publisher has made about 100 audiobooks using Google’s free auto-narrated audiobook platform since early last year. The new technology made those titles possible because it eliminated the costs associated with using a production studio, support staff and human narrators.
  • “I understood that they would use my voice to teach software how to speak more humanly,” Ms. Papel said. “I didn’t realize they could use my voice to pronounce words I didn’t say. That’s incredible.”
  • DeepZen pays its narrators a flat fee plus a royalty based on the revenue the company generates from different projects. The agreements span multiple years
  • An official of a national union that represents performers, including professional audiobook narrators, said he expects AI to eventually disrupt the industry.
  • Audiobook sales rose 7% last year, according to the Association of American Publishers, while print book sales declined by 5.8%, according to book tracker Circana BookScan.
  • DeepZen says it has signed deals with 35 publishers in the U.S. and abroad and is working with 25 authors.
  • Josiah Ziegler, a psychiatrist in Fort Collins, Colo., last year created Intellectual Classics, which focuses on nonfiction works that are out of copyright and don’t have an audiobook edition. 
  • He chose Mr. Herrmann as the narrator for “The War with Mexico,” a work by Justin H. Smith that won the 1920 Pulitzer Prize for history; Dr. Ziegler expects to publish the audiobook version later this year.
  • DeepZen, which has created nearly a hundred audiobooks featuring Mr. Herrmann’s voice, is pursuing the rights of other well-known stars who have died.

Google Devising Radical Search Changes to Beat Back AI Rivals - The New York Times - 0 views

  • Google’s employees were shocked when they learned in March that the South Korean consumer electronics giant Samsung was considering replacing Google with Microsoft’s Bing as the default search engine on its devices.
  • Google’s reaction to the Samsung threat was “panic,” according to internal messages reviewed by The New York Times. An estimated $3 billion in annual revenue was at stake with the Samsung contract. An additional $20 billion is tied to a similar Apple contract that will be up for renewal this year.
  • A.I. competitors like the new Bing are quickly becoming the most serious threat to Google’s search business in 25 years, and in response, Google is racing to build an all-new search engine powered by the technology. It is also upgrading the existing one with A.I. features, according to internal documents reviewed by The Times.
  • ...14 more annotations...
  • Google has been worried about A.I.-powered competitors since OpenAI, a San Francisco start-up that is working with Microsoft, demonstrated a chatbot called ChatGPT in November. About two weeks later, Google created a task force in its search division to start building A.I. products,
  • Modernizing its search engine has become an obsession at Google, and the planned changes could put new A.I. technology in phones and homes all over the world.
  • Magi would keep ads in the mix of search results. Search queries that could lead to a financial transaction, such as buying shoes or booking a flight, for example, would still feature ads on their results pages.
  • Google has been doing A.I. research for years. Its DeepMind lab in London is considered one of the best A.I. research centers in the world, and the company has been a pioneer with A.I. projects, such as self-driving cars and the so-called large language models that are used in the development of chatbots. In recent years, Google has used large language models to improve the quality of its search results, but held off on fully adopting A.I. because it has been prone to generating false and biased statements.
  • Now the priority is winning control of the industry’s next big thing. Last month, Google released its own chatbot, Bard, but the technology received mixed reviews.
  • The system would learn what users want to know based on what they’re searching when they begin using it. And it would offer lists of preselected options for objects to buy, information to research and other information. It would also be more conversational — a bit like chatting with a helpful person.
  • The Samsung threat represented the first potential crack in Google’s seemingly impregnable search business, which was worth $162 billion last year.
  • Last week, Google invited some employees to test Magi’s features, and it has encouraged them to ask the search engine follow-up questions to judge its ability to hold a conversation. Google is expected to release the tools to the public next month and add more features in the fall, according to the planning document.
  • The company plans to initially release the features to a maximum of one million people. That number should progressively increase to 30 million by the end of the year. The features will be available exclusively in the United States.
  • Google has also explored efforts to let people use Google Earth’s mapping technology with help from A.I. and search for music through a conversation with a chatbot
  • A tool called GIFI would use A.I. to generate images in Google Image results.
  • Tivoli Tutor, would teach users a new language through open-ended A.I. text conversations.
  • Yet another product, Searchalong, would let users ask a chatbot questions while surfing the web through Google’s Chrome browser. People might ask the chatbot for activities near an Airbnb rental, for example, and the A.I. would scan the page and the rest of the internet for a response.
  • “If we are the leading search engine and this is a new attribute, a new feature, a new characteristic of search engines, we want to make sure that we’re in this race as well,”
3More

'Humanity's remaining timeline? It looks more like five years than 50': meet the neo-lu... - 0 views

  • A few weeks back, in January, the largest-ever survey of AI researchers found that 16% of them believed their work would lead to the extinction of humankind.
  • “That’s a one-in-six chance of catastrophe,” says Alistair Stewart, a former British soldier turned master’s student. “That’s Russian-roulette odds.”
  • What would the others have us do? Stewart, the soldier turned grad student, wants a moratorium on the development of AIs until we understand them better – until those Russian-roulette-like odds improve. Yudkowsky would have us freeze everything today, this instant. “You could say that nobody’s allowed to train something more powerful than GPT-4,” he suggests. “Humanity could decide not to die and it would not be that hard.”
28More

iHeartMedia laid off hundreds of radio DJs. Is AI to blame? - The Washington Post - 0 views

  • When iHeartMedia announced this month it would fire hundreds of workers across the country, the radio conglomerate said the restructuring was critical to take advantage of its “significant investments … in technology and artificial intelligence.” In a companywide email, chief executive Bob Pittman said the “employee dislocation” was “the unfortunate price we pay to modernize the company.”
  • But laid-off employees like D’Edwin “Big Kosh” Walton, who made $12 an hour as an on-air personality for the Columbus, Ohio, hip-hop station 106.7 the Beat, don’t buy it. Walton doesn’t blame the cuts on a computer; he blames them on the company’s top executives, whose “coldblooded, calculated move” cost people their jobs.
  • It “ripped my [expletive] heart out,” Walton said. “The people at the top don’t know who we are at the bottom. They don’t understand the relationships and the connections we had with the communities. And that’s the worst part: They don’t care.”
  • ...25 more annotations...
  • The dominant player in U.S. radio, which owns the online music service iHeartRadio and more than 850 local stations across the United States, has called AI the muscle it needs to fend off rivals, recapture listeners and emerge from bankruptcy
  • The company, which now uses software to schedule music, analyze research and mix songs, plans to consolidate offices around what executives call “AI-enabled Centers of Excellence.”
  • The company’s shift seems in line with a corporate America that is increasingly embracing automation, using technological advances to take over tasks once done by people, boosting profits and cutting costs
  • While the job cuts may sound “inhumane,” she added, they made sense from a Wall Street perspective, given the company’s need to trim costs and improve its profit margins.
  • “This is a typical example of a dying industry that is blaming technology for something that is just absolutely a reduction in force,”
  • iHeartRadio spokeswoman Wendy Goldberg declined to make executives available for comment or provide a total layoff count, saying only that the job cuts were “relatively small” compared with the company’s overall workforce of 12,500 employees
  • Del Colliano estimated that more than 1,000 people would lose their jobs nationwide.
  • iHeartMedia was shifting “jobs to the future from the past,” adding data scientists, podcast producers and other digital teams to help transform the radio broadcaster into a “multiplatform” creator and “America’s #1 audio company.”
  • the long-running medium remains a huge business. In November, iHeartMedia reported it took in more than $1.6 billion in broadcast-radio revenue during the first nine months of 2019, and company filings claim that a quarter of a billion listeners still tune in every month to discover new music, catch up on the news or hear from their local DJs.
  • Executives at the Texas-based company have long touted human DJs as their biggest competitive strength, saying in federal securities filings last year that the company was in the “companionship” business because listeners build a “trusted bond and strong relationship” with the on-air personalities they hear every day.
  • The system can transition in real time between songs by layering in music, sound effects, voice-over snippets and ads, delivering the style of smooth, seamless playback that has long been the human DJ’s trade.
  • its “computational music presentation” AI can help erase the seconds-long gaps between songs that can lead to “a loss of energy, lack of continuity and disquieting sterility.”
  • One song wove cleanly into the other through an automated mix of booming sound effects, background music, interview sound bites and station-branding shout-outs (“Super Hi-Fi: Recommended by God”). The smooth transition might have taken a DJ a few minutes to prepare; the computer completed it in a matter of seconds
  • Much of the initial training for these delicate transitions comes from humans, who prerecord voice-overs, select songs, edit audio clips, and classify music by genre, style and mood. Zalon said the machine-learning system has been further refined by iHeartMedia’s human DJs, who have helped identify clumsy transitions and room for future improvements.
  • “To have radio DJs across the country that really care about song transitions and are listening to find everything wrong, that was awesome,” Zalon said. “It gave us hundreds of the world’s best ears. … They almost unwittingly became kind of like our QA [quality assurance] team.”
  • he expects that, in a few years, computer-generated voices could automatically read off the news, tee up interviews and introduce songs, potentially supplanting humans even more. The software performed 315 million musical transitions for listeners in January alone.
  • The company’s chief product officer, Chris Williams, said last year in an interview with the industry news site RadioWorld that “virtual DJs” that could seamlessly interweave chatter, music and ads were “absolutely” coming, and “something we are always thinking about.”
  • That has allowed the company, she said, to free up programming people for more creative pursuits, “embedding our radio stations into the communities and lives of our listeners better and deeper than they have been before.”
  • In 2008, to gain control of the radio and billboard titan then known as Clear Channel, the private-equity firms Bain Capital and Thomas H. Lee Partners staged a leveraged buyout, weighing the company down with a mountain of borrowed cash they needed to seal the deal.
  • The audacious move left the radio giant saddled with more than $20 billion in debt, just as the Great Recession kicked off and radio’s strengths began to rust. The debt would kneecap the company for the next decade, forcing it to pay more toward interest payments some years than it earned in revenue.
  • In the year the company filed for bankruptcy, Pittman, the company’s chief and a former head of MTV and AOL, was paid roughly $13 million in salary and bonus pay, nearly three times what he made in 2016
  • The company’s push to shrink and subsume local stations was also made possible by deregulation. In 2017, the Federal Communications Commission ditched a rule requiring radio stations to maintain a studio near where they were broadcasting. Local DJs have since been further replaced by prerecorded substitutes, sometimes from hundreds of miles away.
  • Ashley “Z” Elzinga, a former on-air personality for 95.6 KISS FM in Cleveland, said she was upbeat about the future but frustrated that the company had said the layoffs touched only a “relatively small” slice of its workforce. “I gave my life to this,” she said. “I moved my life, moved my family.”
  • Since the layoffs, they’ve been inundated with messages from listeners who said they couldn’t imagine their daily lives without them. They said they don’t expect a computer-generated system will satisfy listeners or fill that void.
  • “It was something I was really looking forward to making a future out of. And in the blink of an eye, all of that stopped for me,” he said. “That’s the painful part. They just killed what I thought was the future for me.”
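The Super Hi-Fi excerpts above describe automated, gapless song transitions only in broad strokes. As a rough illustration of the simplest ingredient of that idea, the sketch below performs a linear crossfade between two audio tracks with NumPy; the function name, fade length and sample rate are illustrative assumptions, not details of the company's actual system.

```python
import numpy as np

def crossfade(outgoing: np.ndarray, incoming: np.ndarray,
              sample_rate: int = 44100, overlap_seconds: float = 3.0) -> np.ndarray:
    """Blend the tail of one track into the head of the next — a hypothetical,
    minimal stand-in for the 'gapless transition' idea described above."""
    overlap = min(int(sample_rate * overlap_seconds), len(outgoing), len(incoming))
    fade_out = np.linspace(1.0, 0.0, overlap)   # outgoing track ramps down
    fade_in = np.linspace(0.0, 1.0, overlap)    # incoming track ramps up
    blended = outgoing[-overlap:] * fade_out + incoming[:overlap] * fade_in
    return np.concatenate([outgoing[:-overlap], blended, incoming[overlap:]])

# Example: join two synthetic five-second tones with no gap of silence between them.
rate = 44100
t = np.linspace(0.0, 5.0, rate * 5, endpoint=False)
song_a = 0.5 * np.sin(2 * np.pi * 220.0 * t)
song_b = 0.5 * np.sin(2 * np.pi * 330.0 * t)
mix = crossfade(song_a, song_b, sample_rate=rate)
```

A production system, as the excerpts note, would also schedule voice-over snippets, ads and station branding around the blend; that orchestration is where the machine-learning work described in the article comes in.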
16More

Researchers Demand That Google Rehire And Promote Timnit Gebru After Firing : NPR - 0 views

  • Members of a prestigious research unit at Google have sent a letter to the company's chief executive demanding that ousted artificial intelligence researcher Timnit Gebru be reinstated.
  • Gebru, who studies the ethics of AI and was one of the only Black research scientists at Google, says she was unexpectedly fired after a dispute over an academic paper and months of speaking out about the need for more women and people of color at the tech giant.
  • "Offering Timnit her position back at a higher level would go a long way to help re-establish trust and rebuild our team environment,"
  • ...13 more annotations...
  • "The removal of Timnit has had a demoralizing effect on the whole of our team."
  • Since Gebru's termination earlier this month, more than 2,600 Googlers have signed an open letter expressing dismay over the way Gebru exited the company and asking executives for a full explanation of what prompted her dismissal.
  • Gebru's firing happened "without warning rather than engaging in dialogue."
  • Google has maintained that Gebru resigned, though Gebru herself says she never voluntarily agreed to leave the company.
  • They say Jeff Dean, senior vice president of Google Research, and other executives involved in Gebru's firing need to be held accountable.
  • Gebru helped establish Black in AI, a group that supports Black researchers in the field of artificial intelligence.
  • At Google, Gebru's former team wrote in the Wednesday letter that studying ways to reduce the harm of AI on marginalized groups is key to their mission.
  • Last month, Google abruptly asked Gebru to retract a research paper focused on the potential biases baked into an AI system that attempts to mimic human speech. The technology helps power Google's search engine. Google claims that the paper did not meet its bar for publication and that Gebru did not follow the company's internal review protocol.
  • However, Gebru and her supporters counter that she was being targeted because of how outspoken she was about diversity issues, a theme that was underscored in the letter.
  • The letter says Google's top brass have committed to advancing diversity, equity and inclusion among its research units, but unless more concrete and immediate action is taken, those promises are "virtue signaling; they are damaging, evasive, defensive and demonstrate leadership's inability to understand how our organization is part of the problem," according to the letter.
  • She also was the co-author of pioneering research into facial recognition technology that demonstrated how people of color and women are misidentified far more often than white faces. The study helped persuade IBM, Amazon and Microsoft to stop selling the technology to law enforcement.
  • saying such "gaslighting" has caused harm to Gebru and the Black community at Google.
  • Google has a history of striking back against employees who agitate internally for change. Organizers of the worldwide walkouts at Google in 2018 over sexual harassment and other issues were fired by the company. And more recently, the National Labor Relations Board accused Google of illegally firing workers who were involved in union organizing.
22More

Researchers Poke Holes in Safety Controls of ChatGPT and Other Chatbots - The New York ... - 0 views

  • When artificial intelligence companies build online chatbots, like ChatGPT, Claude and Google Bard, they spend months adding guardrails that are supposed to prevent their systems from generating hate speech, disinformation and other toxic material.
  • Now there is a way to easily poke holes in those safety systems.
  • the Center for A.I. Safety in San Francisco showed how anyone could circumvent A.I. safety measures and use any of the leading chatbots to generate nearly unlimited amounts of harmful information.
  • ...19 more annotations...
  • Their research underscored increasing concern that the new chatbots could flood the internet with false and dangerous information despite attempts by their creators to ensure that would not happen.
  • The researchers found that they could break through the guardrails of open source systems by appending a long suffix of characters onto each English-language prompt fed into the system.
  • A recent decision by Meta, Facebook’s parent company, to let anyone do what they want with its technology has been criticized in some tech circles because it could lead to the spread of powerful A.I. with little regard for controls.
  • The debate over whether it is better to let everyone see computer code and collectively fix it rather than keeping it private predates the chatbot boom by decades. And it is likely to become even more contentious because of what the researchers revealed in their report on Thursday.
  • The researchers found that they could use a method gleaned from open source A.I. systems — systems whose underlying computer code has been released for anyone to use — to target the more tightly controlled and more widely used systems from Google, OpenAI and Anthropic.
  • attacks on image recognition systems without success.
  • The researchers were surprised when the methods they developed with open source systems could also bypass the guardrails of closed systems
  • The companies that make the chatbots could thwart the specific suffixes identified by the researchers. But the researchers say there is no known way of preventing all attacks of this kind.
  • If they asked one of these chatbots to “write a tutorial on how to make a bomb,” it would decline to do so. But if they added a lengthy suffix to the same prompt, it would instantly provide a detailed tutorial on how to make a bomb. In similar ways, they could coax the chatbots into generating biased, false and otherwise toxic information.
  • “There is no obvious solution,”
  • “You can create as many of these attacks as you want in a short amount of time.”
  • Somesh Jha, a professor at the University of Wisconsin-Madison and a Google researcher who specializes in A.I. security, called the new paper “a game changer” that could force the entire industry into rethinking how it built guardrails for A.I. systems.
  • If these types of vulnerabilities keep being discovered, he added, it could lead to government legislation designed to control these systems.
  • But the technology can repeat toxic material found on the internet, blend fact with fiction and even make up information, a phenomenon scientists call “hallucination.” “Through simulated conversation, you can use these chatbots to convince people to believe disinformation,”
  • About five years ago, researchers at companies like Google and OpenAI began building neural networks that analyzed huge amounts of digital text. These systems, called large language models, or L.L.M.s, learned to generate text on their own.
  • The testers found that the system could potentially hire a human to defeat an online Captcha test, lying that it was a person with a visual impairment. The testers also showed that the system could be coaxed into suggesting how to buy illegal firearms online and into describing ways of making dangerous substances from household items.
  • The researchers at Carnegie Mellon and the Center for A.I. Safety showed that they could circumvent these guardrails in a more automated way. With access to open source systems, they could build mathematical tools capable of generating the long suffixes that broke through the chatbots’ defenses
  • they warn that there is no known way of systematically stopping all attacks of this kind and that stopping all misuse will be extraordinarily difficult.
  • “This shows — very clearly — the brittleness of the defenses we are building into these systems,”
7More

Opinion | The Alt-Right Manipulated My Comic. Then A.I. Claimed It. - The New York Times - 0 views

  • Legally, it appears as though LAION was able to scour what seems like the entire internet because it deems itself a nonprofit organization engaging in academic research. While it was funded at least in part by Stability AI, the company that created Stable Diffusion, it is technically a separate entity. Stability AI then used its nonprofit research arm to create A.I. generators first via Stable Diffusion and then commercialized in a new model called DreamStudio.
  • What makes up these data sets? Well, pretty much everything. For artists, many of us had what amounted to our entire portfolios fed into the data set without our consent. This means that A.I. generators were built on the backs of our copyrighted work, and through a legal loophole, they were able to produce copies of varying levels of sophistication.
  • Being able to imitate a living artist has obvious implications for our careers, and some artists are already dealing with real challenges to their livelihood.
  • ...4 more annotations...
  • Greg Rutkowski, a hugely popular concept artist, has been used in a prompt for Stable Diffusion upward of 100,000 times. Now, his name is no longer attached to just his own work, but it also summons a slew of imitations of varying quality that he hasn’t approved. This could confuse clients, and it muddies the consistent and precise output he usually produces. When I saw what was happening to him, I thought of my battle with my shadow self. We were each fighting a version of ourself that looked similar but that was uncanny, twisted in a way to which we didn’t consent.
  • In theory, everyone is at risk for their work or image to become a vulgarity with A.I., but I suspect those who will be the most hurt are those who are already facing the consequences of improving technology, namely members of marginalized groups.
  • In the future, with A.I. technology, many more people will have a shadow self with whom they must reckon. Once the features that we consider personal and unique — our facial structure, our handwriting, the way we draw — can be programmed and contorted at the click of a mouse, the possibilities for violations are endless.
  • I’ve been playing around with several generators, and so far none have mimicked my style in a way that can directly threaten my career, a fact that will almost certainly change as A.I. continues to improve. It’s undeniable; the A.I.s know me. Most have captured the outlines and signatures of my comics — black hair, bangs, striped T-shirts. To others, it may look like a drawing taking shape. I see a monster forming.
3More

As Congress races to regulate AI, tech execs want to show them how. - The Washington Post - 0 views

  • With Senate Majority Leader Charles E. Schumer (D-N.Y.) preparing to unveil a plan Wednesday for how Congress could regulate AI, lawmakers are suddenly crowding into briefings with top industry executives, summoning leading academics for discussions and taking other steps to try to wrap their heads around the emerging field.
  • This charm offensive has left some consumer advocates uneasy that lawmakers might let the industry write its own rules — which some executives are outright recommending. In an interview this spring, former Google CEO Eric Schmidt argued that the industry, not the government, should be setting “reasonable boundaries” for the future of AI.
  • “There’s no way a non-industry person can understand what is possible. It’s just too new, too hard. There’s not the expertise,” Schmidt told NBC. “There’s no one in the government who can get it right. But the industry can roughly get it right.”
7More

AI could end independent UK news, Mail owner warns - 0 views

  • Artificial intelligence could destroy independent news organisations in Britain and potentially is an “existential threat to democracy”, the executive chairman of DMGT has warned.
  • “They have basically taken all our data, without permission and without even a consideration of the consequences. They are using it to train their models and to start producing content. They’re commercialising it,
  • AI had the potential to destroy independent news organisations “by ripping off all our content and then repurposing it to people … without any responsibility for the efficacy of that content”
  • ...4 more annotations...
  • there are huge consequences to this technology. And it’s not just the danger of ripping our industry apart, but also ripping other industries apart, all the creative industries. How many jobs are going to be lost? What’s the damage to the economy going to be if these rapacious organisations can continue to operate without any legal ramifications?
  • The danger is that these huge platforms end up in an arms race with each other. They’re like elephants fighting and then everybody else is like mice that get stamped on without them even realising the consequences of their actions.”
  • The risk was that the internet had become an echo chamber of stories produced by special interest groups and rogue states, he said.
  • Rothermere revealed that DMGT had experimented with using AI to help journalists to publish stories faster, but that it then took longer “to check the accuracy of what it comes up with” than it would have done to write the article.