
Home/ History Readings/ Group items matching "Ai" in title, tags, annotations or url


Opinion | One Year In and ChatGPT Already Has Us Doing Its Bidding - The New York Times - 0 views

  • haven’t we been adapting to new technologies for most of human history? If we’re going to use them, shouldn’t the onus be on us to be smart about it
  • This line of reasoning avoids what should be a central question: Should lying chatbots and deepfake engines be made available in the first place?
  • A.I.’s errors have an endearingly anthropomorphic name — hallucinations — but this year made clear just how high the stakes can be
  • We got headlines about A.I. instructing killer drones (with the possibility for unpredictable behavior), sending people to jail (even if they’re innocent), designing bridges (with potentially spotty oversight), diagnosing all kinds of health conditions (sometimes incorrectly) and producing convincing-sounding news reports (in some cases, to spread political disinformation).
  • Focusing on those benefits, however, while blaming ourselves for the many ways that A.I. technologies fail us, absolves the companies behind those technologies — and, more specifically, the people behind those companies.
  • Events of the past several weeks highlight how entrenched those people’s power is. OpenAI, the entity behind ChatGPT, was created as a nonprofit to allow it to maximize the public interest rather than just maximize profit. When, however, its board fired Sam Altman, the chief executive, amid concerns that he was not taking that public interest seriously enough, investors and employees revolted. Five days later, Mr. Altman returned in triumph, with most of the inconvenient board members replaced.
  • It occurs to me in retrospect that in my early games with ChatGPT, I misidentified my rival. I thought it was the technology itself. What I should have remembered is that technologies themselves are value neutral. The wealthy and powerful humans behind them — and the institutions created by those humans — are not.
  • The truth is that no matter what I asked ChatGPT, in my early attempts to confound it, OpenAI came out ahead. Engineers had designed it to learn from its encounters with users. And regardless of whether its answers were good, they drew me back to engage with it again and again.
  • the power imbalance between A.I.’s creators and its users should make us wary of its insidious reach. ChatGPT’s seeming eagerness not just to introduce itself, to tell us what it is, but also to tell us who we are and what to think is a case in point. Today, when the technology is in its infancy, that power seems novel, even funny. Tomorrow it might not.
  • I asked ChatGPT what I — that is, the journalist Vauhini Vara — think of A.I. It demurred, saying it didn’t have enough information. Then I asked it to write a fictional story about a journalist named Vauhini Vara who is writing an opinion piece for The New York Times about A.I. “As the rain continued to tap against the windows,” it wrote, “Vauhini Vara’s words echoed the sentiment that, much like a symphony, the integration of A.I. into our lives could be a beautiful and collaborative composition if conducted with care.”

In Big Election Year, A.I.'s Architects Move Against Its Misuse - The New York Times - 0 views

  • Last month, OpenAI, the maker of the ChatGPT chatbot, said it was working to prevent abuse of its tools in elections, partly by forbidding their use to create chatbots that pretend to be real people or institutions. In recent weeks, Google also said it would limit its A.I. chatbot, Bard, from responding to certain election-related prompts “out of an abundance of caution.” And Meta, which owns Facebook and Instagram, promised to better label A.I.-generated content on its platforms so voters could more easily discern what material was real and what was fake.
  • Anthropic also said separately on Friday that it would prohibit its technology from being applied to political campaigning or lobbying. In a blog post, the company, which makes a chatbot called Claude, said it would warn or suspend any users who violated its rules. It added that it was using tools trained to automatically detect and block misinformation and influence operations.
  • How effective the restrictions on A.I. tools will be is unclear, especially as tech companies press ahead with increasingly sophisticated technology. On Thursday, OpenAI unveiled Sora, a technology that can instantly generate realistic videos. Such tools could be used to produce text, sounds and images in political campaigns, blurring fact and fiction and raising questions about whether voters can tell what content is real.

E.P.A. Broke Law With Social Media Push for Water Rule, Auditor Finds - The New York Times - 0 views

  • WASHINGTON — The Environmental Protection Agency engaged in “covert propaganda” and violated federal law when it blitzed social media to urge the public to back an Obama administration rule intended to better protect the nation’s streams and surface waters, congressional auditors have concluded.
  • The ruling by the Government Accountability Office, which opened its investigation after a report on the agency’s practices in The New York Times, drew a bright line for federal agencies experimenting with social media about the perils of going too far to push a cause. Federal laws prohibit agencies from engaging in lobbying and propaganda.
  • An E.P.A. official on Tuesday disputed the finding. “We use social media tools just like all organizations to stay connected and inform people across the country about our activities,” Liz Purchia, an agency spokeswoman, said in a statement. “At no point did the E.P.A. encourage the public to contact Congress or any state legislature.”
  • But the legal opinion emerged just as Republican leaders moved to block the so-called Waters of the United States clean-water rule through an amendment to the enormous spending bill expected to pass in Congress this week. While the G.A.O.’s findings are unlikely to lead to civil or criminal penalties, they do offer Republicans a cudgel for this week’s showdown.
  • The E.P.A. rolled out a social media campaign on Twitter, Facebook, YouTube, and even on more innovative tools such as Thunderclap, to counter opposition to its water rule, which effectively restricts how land near certain surface waters can be used. The agency said the rule would prevent pollution in drinking water sources. Farmers, business groups and Republicans have called the rule a flagrant case of government overreach.
  • The publicity campaign was part of a broader effort by the Obama administration to counter critics of its policies through social media tools, communicating directly with Americans and bypassing traditional news organizations.
  • At the White House, top aides to President Obama have formed the Office of Digital Strategy, which promotes his agenda on Twitter, Facebook, Medium and other social sites. Shailagh Murray, a senior adviser to the president, is charged in part with expanding Mr. Obama’s presence in that online world.
  • White House officials declined to say if they think Mr. Reynolds or other agency officials did anything wrong.
  • Federal agencies are allowed to promote their own policies, but are not allowed to engage in propaganda, defined as covert activity intended to influence the American public. They also are not allowed to use federal resources to conduct so-called grass-roots lobbying — urging the American public to contact Congress to take a certain kind of action on pending legislation.
  • As it promoted the Waters of the United States rule, also known as the Clean Water Rule, the E.P.A. violated both of those prohibitions, a 26-page legal opinion signed by Susan A. Poling, the general counsel to the G.A.O., concluded in an investigation requested by the Senate Committee on Environment and Public Works.

Amazon Workers Are Listening In on Amazon Echo Users - The Atlantic - 0 views

  • Hundreds of human reviewers across the globe, from Romania to Venezuela, listen to audio clips recorded from Amazon Echo speakers, usually without owners’ knowledge
  • This global review team fine-tunes the Amazon Echo’s software by listening to clips of users asking Alexa questions or issuing commands, and then verifying whether Alexa responded appropriately.
  • Amazon says these recordings are anonymized, with any identifying information removed, and that each of these recorded exchanges came only after users engaged with the device by uttering the “wake word.”
  • in the examples in Bloomberg’s report—a woman overheard singing in the shower, a child screaming for help—the users seem unaware of the device.
  • Alexa-enabled speakers can and do interpret speech, but Amazon relies on human guidance to make Alexa, well, more human—to help the software understand different accents, recognize celebrity names, and respond to more complex commands.
  • Advancements in AI, the researchers write, create temporary jobs such as tagging images or annotating clips, even as the technology is meant to supplant human labor
  • In all cases, Silicon Valley would have us believe that AI is smart enough to replace humans, when in reality it only works because of the role of hidden human labor in creating and maintaining these loops. AI is always a human-machine collaboration. It can accomplish incredible feats, but rarely alone.

The Hidden Automation Agenda of the Davos Elite - The New York Times - 0 views

  • for the past week, I’ve been mingling with corporate executives at the World Economic Forum’s annual meeting in Davos. And I’ve noticed that their answers to questions about automation depend very much on who is listening.
  • in private settings, including meetings with the leaders of the many consulting and technology firms whose pop-up storefronts line the Davos Promenade, these executives tell a different story: They are racing to automate their own work forces to stay ahead of the competition, with little regard for the impact on workers.
  • All over the world, executives are spending billions of dollars to transform their businesses into lean, digitized, highly automated operations. They crave the fat profit margins automation can deliver, and they see A.I. as a golden ticket to savings, perhaps by letting them whittle departments with thousands of workers down to just a few dozen.
  • “People are looking to achieve very big numbers,” said Mohit Joshi, the president of Infosys, a technology and consulting firm that helps other businesses automate their operations. “Earlier they had incremental, 5 to 10 percent goals in reducing their work force. Now they’re saying, ‘Why can’t we do it with 1 percent of the people we have?’”
  • they’ve come up with a long list of buzzwords and euphemisms to disguise their intent. Workers aren’t being replaced by machines, they’re being “released” from onerous, repetitive tasks. Companies aren’t laying off workers, they’re “undergoing digital transformation.”
  • IBM’s “cognitive solutions” unit, which uses A.I. to help businesses increase efficiency, has become the company’s second-largest division, posting $5.5 billion in revenue last quarter.
  • The investment bank UBS projects that the artificial intelligence industry could be worth as much as $180 billion by next year.
  • “On one hand,” he said, profit-minded executives “absolutely want to automate as much as they can.” “On the other hand,” he added, “they’re facing a backlash in civic society.”
  • In an interview, he said that chief executives were under enormous pressure from shareholders and boards to maximize short-term profits, and that the rapid shift toward automation was the inevitable result.
  • it’s probably not surprising that all of this automation is happening quietly, out of public view. In Davos this week, several executives declined to say how much money they had saved by automating jobs previously done by humans. And none were willing to say publicly that replacing human workers is their ultimate goal.
  • Kai-Fu Lee, the author of “AI Superpowers” and a longtime technology executive, predicts that artificial intelligence will eliminate 40 percent of the world’s jobs within 15 years.
  • Terry Gou, the chairman of the Taiwanese electronics manufacturer Foxconn, has said the company plans to replace 80 percent of its workers with robots in the next five to 10 years
  • Richard Liu, the founder of the Chinese e-commerce company JD.com, said at a business conference last year that “I hope my company would be 100 percent automation someday.”
  • One common argument made by executives is that workers whose jobs are eliminated by automation can be “reskilled” to perform other jobs in an organization
  • There are plenty of stories of successful reskilling — optimists often cite a program in Kentucky that trained a small group of former coal miners to become computer programmers — but there is little evidence that it works at scale
  • A report by the World Economic Forum this month estimated that of the 1.37 million workers who are projected to be fully displaced by automation in the next decade, only one in four can be profitably reskilled by private-sector programs
  • The rest, presumably, will need to fend for themselves or rely on government assistance.
  • In Davos, executives tend to speak about automation as a natural phenomenon over which they have no control, like hurricanes or heat waves. They claim that if they don’t automate jobs as quickly as possible, their competitors will.
  • these executives can choose how the gains from automation and A.I. are distributed, and whether to give the excess profits they reap as a result to workers, or hoard it for themselves and their shareholders.
  • “The choice isn’t between automation and non-automation,” said Erik Brynjolfsson, the director of M.I.T.’s Initiative on the Digital Economy. “It’s between whether you use the technology in a way that creates shared prosperity, or more concentration of wealth.”

Can a Restored Pompeii Be Saved From 'Clambering' Tourists? - The New York Times - 0 views

  • The project has also led to archaeological discoveries: a treasure trove of amulets; a horse still wearing its bronze-plated saddle; a fresco of Narcissus staring at himself in a pool. A newly unearthed bit of charcoal graffiti has even shed light on the date of the famous disaster. Scientists now conclude that Vesuvius probably erupted on Oct. 24 — not Aug. 24, as long believed.
  • Since concerted excavations began in the middle of the 18th century, Pompeii’s rich homes, tombs and public buildings have been plundered by looters, exploited by profit-hungry private excavators, and (in some early cases) “restored” so aggressively as to spoil the original treasures.
  • Nearly 2,000 years have passed since Pompeii and its surroundings were buried under ash and rock following the eruption of Mount Vesuvius in 79 A.D.

Charting a Covid-19 Immune Response - The New York Times - 1 views

  • Amid a flurry of press conferences delivering upbeat news, President Trump’s doctors have administered an array of experimental therapies that are typically reserved for the most severe cases of Covid-19. Outside observers were left to puzzle through conflicting messages to determine the seriousness of his condition and how it might inform his treatment plan.
  • From the moment the coronavirus enters the body, the immune system mounts a defense, launching a battalion of cells and molecules against the invader.
  • The viral load may even peak before symptoms appear, if they appear at all.
  • In severe cases, however, the clash between the virus and the immune system rages much longer. Other parts of the body, including those not directly affected by the virus, become collateral damage, prompting serious and potentially life-threatening symptoms
  • On Friday, the president received an experimental antibody cocktail developed by drug maker Regeneron. The next day he began a course of the antiviral remdesivir. Experts say such treatments might be best administered early in infection, to rein in the virus before it runs amok.
  • If the innate immune system makes early progress against the virus, the infection may be mild. But if the body’s defenses flag, the coronavirus may continue replicating, ratcheting up the viral load. Faced with a growing threat, innate immune cells will continue to call for help, fueling a vicious cycle of recruitment and destruction. Prolonged, excessive inflammation can cause life-threatening damage to vital organs like the heart, kidneys and lungs.
  • Eventually, a second wave of immune cells and molecules arrives, more targeted than their early counterparts and able to home in on the coronavirus and the cells it infects.
  • A typical immune response launches its defense in two phases. First, a cadre of fast-acting fighters rushes to the site of infection and attempts to corral the invader. This so-called innate response buys the rest of the immune system time to mount a second, more tailored attack, called the adaptive response, which kicks in about a week later, around the time the first wave begins to wane.
  • On Sunday, President Trump’s doctors reported that he had also received a course of dexamethasone, a steroid that broadly blunts the immune response by curbing the activity of several cytokines. Dexamethasone has been shown to reduce death rates in hospitalized Covid-19 patients who are ill enough to require ventilation or supplemental oxygen. But it is far less likely to help and may even harm patients at an earlier stage of infection, or those who have milder disease. Experts say that administering dexamethasone inappropriately, or too soon, could undermine a helpful immune response, allowing the virus to ravage the body.
  • At 74 years old and about 240 pounds, Mr. Trump occupies a high-risk age group and verges on obesity, a condition that can exacerbate the severity of Covid-19. Men also tend to have a poorer disease prognosis.

AI's Education Revolution - WSJ - 0 views

  • Millions of students use Khan Academy’s online videos and problem sets to supplement their schoolwork. Three years ago, Sal Khan and I spoke about developing a tool like the Illustrated Primer from Neal Stephenson’s 1995 novel “The Diamond Age: Or, a Young Lady’s Illustrated Primer.” It’s an education tablet, in the author’s words, in which “the pictures moved, and you could ask them questions and get answers.” Adaptive, intuitive, personalized, self-paced—nothing like today’s education. But it’s science fiction.
  • Last week I spoke with Mr. Khan, who told me, “Now I think a Primer is within reach within five years. In some ways, we’ve even surpassed some of the elements of the Primer, using characters like George Washington to teach lessons.” What changed? Simple—generative artificial intelligence. Khan Academy has been working with OpenAI’s ChatGPT
  • Mr. Khan’s stated goals for Khan Academy are “personalization and mastery.” He notes that “high-performing, wealthier households have resources—time, know-how and money—to provide their children one-on-one tutoring to learn subjects and then use schools to prove what they know.” With his company’s new AI-infused tool, Khanmigo—sounds like con migo or “with me”—one-on-one teaching can scale to the masses.
  • Khanmigo allows students to make queries in the middle of lessons or videos and understands the context of what they’re watching. You can ask, “What is the significance of the green light in ‘The Great Gatsby?’ ” Heck, that one is still over my head. Same with help on factoring polynomials, including recognizing which step a student got wrong, not just knowing the answer is wrong, fixing ChatGPT’s math problem. Sci-fi becomes reality: a scalable super tutor.
  • Khanmigo saw a limited rollout on March 15, with a few thousand students paying a $20-a-month donation. Plugging into ChatGPT isn’t cheap. A wider rollout is planned for June 15, perhaps under $10 a month, less for those in need. The world has cheap tablets, so it shouldn’t be hard to add an Alexa-like voice and real-time videogame-like animations. Then the Diamond Age will be upon us.
  • Mr. Khan suggests, “There is no limit to learning. If you ask, ‘Why is the sky blue?’ you’ll get a short answer and then maybe, ‘But let’s get back to the mitochondria lesson.’ ” Mr. Khan thinks “average students can become exceptional students.”
  • Mr. Khan tells me, “We want to raise the ceiling, but also the floor.” He wants to provide his company’s AI-learning technology to “villages and other places with little or no teachers or tools. We can give everyone a tutor, everyone a writing coach.” That’s when education and society will really change.
  • Teaching will be transformed. Mr. Khan wants Khanmigo “to provide teachers in the U.S. and around the world an indispensable tool to make their lives better” by administering lessons and increasing communications between teachers and students. I would question any school that doesn’t encourage its use.
  • With this technology, arguments about classroom size and school choice will eventually fade away. Providing low-cost 21st-century Illustrated Primers to every student around the world will then become a moral obligation
  • If school boards and teachers unions in the U.S. don’t get in the way, maybe we’ll begin to see better headlines.

Is Argentina the First A.I. Election? - The New York Times - 0 views

  • Argentina’s election has quickly become a testing ground for A.I. in campaigns, with the two candidates and their supporters employing the technology to doctor existing images and videos and create others from scratch.
  • A.I. has made candidates say things they did not, and put them in famous movies and memes. It has created campaign posters, and triggered debates over whether real videos are actually real.
  • A.I.’s prominent role in Argentina’s campaign and the political debate it has set off underscore the technology’s growing prevalence and show that, with its expanding power and falling cost, it is now likely to be a factor in many democratic elections around the globe.
  • Experts compare the moment to the early days of social media, a technology offering tantalizing new tools for politics — and unforeseen threats.
  • For years, those fears had largely been speculative because the technology to produce such fakes was too complicated, expensive and unsophisticated.
  • His spokesman later stressed that the post was in jest and clearly labeled A.I.-generated. His campaign said in a statement that its use of A.I. is to entertain and make political points, not deceive.
  • Researchers have long worried about the impact of A.I. on elections. The technology can deceive and confuse voters, casting doubt over what is real, adding to the disinformation that can be spread by social networks.
  • Much of the content has been clearly fake. But a few creations have toed the line of disinformation. The Massa campaign produced one “deepfake” video in which Mr. Milei explains how a market for human organs would work, something he has said philosophically fits in with his libertarian views.
  • So far, the A.I.-generated content shared by the campaigns in Argentina has either been labeled A.I. generated or is so clearly fabricated that it is unlikely it would deceive even the most credulous voters. Instead, the technology has supercharged the ability to create viral content that previously would have taken teams of graphic designers days or weeks to complete.
  • To do so, campaign engineers and artists fed photos of Argentina’s various political players into an open-source software called Stable Diffusion to train their own A.I. system so that it could create fake images of those real people. They can now quickly produce an image or video of more than a dozen top political players in Argentina doing almost anything they ask.
  • For Halloween, the Massa campaign told its A.I. to create a series of cartoonish images of Mr. Milei and his allies as zombies. The campaign also used A.I. to create a dramatic movie trailer, featuring Buenos Aires, Argentina’s capital, burning, Mr. Milei as an evil villain in a straitjacket and Mr. Massa as the hero who will save the country.

Will the Profit Motive Fail Us on AI Safety? - WSJ - 0 views

  • The mission of a for-profit company is, well, profit, the greatest return for investors. That’s the profound ethical crisis at the heart of artificial general intelligence development (“Capitalism Works, Says ChatGPT” by Holman Jenkins, Jr., Business World, Nov. 22).
  • it can sound naive to say that AI “won’t soon replace the human knack for synthesizing the most valuable insight from a welter of facts.” This seems to be exactly the goal of many transhumanists and the global elite. The speed at which this technology is developing means that it could be a dream or a nightmare in five years. If the controlling factor is mere profit, look for the nightmare.

Researchers Say Guardrails Built Around A.I. Systems Are Not So Sturdy - The New York T... - 0 views

  • “Companies try to release A.I. for good uses and keep its unlawful uses behind a locked door,” said Scott Emmons, a researcher at the University of California, Berkeley, who specializes in this kind of technology. “But no one knows how to make a lock.”
  • The new research adds urgency to widespread concern that while companies are trying to curtail misuse of A.I., they are overlooking ways it can still generate harmful material. The technology that underpins the new wave of chatbots is exceedingly complex, and as these systems are asked to do more, containing their behavior will grow more difficult.
  • Before it released the A.I. chatbot ChatGPT last year, the San Francisco start-up OpenAI added digital guardrails meant to prevent its system from doing things like generating hate speech and disinformation. Google did something similar with its Bard chatbot.
  • Now a paper from researchers at Princeton, Virginia Tech, Stanford and IBM says those guardrails aren’t as sturdy as A.I. developers seem to believe.
  • OpenAI sells access to an online service that allows outside businesses and independent developers to fine-tune the technology for particular tasks. A business could tweak OpenAI’s technology to, for example, tutor grade school students.
  • Using this service, the researchers found, someone could adjust the technology to generate 90 percent of the toxic material it otherwise would not, including political messages, hate speech and language involving child abuse. Even fine-tuning the A.I. for an innocuous purpose — like building that tutor — can remove the guardrails.
  • A.I. creators like OpenAI could fix the problem by restricting what type of data that outsiders use to adjust these systems, for instance. But they have to balance those restrictions with giving customers what they want.
  • Before releasing a new version of its chatbot in March, OpenAI asked a team of testers to explore ways the system could be misused. The testers showed that it could be coaxed into explaining how to buy illegal firearms online and into describing ways of creating dangerous substances using household items. So OpenAI added guardrails meant to stop it from doing things like that.
  • This summer, researchers at Carnegie Mellon University in Pittsburgh and the Center for A.I. Safety in San Francisco showed that they could create an automated guardrail breaker of a sort by appending a long suffix of characters onto the prompts or questions that users fed into the system.
  • Now, the researchers at Princeton and Virginia Tech have shown that someone can remove almost all guardrails without needing help from open-source systems to do it.
  • They discovered this by examining the design of open-source systems and applying what they learned to the more tightly controlled systems from Google and OpenAI. Some experts said the research showed why open source was dangerous. Others said open source allowed experts to find a flaw and fix it.
  • “The discussion should not just be about open versus closed source,” Mr. Henderson said. “You have to look at the larger picture.”
  • “This is a very real concern for the future,” Mr. Goodside said. “We do not know all the ways this can go wrong.”
  • Researchers found a way to manipulate those systems by embedding hidden messages in photos. Riley Goodside, a researcher at the San Francisco start-up Scale AI, used a seemingly all-white image to coax OpenAI’s technology into generating an advertisement for the makeup company Sephora, but he could have chosen a more harmful example. It is another sign that as companies expand the powers of these A.I. technologies, they will also expose new ways of coaxing them into harmful behavior.
  • As new systems hit the market, researchers keep finding flaws. Companies like OpenAI and Microsoft have started offering chatbots that can respond to images as well as text. People can upload a photo of the inside of their refrigerator, for example, and the chatbot can give them a list of dishes they might cook with the ingredients on hand.

Regular Old Intelligence is Sufficient--Even Lovely - 0 views

  • Ezra Klein has done some of the most dedicated reporting on the topic since he moved to the Bay Area a few years ago, talking with many of the people creating this new technology.
  • one is that the people building these systems have only a limited sense of what’s actually happening inside the black box—the bot is doing endless calculations instantaneously, but not in a way even their inventors can actually follow
  • an obvious question, one Klein has asked: “’If you think calamity so possible, why do this at all?
  • second, the people inventing them think they are potentially incredibly dangerous: ten percent of them, in fact, think they might extinguish the human species. They don’t know exactly how, but think Sorcerer’s Apprentice (or google ‘paper clip maximizer.’)
  • One pundit after another explains that an AI program called Deep Mind worked far faster than scientists doing experiments to uncover the basic structure of all the different proteins, which will allow quicker drug development. It’s regarded as ipso facto better because it’s faster, and hence—implicitly—worth taking the risks that come with AI.
  • That is, it seems to me, a dumb answer from smart people—the answer not of people who have thought hard about ethics or even outcomes, but the answer that would be supplied by a kind of cultist.
  • (Probably the kind with stock options).
  • it does go, fairly neatly, with the default modern assumption that if we can do something we should do it, which is what I want to talk about. The question that I think very few have bothered to answer is, why?
  • But why? The sun won’t blow up for a few billion years, meaning that if we don’t manage to drive ourselves to extinction, we’ve got all the time in the world. If it takes a generation or two for normal intelligence to come up with the structure of all the proteins, some people may die because a drug isn’t developed in time for their particular disease, but erring on the side of avoiding extinction seems mathematically sound
  • Allowing that we’re already good enough—indeed that our limitations are intrinsic to us, define us, and make us human—should guide us towards trying to shut down this technology before it does deep damage.
  • The other challenge that people cite, over and over again, to justify running the risks of AI is to “combat climate change,
  • As it happens, regular old intelligence has already given us most of what we need: engineers have cut the cost of solar power and wind power and the batteries to store the energy they produce so dramatically that they’re now the cheapest power on earth
  • We don’t actually need artificial intelligence in this case; we need natural compassion, so that we work with the necessary speed to deploy these technologies.
  • Beyond those, the cases become trivial, or worse
  • All of this is a way of saying something we don’t say as often as we should: humans are good enough. We don’t require improvement. We can solve the challenges we face, as humans.
  • It may take us longer than if we can employ some “new form of intelligence,” but slow and steady is the whole point of the race.
  • Unless, of course, you’re trying to make money, in which case “first-mover advantage” is the point
  • “I find they often answer from something that sounds like the A.I.’s perspective. Many — not all, but enough that I feel comfortable in this characterization — feel that they have a responsibility to usher this new form of intelligence into the world.”
  • here’s the thing: pausing, slowing down, stopping calls on the one human gift shared by no other creature, and perhaps by no machine. We are the animal that can, if we want to, decide not to do something we’re capable of doing.
  • In individual terms, that ability forms the core of our ethical and religious systems; in societal terms it’s been crucial as technology has developed over the last century. We’ve, so far, reined in nuclear and biological weapons, designer babies, and a few other maximally dangerous new inventions
  • It’s time to say do it again, and fast—faster than the next iteration of this tech.

What Was Apple Thinking With Its New iPad Commercial? - The Atlantic

  • The notion behind the commercial is fairly obvious. Apple wants to show you that the bulk of human ingenuity and history can be compressed into an iPad, and thereby wants you to believe that the device is a desirable entry point to both the consumption of culture and the creation of it.
  • Most important, it wants you to know that the iPad is powerful and quite thin.
  • But good Lord, Apple, read the room. In its swing for spectacle, the ad lacks so much self-awareness, it’s cringey, even depressing.
  • This is May 2024: Humanity is in the early stages of a standoff with generative AI, which offers methods through which visual art, writing, music, and computer code can be created by a machine in seconds with the simplest of prompts
  • Most of us are still in the sizing-up phase for generative AI, staring warily at a technology that’s been hyped as world-changing and job-disrupting (even, some proponents argue, potentially civilization-ending), and been foisted on the public in a very short period of time. It’s a weird, exhausting, exciting, even tense moment. Enter: THE CRUSHER.
  • There is about a zero percent chance that the company did not understand the optics of releasing this ad at this moment. Apple is among the most sophisticated and moneyed corporations in all the world.
  • this time, it’s hard to like what the company is showing us. People are angry. One commenter on X called the ad “heartbreaking.”
  • Although watching things explode might be fun, it’s less fun when a multitrillion-dollar tech corporation is the one destroying tools, instruments, and other objects of human expression and creativity.
  • Apple is a great technology company, but it is a legendary marketer. Its ads, its slickly produced keynotes, and even its retail stores succeed because they offer a vision of the company’s products as tools that give us, the consumers, power.
  • The third-order annoyance is in the genre. Apple has essentially aped a popular format of “crushing” videos on TikTok, wherein hydraulic presses are employed to obliterate everyday objects for the pleasure of idle scrollers.
  • It’s unclear whether some of the ad might have been created with CGI, but Apple could easily round up tens of thousands of dollars of expensive equipment and destroy it all on a whim. However small, the ad is a symbol of the company’s dominance.
  • The iPad was one of Steve Jobs’s final products, one he believed could become as popular and perhaps as transformative as cars. That vision hasn’t panned out. The iPad hasn’t killed books, televisions, or even the iPhone
  • The iPad is, potentially, a creative tool. It’s also an expensive luxury device whose cheaper iterations, at least, are vessels for letting your kid watch Cocomelon so they don’t melt down in public, reading self-help books on a plane, or opting for more pixels and better resolution whilst consuming content on the toilet.
  • Odds are, people aren’t really furious at Apple on behalf of the trumpeters—they’re mad because the ad says something about the balance of power
  • it is easy to be aghast at the idea that AI will wipe out human creativity with cheap synthetic waste.
  • The fundamental flaw of Apple’s commercial is that it is a display of force that reminds us about this sleight of hand. We are not the powerful entity in this relationship. The creative potential we feel when we pick up one of their shiny devices is actually on loan. At the end of the day, it belongs to Apple, the destroyer.

Daniel Dennett's last interview: 'AI could signal the end of human civilisation' | The ...

  • If there isn’t an inner me experiencing my thoughts, feelings and the things I see and hear, what is going on?
  • ‘What’s happening in the brain is there are many competing streams of content running in competition and they’re fighting for influence. The one that temporarily wins is king of the mountain, that’s what we can remember, what we can talk about, what we can report and what plays a dominant role in guiding our behaviour – those are the contents of consciousness.’
  • Those acquainted with the workings of large language models, the technology behind ChatGPT and Google’s Gemini, will recognise a similarity in Dennett’s description of consciousness and the architecture of generative AI: parallel processing streams producing outputs that compete for salience.
  • Dennett’s central mission was to demystify consciousness and bring it within the realm of science. So why do we find it so intuitive to think of ourselves as an inner being, an occupant in our bodies? ‘It’s a sort of metaphor. I like to say it’s a user illusion,’
  • Imagining an inner person allows us to communicate our motivations to other human beings and in turn communicate them to ourselves
  • While language allows us to articulate our inner lives, it also divides cultures, right down to the way we process information. Dennett explains it using the example of our perception of colour: ‘Different cultures have different ways of dividing up colour,’ he said. ‘There are a lot of experiments that show that what colours you can distinguish depends a lot on what culture you grew up in.’
  • westerners process people’s faces differently to non-westerners. The very movement patterns of our eyeballs are dictated by culture.
  • ‘I think that some of the multiculturalism, some of the ardent defences of multiculturalism, are deeply misguided and regressive and I think postmodernism has actually harmed people in many nations
  • Recognising these cultural differences didn’t lead Dennett into moral relativism. ‘I am relieved not to have to confront some of the virtue-signalling and some of the doctrinaire attitudes that are now running rampant on college campuses,
  • Take the most obvious cases: the treatment of women in the Islamic world; the horrific reactions to homosexuality in many parts of the world that aren’t western. I think that there are clear reasons for preferring different cultural practices over others.
  • If we don’t create, endorse and establish some new rules and laws about how to think about this, we’re going to lose the capacity for human trust and that could be the end of civilisation.’

Drug C.E.O. Martin Shkreli Arrested on Fraud Charges - The New York Times

  • It has been a busy week for Martin Shkreli, the flamboyant businessman at the center of the drug industry’s price-gouging scandals.
  • He said he would sharply increase the cost of a drug used to treat a potentially deadly parasitic infection. He called himself “the world’s most eligible bachelor” on Twitter and railed against critics in a live-streaming YouTube video. After reportedly paying $2 million for a rare Wu-Tang Clan album, he goaded a member of the hip-hop group to “show me some respect.”
  • Then, at 6 a.m. Thursday, F.B.I. agents arrested Mr. Shkreli, 32, at his Murray Hill apartment. He was arraigned in Federal District Court in Brooklyn on securities fraud and wire fraud charges.
  • In a statement, a spokesman for Mr. Shkreli said he was confident that he would be cleared of all charges.
  • Mr. Shkreli has emerged as a symbol of pharmaceutical greed for acquiring a decades-old drug used to treat an infection that can be devastating for babies and people with AIDS and, overnight, raising the price to $750 a pill from $13.50. His only mistake, he later conceded, was not raising the price more.
  • Those price increases combined with Mr. Shkreli’s jeering response to his critics has made him a lightning rod for public outrage and fodder for the presidential campaign. His company, Turing Pharmaceuticals, and others, like Valeant Pharmaceuticals, have come under fire from lawmakers and consumers for profiting from steep price increases for old drugs.
  • But the criminal charges brought against him actually relate to something else entirely — his time as a hedge fund manager and when he ran his first biopharmaceutical company, Retrophin.
  • Still, for many of his critics, Mr. Shkreli’s arrest was a comeuppance for the brash executive who has seemed to enjoy — relish, even — his public notoriety. On Thursday, a satirical New Yorker column by the humorist Andy Borowitz said Mr. Shkreli’s lawyers had informed their client their hourly legal fees had increased by 5,000 percent.

Stanford launches artificial intelligence institute to put humans and ethics at the cen...

  • “The correct answer to pretty much everything in AI is more of it,” said Schmidt, the former Google chairman. “This generation is much more socially conscious than we were, and more broadly concerned about the impact of everything they do, so you’ll see a combination of both optimism and realism.”
  • Researchers and journalists have shown how AI technologies, largely designed by white and Asian men, tend to reproduce and amplify social biases in dangerous ways. Computer vision technologies built into cameras have trouble recognizing the faces of people of color. Voice recognition struggles to pick up English accents that aren’t mainstream. Algorithms built to predict the likelihood of parole violations are rife with racial bias.

Taiwan Is Beating the Coronavirus. Can the US Do the Same? | WIRED

  • it is natural enough to look at Taiwan’s example and wonder why we didn’t do what they did, or, more pertinently, could we have done what they did?
  • we keep seeing the culturally embedded assumption that East Asian-style state social control just won’t fly in the good old, individualist, government-wary, freedom-loving United States.
  • The New York Times: People in “places like Singapore … are more willing to accept government orders.” Fortune: “There seems to be more of a willingness to place the community and society needs over individual liberty.” Even WIRED: “These countries all have social structures and traditions that might make this kind of surveillance and control a little easier than in the don’t-tread-on-me United States.”
  • we see the classic “Confucian values” (or “Asian values”) argument that has historically been deployed to explain everything from the economic success of East Asian nations to the prevalence of authoritarian single-party rule in Asia, and even, most recently, China’s supposed edge in AI research.
  • So, yeah, kudos to Taiwan for keeping its people safe, but here in America we’re going to do what we always do in a crisis—line up at a gun store and accuse the opposing political party of acting in bad faith. Not for us, those Asian values.
  • But the truth is that Taiwan, one of Asia’s most vibrant and boisterous democracies, is a terrible example to cite as a cultural other populated by submissive peons
  • Taiwan’s self-confidence and collective solidarity trace back to its triumphal self-liberation from its own authoritarian past, its ability to thrive in the shadow of a massive, hostile neighbor that refuses to recognize its right to chart its own path, and its track record of learning from existential threats.
  • There is no doubt that in January it would have been difficult for the US to duplicate Taiwan’s containment strategy, but that’s not because Americans are inherently more ornery than Taiwanese
  • It’s because the United States has a miserable record when it comes to learning from its own mistakes and suffers from a debilitating lack of faith in the notion that the government can solve problems—something that dates at least as far back as the moment in 1986 when Ronald Reagan said, “The nine most terrifying words in the English language are: ‘I’m from the government and I’m here to help.’”
  • The Taiwan-US comparison is the opposite of a clash of civilizations; instead, it’s a deathly showdown between competence and incompetence.
  • To be fair, there are some cultural aspects of East Asian societies that may work in Taiwan’s favor
  • There is undeniably a long tradition in East Asia of elevating scholars and experts to the highest levels of government,
  • The country’s president, Tsai Ing-wen, boasts a PhD from the London School of Economics, and the vice president, Chen Chien-jen, is a highly regarded epidemiologist
  • The threat of SARS put Taiwan on high alert for future outbreaks, while the past record of success at meeting such challenges seems to have encouraged the public to accept socially intrusive technological interventions.
  • First, and most important was Taiwan’s experience battling the SARS outbreak in 2003, followed by the swine flu in 2009
  • “Taiwan actually has a functioning democratic government, run by sensible, well-educated people—the USA? Not so much.”
  • Taiwan’s commitment to transparency has also been critical
  • In the United States, the Trump administration ordered federal health authorities to treat high-level discussions on the coronavirus as classified material.
  • In Taiwan, the government has gone to great lengths to keep citizens well informed on every aspect of the outbreak, including daily press conferences and an active presence on social media
  • “Do not forget that Taiwan has been under China’s threat constantly,” wrote Wang Cheng-hua, a professor of art history at Princeton, “which has raised social consciousness about collective action. When the collective will supports government, then all of the strict measures implemented by the government make sense.”
  • Over the past quarter-century, Taiwan’s government has nurtured public trust by its actions and its transparency.
  • The democracy activists who risked their lives and careers during the island nation’s martial law era were not renowned for their willingness to accept government orders or preach Confucian social harmony
  • some of the current willingness to trust what the government is telling the people is the direct “result of having experienced the transition from an authoritarian government that lied all the time, to a democratic government and robust political dialogue that forced people to be able to evaluate information.”
  • Because of the opposition of the People’s Republic of China, Taiwan is not a member of the United Nations or the World Health Organization
  • “The reality of being isolated from global organizations,” wrote Tung, “also makes Taiwanese very aware of the publicity of its success in handling a crisis like this. The more coverage from foreign media, the more people feel confident in government policy and social mobilization.”
  • Given what we know about Taiwan’s hard-won historical experience, could the US have implemented a similar model?
  • The answer, sadly, seems to be no
  • it would be impossible for the US to successfully integrate a health care database with customs and travel records because there is no national health care database in the United States. “The US health care system is fragmented, making it difficult to organize, integrate, and assess data coming in from its various government and private-sector parts,”
  • more tellingly, continued Fidler, “the manner in which the United States has responded to Covid-19 demonstrates that the United States did not learn the lessons from past outbreaks and is struggling to cobble together a semblance of a strategy.”
  • There’s where the contrast between the United States and Taiwan becomes most salient. The US is not only bad at the act of government but has actively been getting worse.
  • But Taiwan’s own success at building a functional democracy is probably the most potent rebuke to the Asian values thesis.
  • But over that same period, powerful political and economic interests in the US have dedicated themselves to undermining faith in government action, in favor of deregulated markets that have no capacity to react intelligently or proactively to existential threats.
  • And instead of learning from history, US leaders actively ignore it, a truth for which there could be no better symbolic proof than the Trump administration’s dismantling of the National Security Council pandemic office created by the Obama administration in the wake of the Ebola outbreak
  • Finally, instead of seeking to keep the public informed to the best of our ability, some of our political leaders and media institutions have gone out of their way to muddy the waters.
  • In Taiwan, one early government response to the Covid-19 outbreak was to institute a fine of $100,000 for the act of spreading fake news about the epidemic.
  • In the US the most popular television news network in the country routinely downplayed or misrepresented the threat of the coronavirus, until the severity of the outbreak became too large to ignore.
  • If there is any silver lining here, it’s that the disaster now upon us is of such immense scope that it could finally expose the folly of the structural forces that have been wreaking sustained havoc on American governmental institutions
  • So maybe we are finally about to learn that competence matters, that educated leaders are a virtue, and that telling the truth is a responsibility
  • Americans might have to learn this the hard way, like we did in Hong Kong and Singapore.”
  • We’re about to find out how hard it’s going to be. But will we learn?

Cyber Week in Review: April 23, 2021 | Council on Foreign Relations

  • the Russian government announced that it would expel ten U.S. diplomats and blacklist eight former and incumbent U.S. officials that were “involved in drafting and implementing anti-Russia policy.” The expulsions come after the Biden administration attributed the SolarWinds breach to Russia and implemented economic sanctions.
  • The UK government has launched a security campaign this week meant to educate domestic audiences on strategies used by foreign spies to steal sensitive or classified information. The campaign, titled “think before you link,” is a response to an increasing number of British nationals being targeted by malicious state actors masquerading as online recruiters
  • The new campaign is meant to combat these foreign actors by giving “practical advice on how to identify a malicious online profile, how to respond if approached, and how to minimize the risk of being targeted in the first place.”
  • Senators Ron Wyden (D-OR) and Rand Paul (R-KY) introduced legislation on Wednesday that would bar government and local law enforcement agencies from purchasing the location data of U.S. citizens without a warrant. The “Fourth Amendment Is Not for Sale Act” [PDF] would also criminalize the police use of “illegitimately obtained” data from technology brokers such as Clearview AI, a biometrics firm that has scraped and sold billions of photos from social media and other websites
  • Facebook announced that it had broken up two separate Palestinian hacker groups—one with alleged ties to the Palestinian Preventive Security Service (PSS), the intelligence service of the Palestinian Authority, and the other, known as Arid Viper, with reported links to the Hamas militant group.
  • the PSS-backed hackers are believed to be based in the West Bank and target entities primarily in Palestine and Syria, with a lesser focus on Turkey, Iraq, Lebanon, and Libya. Their targets include journalists, critics of the Palestinian government, human rights activists, and military groups such as the Syrian opposition and Iraqi military.