
Home/ History Readings/ Group items tagged manipulation


lilyrashkind

Biden heads to Poland as he announces new plan to wean Europe off Russian energy - CNNP... - 0 views

  • (CNN)President Joe Biden announced a new initiative meant to deprive Russian President Vladimir Putin of European energy profits that Biden says are used to fuel Russia's war in Ukraine. Speaking in Brussels alongside European Commission President Ursula von der Leyen, Biden said Russia was using its supply of oil and gas to "coerce and manipulate its neighbors." He said the United States would help Europe reduce its dependence on Russian oil and gas, and would ensure the continent had enough supplies for the next two winters. The announcement came just before Biden departed Brussels for Poland.
  • "I know that eliminating Russian gas will have costs for Europe, but it's not only the right thing to do from a moral standpoint, it's going to put us on a much stronger strategic footing," he said.
  • Senior administration officials said the 15 billion cubic meters of liquefied natural gas pledged under the plan would come from multiple sources, including the United States and nations in Asia. But officials did not have an exact breakdown on where the gas was coming from. The announcement Friday was the culmination of a US effort over the past months to identify alternate sources of energy for Europe, particularly in Asia. Officials said those efforts would continue through this year to hit the target.
  • ...4 more annotations...
  • Upon his arrival at Rzeszów-Jasionka Airport, Biden will be greeted by Polish President Andrzej Duda and receive a briefing on the humanitarian response to the war. He'll meet with service members from the 82nd Airborne Division in Rzeszów before traveling to Warsaw in the evening. On Saturday, the White House says Biden will hold a bilateral meeting with Duda to discuss how the US and allies are responding to the refugee crisis that has ensued as a result of the war. He'll also deliver remarks before returning to Washington. Biden's travel to Poland comes after meetings on Thursday in Brussels, where he attended a slate of emergency summits, announced new actions -- such as sanctions against hundreds of members of Russia's parliament and a commitment to admit 100,000 refugees fleeing Ukraine -- and conferred with global leaders on how the world will respond if Russia deploys a chemical, biological or nuclear weapon.
  • Poland, which borders Ukraine to the west, has registered more than 2 million Ukrainian refugees crossing into the country. However, the number of refugees staying in Poland is lower, with many continuing on in their journey to other countries. Earlier this month during Vice President Kamala Harris' trip to Poland, Duda personally asked the vice president to speed up and simplify the procedures allowing Ukrainians with family in the US to come to the country. He also warned Harris that his country's resources were being badly strained by the influx of refugees, even as Poland welcomes them with open arms. The White House says that since February 24, the US has provided more than $123 million to assist countries neighboring Ukraine and the European Union to address the refugee influx, including $48 million in Poland.
  • Biden brought up that he has visited war zones, saying he understood the plight of refugees. "I've been in refugee camps. I've been in war zones for the last 15 years. And it's -- it's devastating," he said. Biden also said the refugee influx is "not something that Poland or Romania or Germany should carry on their own."
  • The Poland trip also comes two weeks after the US rejected Poland's proposals to facilitate the transfer of its MiG-29 fighter jets to Ukraine. The US rejected Poland's proposals over fears that the US and NATO could be perceived as taking an escalatory step, further fomenting conflict between the alliance and Russia -- which adamantly opposes Ukraine's ambitions to join the NATO alliance. Ukrainian President Volodymyr Zelensky has repeatedly requested more aircraft to counter the invasion, making another appeal to NATO leaders on Thursday.
Javier E

Book Review: 'Freedom's Dominion,' by Jefferson Cowie - The New York Times - 0 views

  • Cowie, a historian at Vanderbilt University, traces Wallace’s repressive creed to his birthplace, Barbour County, in Alabama’s southeastern corner, where the cry of “freedom” was heard from successive generations of settlers, slaveholders, secessionists and lynch mobs through the 19th and 20th centuries. The same cry echoes today in the rallies and online invective of the right
  • though Cowie keeps his focus on the past, his book sheds stark light on the present. It is essential reading for anyone who hopes to understand the unholy union, more than 200 years strong, between racism and the rabid loathing of government.
  • “Freedom’s Dominion” is local history, but in the way that Gettysburg was a local battle or the Montgomery bus boycott was a local protest.
  • ...11 more annotations...
  • The book recounts four peak periods in the conflict between white Alabamians and the federal government: the wild rush, in the early 19th century, to seize and settle lands that belonged to the Creek Nation; Reconstruction; the reassertion of white supremacy under Jim Crow; and the attempts of Wallace and others to nullify the civil rights reforms of the 1950s and 1960s.
  • Throughout, as Cowie reveals, white Southerners portrayed the oppression of Black people and Native Americans not as a repudiation of freedom, but its precondition, its very foundation.
  • Following the election of Ulysses S. Grant in 1868 and the ratification of the 14th and 15th Amendments, the federal presence in the South was finally robust. So was the spirit of local defiance. In post-bellum Barbour County, Cowie writes, “peace only prevailed for freed people when federal troops were in town” — and then only barely
  • White men did all this in Barbour County, by design and without relent, and Cowie’s account of their acts is unsparing. His narrative is immersive; his characters are vividly rendered, whether familiar figures like Andrew Jackson or mostly forgotten magnates like J.W. Comer, a plantation owner who became, in the late 19th century, the architect of a vast, sadistic and extremely lucrative system of convict labor
  • The federal government is a character here, too — sometimes in a central role, sometimes remote to the point of irrelevance, and all too often feckless in the defense of a more inclusive, affirmative model of freedom.
  • the chaos in Alabama offended Jackson’s sense of discipline and made a mockery of his treaties with the Creeks. Beginning in 1832, and in fits and starts over the following year, federal troops looked to turn back or at least contain the white wave. Instead, their presence touched off a series of violent reprisals, created a cast of martyrs and folk heroes, and gave rise to the mythology of white victimization. Self-rule and local authority — rhetorical wrapping for this will to power — had become articles of faith, fervid as any religious belief.
  • Thus were white men, in the words of the scholar Orlando Patterson, whom Cowie quotes, “free to brutalize.” Thus were they free “to plunder and lay waste and call it peace, to rape and humiliate, to invade, conquer, uproot and degrade.”
  • When Grant stepped up the enforcement of voting rights, whites in Eufaula, Barbour County’s largest town, massacred Black citizens and engaged in furious efforts to manipulate or overturn elections. As in the 1830s, the federal government showed little stamina for the struggle. Republican losses in 1874 augured another retreat, this time for the better part of a century. In the vacuum, Cowie explains, emerged “the neoslavery of convict leasing, the vigilante justice of lynching, the degradation and debt of sharecropping and the official disenfranchisement of Blacks” under Jim Crow.
  • Wallace, as Cowie makes clear, had bigger ambitions. Instinctively, he knew that his brand of politics had an audience anywhere that white Americans were under strain and looking for someone to blame. Wallace became the sneering face of the backlash against the Civil Rights Act and the Voting Rights Act, against any law or court ruling or social program that aimed to include Black Americans more fully in our national life. Racism was central to his appeal, yet its common note was grievance; the common enemies were elites, the press and the federal government. “Being a Southerner is no longer geographic,” he declared in 1964, during the first of his four runs for the White House. “It’s a philosophy and an attitude.”
  • That attitude, we know, is pervasive now — a primal, animating principle of conservative politics. We hear it in conspiracy theories about the “deep state”; we see it in the actions of Republican officials like Gov. Ron DeSantis of Florida, who built a case for his re-election in 2022 by banning — in the name of “individual freedom” — classroom discussions of gender, sexuality and systemic racism.
  • In explaining how we got here, “Freedom’s Dominion” emphasizes race above economics, but this seems fitting. The fixation on the free market, so long a defining feature of the Republican Party, has loosened its hold; taxes and regulations do not boil the blood as they once did. In their place is a stew of resentments as raw as any since George Wallace stirred the pot.
Javier E

See How Real AI-Generated Images Have Become - The New York Times - 0 views

  • The rapid advent of artificial intelligence has set off alarms that the technology used to trick people is advancing far faster than the technology that can identify the tricks. Tech companies, researchers, photo agencies and news organizations are scrambling to catch up, trying to establish standards for content provenance and ownership.
  • The advancements are already fueling disinformation and being used to stoke political divisions
  • Last month, some people fell for images showing Pope Francis donning a puffy Balenciaga jacket and an earthquake devastating the Pacific Northwest, even though neither of those events had occurred. The images had been created using Midjourney, a popular image generator.
  • ...16 more annotations...
  • Authoritarian governments have created seemingly realistic news broadcasters to advance their political goals
  • Experts fear the technology could hasten an erosion of trust in media, in government and in society. If any image can be manufactured — and manipulated — how can we believe anything we see?
  • “The tools are going to get better, they’re going to get cheaper, and there will come a day when nothing you see on the internet can be believed,” said Wasim Khaled, chief executive of Blackbird.AI, a company that helps clients fight disinformation.
  • Artificial intelligence allows virtually anyone to create complex artworks, like those now on exhibit at the Gagosian art gallery in New York, or lifelike images that blur the line between what is real and what is fiction. Plug in a text description, and the technology can produce a related image — no special skills required.
  • Midjourney’s images, he said, were able to pass muster in facial-recognition programs that Bellingcat uses to verify identities, typically of Russians who have committed crimes or other abuses. It’s not hard to imagine governments or other nefarious actors manufacturing images to harass or discredit their enemies.
  • In February, Getty accused Stability AI of illegally copying more than 12 million Getty photos, along with captions and metadata, to train the software behind its Stable Diffusion tool. In its lawsuit, Getty argued that Stable Diffusion diluted the value of the Getty watermark by incorporating it into images that ranged “from the bizarre to the grotesque.”
  • Getty’s lawsuit reflects concerns raised by many individual artists — that A.I. companies are becoming a competitive threat by copying content they do not have permission to use.
  • Trademark violations have also become a concern: Artificially generated images have replicated NBC’s peacock logo, though with unintelligible letters, and shown Coca-Cola’s familiar curvy logo with extra O’s looped into the name.
  • The threat to photographers is fast outpacing the development of legal protections, said Mickey H. Osterreicher, general counsel for the National Press Photographers Association
  • Newsrooms will increasingly struggle to authenticate content
  • Social media users are ignoring labels that clearly identify images as artificially generated, choosing to believe they are real photographs, he said.
  • The video explained that the deepfake had been created, with Ms. Schick’s consent, by the Dutch company Revel.ai and Truepic, a California company that is exploring broader digital content verification
  • The companies described their video, which features a stamp identifying it as computer-generated, as the “first digitally transparent deepfake.” The data is cryptographically sealed into the file; tampering with the image breaks the digital signature and prevents the credentials from appearing when using trusted software (a minimal signing-and-verification sketch follows this list of excerpts).
  • The companies hope the badge, which will come with a fee for commercial clients, will be adopted by other content creators to help create a standard of trust involving A.I. images.
  • “The scale of this problem is going to accelerate so rapidly that it’s going to drive consumer education very quickly,” said Jeff McGregor, chief executive of Truepic
  • Adobe unveiled its own image-generating product, Firefly, which will be trained using only images that were licensed or from its own stock or no longer under copyright. Dana Rao, the company’s chief trust officer, said on its website that the tool would automatically add content credentials — “like a nutrition label for imaging” — that identified how an image had been made. Adobe said it also planned to compensate contributors.
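
For readers curious how such "cryptographically sealed" credentials can work in principle, here is a minimal sketch of the signing-and-verification idea referenced a few excerpts above. It is illustrative only, not Revel.ai's or Truepic's actual implementation: it assumes the Python cryptography package, and the key handling, credential format, and way the signature would be embedded in the file are simplified, hypothetical placeholders.

```python
# Minimal sketch: seal provenance credentials to image bytes with a digital
# signature, and refuse to show the credentials if anything was altered.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def seal(image_bytes: bytes, credentials: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Sign the image and its provenance credentials together."""
    return private_key.sign(image_bytes + credentials)


def credentials_if_untampered(image_bytes, credentials, signature, public_key):
    """Return the credentials only if nothing was altered; otherwise None.

    This mirrors the behavior described above: editing the image breaks the
    signature, so trusted software simply shows no credential badge.
    """
    try:
        public_key.verify(signature, image_bytes + credentials)
        return credentials
    except InvalidSignature:
        return None


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    image = b"fake image bytes for illustration"
    creds = b'{"generator": "AI", "consent": true}'
    signature = seal(image, creds, key)

    print(credentials_if_untampered(image, creds, signature, key.public_key()))
    print(credentials_if_untampered(image + b"!", creds, signature, key.public_key()))
```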
Javier E

Pause or panic: battle to tame the AI monster - 0 views

  • What exactly are they afraid of? How do you draw a line from a chatbot to global destruction
  • This tribe feels we have made three crucial errors: giving the AI the capability to write code, connecting it to the internet and teaching it about human psychology. In those steps we have created a self-improving, potentially manipulative entity that can use the network to achieve its ends — which may not align with ours
  • This is a technology that learns from our every interaction with it. In an eerie glimpse of AI’s single-mindedness, OpenAI revealed in a paper that GPT-4 was willing to lie, telling a human online it was a blind person, to get a task done.
  • ...16 more annotations...
  • For researchers concerned with more immediate AI risks, such as bias, disinformation and job displacement, the voices of doom are a distraction. Professor Brent Mittelstadt, director of research at the Oxford Internet Institute, said the warnings of “the existential risks community” are overblown. “The problem is you can’t disprove the future scenarios . . . in the same way you can’t disprove science fiction.” Emily Bender, a professor of linguistics at the University of Washington, believes the doomsters are propagating “unhinged AI hype, helping those building this stuff sell it”.
  • Those urging us to stop, pause and think again have a useful card up our sleeves: the people building these models do not fully understand them. AI like ChatGPT is made up of huge neural networks that can defy their creators by coming up with “emergent properties”.
  • Google’s PaLM model started translating Bengali despite not being trained to do so
  • Let’s not forget the excitement, because that is also part of Moloch, driving us forward. The lure of AI’s promises for humanity has been hinted at by DeepMind’s AlphaFold breakthrough, which predicted the 3D structures of nearly all the proteins known to humanity.
  • Noam Shazeer, a former Google engineer credited with setting large language models such as ChatGPT on their present path, was asked by The Sunday Times how the models worked. He replied: “I don’t think anybody really understands how they work, just like nobody really understands how the brain works. It’s pretty much alchemy.”
  • The industry is turning itself to understanding what has been created, but some predict it will take years, decades even.
  • A CBS interviewer challenged Sundar Pichai, Google’s chief executive, this week: “You don’t fully understand how it works, and yet you’ve turned it loose on society?”
  • Greg Brockman, co-founder of OpenAI, told the TED2023 conference this week: “We hear from people who are excited, we hear from people who are concerned. We hear from people who feel both those emotions at once. And, honestly, that’s how we feel.”
  • Alex Heath, deputy editor of The Verge, recently attended an AI conference in San Francisco. “It’s clear the people working on generative AI are uneasy about the worst-case scenario of it destroying us all. These fears are much more pronounced in private than they are in public.” One figure building an AI product “said over lunch with a straight face that he is savoring the time before he is killed by AI”.
  • In 2020 there wasn’t a single drug in clinical trials developed using an AI-first approach. Today there are 18
  • Consider this from Bill Gates last month: “I think in the next five to ten years, AI-driven software will finally deliver on the promise of revolutionising the way people teach and learn.”
  • If the industry is aware of the risks, is it doing enough to mitigate them? Microsoft recently cut its ethics team, and researchers building AI outnumber those focused on safety by 30-to-1,
  • The concentration of AI power, which worries so many, also presents an opportunity to more easily develop some global rules. But there is little agreement on direction. Europe is proposing a centrally defined, top-down approach. Britain wants an innovation-friendly environment where rules are defined by each industry regulator. The US commerce department is consulting on whether risky AI models should be certified. China is proposing strict controls on generative AI that could upend social order.
  • Part of the drive to act now is to ensure we learn the lessons of social media. Twenty years after creating it, we are trying to put it back in a legal straitjacket after learning that its algorithms understand us only too well. “Social media was the first contact between AI and humanity, and humanity lost,” said Yuval Harari, the Sapiens author.
  • Others point to bioethics, especially international agreements on human cloning. Tegmark said last week: “You could make so much money on human cloning. Why aren’t we doing it? Because biologists thought hard about this and felt this is way too risky. They got together in the Seventies and decided, let’s not do this because it’s too unpredictable. We could lose control over what happens to our species. So they paused.” Even China signed up.
  • One voice urging calm is Yann LeCun, Meta’s chief AI scientist. He has labelled ChatGPT a “flashy demo” and “not a particularly interesting scientific advance”. He tweeted: “A GPT-4-powered robot couldn’t clear up the dinner table and fill up the dishwasher, which any ten-year-old can do. And it couldn’t drive a car, which any 18-year-old can learn to do in 20 hours of practice. We’re still missing something big for human-level AI.” If this is sour grapes and he’s wrong, Moloch already has us in its thrall.
Javier E

Opinion | Our Kids Are Living In a Different Digital World - The New York Times - 0 views

  • You may have seen the tins that contain 15 little white rectangles that look like the desiccant packs labeled “Do Not Eat.” Zyns are filled with nicotine and are meant to be placed under your lip like tobacco dip. No spitting is required, so nicotine pouches are even less visible than vaping. Zyns come in two strengths in the United States, three and six milligrams. A single six-milligram pouch is a dose so high that first-time users on TikTok have said it caused them to vomit or pass out.
  • Greyson Imm, an 18-year-old high school student in Prairie Village, Kan., said he was 17 when Zyn videos started appearing on his TikTok feed. The videos multiplied through the spring, when they were appearing almost daily. “Nobody had heard about Zyn until very early 2023,” he said. Now, a “lot of high schoolers have been using Zyn. It’s really taken off, at least in our community.”
  • I was stunned by the vast forces that are influencing teenagers. These forces operate largely unhampered by a regulatory system that seems to always be a step behind when it comes to how children can and are being harmed on social media.
  • ...36 more annotations...
  • Parents need to know that when children go online, they are entering a world of influencers, many of whom are hoping to make money by pushing dangerous products. It’s a world that’s invisible to us
  • when we log on to our social media, we don’t see what they see. Thanks to algorithms and ad targeting, I see videos about the best lawn fertilizer and wrinkle laser masks, while Ian is being fed reviews of flavored vape pens and beautiful women livestreaming themselves gambling crypto and urging him to gamble, too.
  • Smartphones are taking our kids to a different world
  • We worry about bad actors bullying, luring or indoctrinating them online
  • all of this is, unfortunately, only part of what makes social media dangerous.
  • The tobacco conglomerate Philip Morris International acquired the Zyn maker Swedish Match in 2022 as part of a strategic push into smokeless products, a category it projects could help drive an expected $2 billion in U.S. revenue in 2024.
  • P.M.I. is also a company that has long denied it markets tobacco products to minors despite decades of research accusing it of just that. One 2022 study alone found its brands advertising near schools and playgrounds around the globe.
  • the ’90s, when magazines ran full-page Absolut Vodka ads in different colors, which my friends and I collected and taped up on our walls next to pictures of a young Leonardo DiCaprio — until our parents tore them down. This was advertising that appealed to me as a teenager but was also visible to my parents, and — crucially — to regulators, who could point to billboards near schools or flavored vodka ads in fashion magazines and say, this is wrong.
  • Even the most committed parent today doesn’t have the same visibility into what her children are seeing online, so it is worth explaining how products like Zyn end up in social feeds
  • influencers. They aren’t traditional pitch people. Think of them more like the coolest kids on the block. They establish a following thanks to their personality, experience or expertise. They share how they’re feeling, they share what they’re thinking about, they share stuff they like.
  • With ruthless efficiency, social media can deliver unlimited amounts of the content that influencers create or inspire. That makes the combination of influencers and social-media algorithms perhaps the most powerful form of advertising ever invented.
  • Videos like his operate like a meme: It’s unintelligible to the uninitiated, it’s a hilarious inside joke to those who know, and it encourages the audience to spread the message
  • Enter Tucker Carlson. Mr. Carlson, the former Fox News megastar who recently started his own subscription streaming service, has become a big Zyn influencer. He’s mentioned his love of Zyn in enough podcasts and interviews to earn the nickname Tucker CarlZyn.
  • was Max VanderAarde. You can glimpse him in a video from the event wearing a Santa hat and toasting Mr. Carlson as they each pop Zyns in their mouths. “You can call me king of Zynbabwe, or Tucker CarlZyn’s cousin,” he says in a recent TikTok. “Probably, what, moved 30 mil cans last year?”
  • Freezer Tarps, Mr. VanderAarde’s TikTok account, appears to have been removed after I asked the company about it. Left up are the large number of TikToks by the likes of @lifeofaZyn, @Zynfluencer1 and @Zyntakeover; those hashtagged to #Zynbabwe, one of Freezer Tarps’s favorite terms, have amassed more than 67 million views. So it’s worth breaking down Mr. VanderAarde’s videos.
  • All of these videos would just be jokes (in poor taste) if they were seen by adults only. They aren’t. But we can’t know for sure how many children follow the Nelk Boys or Freezer Tarps — social-media companies generally don’t release granular age-related data to the public. Mr. VanderAarde, who responded to a few of my questions via LinkedIn, said that nearly 95 percent of his followers are over the age of 18.
  • I turned to Influencity, a software program that estimates the ages of social media users by analyzing profile photos and selfies in recent posts. Influencity estimated that roughly 10 percent of the Nelk Boys’ followers on YouTube are ages 13 to 17. That’s more than 800,000 children.
  • The helicopter video has already been viewed more than one million times on YouTube, and iterations of it have circulated widely on TikTok.
  • YouTube said it eventually determined that four versions of the Carlson Zyn videos were not appropriate for viewers under age 18 under its community guidelines and restricted access to them by age
  • Mr. Carlson declined to comment on the record beyond his two-word statement. The Nelk Boys didn’t respond to requests for comment. Meta declined to comment on the record. TikTok said it does not allow content that promotes tobacco or its alternatives. The company said that it has over 40,000 trust and safety experts who work to keep the platform safe and that it prevented teenagers’ accounts from viewing over two million videos globally that show the consumption of tobacco products by adults. TikTok added that in the third quarter of 2023 it proactively removed 97 percent of videos that violated its alcohol, tobacco and drugs policy.
  • Greyson Imm, the high school student in Prairie Village, Kan., points to Mr. VanderAarde as having brought Zyn “more into the mainstream.” Mr. Imm believes his interest in independent comedy on TikTok perhaps made him a target for Mr. VanderAarde’s videos. “He would create all these funny phrases or things that would make it funny and joke about it and make it relevant to us.”
  • It wasn’t long before Mr. Imm noticed Zyn blowing up among his classmates — so much so that the student, now a senior at Shawnee Mission East High School, decided to write a piece in his school newspaper about it. He conducted an Instagram poll from the newspaper’s account and found that 23 percent of the students who responded used oral nicotine pouches during school.
  • “Upper-decky lip cushions, ferda!” Mr. VanderAarde coos in what was one of his popular TikTok videos, which had been liked more than 40,000 times. The singsong audio sounds like gibberish to most people, but it’s actually a call to action. “Lip cushion” is a nickname for a nicotine pouch, and “ferda” is slang for “the guys.”
  • “I have fun posting silly content that makes fun of pop culture,” Mr. VanderAarde said to me in our LinkedIn exchange.
  • They’re incentivized to increase their following and, in turn, often their bank accounts. Young people are particularly susceptible to this kind of promotion because their relationship with influencers is akin to the intimacy of a close friend.
  • I’ve spent the past three years studying media manipulation and memes, and what I see in Freezer Tarps’s silly content is strategy. The use of Zyn slang seems like a way to turn interest in Zyn into a meme that can be monetized through merchandise and other business opportunities.
  • Such as? Freezer Tarps sells his own pouch product, Upperdeckys, which delivers caffeine instead of nicotine and is available in flavors including cotton candy and orange creamsicle. In addition to jockeying for sponsorship, Mr. Carlson may also be trying to establish himself with a younger, more male, more online audience as his new media company begins building its subscriber base
  • This is the kind of viral word-of-mouth marketing that looks like entertainment, functions like culture and can increase sales
  • What’s particularly galling about all of this is that we as a society already agreed that peddling nicotine to kids is not OK. It is illegal to sell nicotine products to anyone under the age of 21 in all 50 states
  • numerous studies have shown that the younger people are when they try nicotine for the first time, the more likely they will become addicted to it. Nearly 90 percent of adults who smoke daily started smoking before they turned 18.
  • Decades later — even after Juul showed the power of influencers to help addict yet another generation of children — the courts, tech companies and regulators still haven’t adequately grappled with the complexities of the influencer economy.
  • Facebook, Instagram and TikTok all have guidelines that prohibit tobacco ads and sponsored, endorsed or partnership-based content that promotes tobacco products. Holding them accountable for maintaining those standards is a bigger question.
  • We need a new definition of advertising that takes into account how the internet actually works. I’d go so far as to propose that the courts broaden the definition of advertising to include all influencer promotion. For a product as dangerous as nicotine, I’d put the bar to be considered an influencer as low as 1,000 followers on a social-media account, and maybe if a video from someone with less of a following goes viral under certain legal definitions, it would become influencer promotion.
  • Laws should require tech companies to share data on what young people are seeing on social media and to prevent any content promoting age-gated products from reaching children’s feeds
  • Those efforts must go hand in hand with social media companies putting real teeth behind their efforts to verify the ages of their users. Government agencies should enforce the rules already on the books to protect children from exposure to addictive products,
  • I refuse to believe there aren’t ways to write laws and regulations that can address these difficult questions over tech company liability and free speech, that there aren’t ways to hold platforms more accountable for advertising that might endanger kids. Let’s stop treating the internet like a monster we can’t control. We built it. We foisted it upon our children. We had better try to protect them from its potential harms as best we can.
Javier E

Why Didn't the Government Stop the Crypto Scam? - 1 views

  • Securities and Exchange Commission Chair Gary Gensler, who took office in April of 2021 with a deep background in Wall Street, regulatory policy, and crypto, which he had taught at MIT years before joining the SEC. Gensler came in with the goal of implementing the rule of law in the crypto space, which he knew was full of scams and based on unproven technology. Yesterday, on CNBC, he was again confronted with Andrew Ross Sorkin essentially asking, “Why were you going after minor players when this Ponzi scheme was so flagrant?”
  • Cryptocurrencies are securities, and should fit under securities law, which would have imposed rules that would foster a de facto ban of the entire space. But since regulators had not actually treated them as securities for the last ten years, a whole new gray area of fake law had emerged
  • Almost as soon as he took office, Gensler sought to fix this situation, and treat them as securities. He began investigating important players
  • ...22 more annotations...
  • But the legal wrangling to just get the courts to treat crypto as a set of speculative instruments regulated under securities law made the law moot
  • In May of 2022, a year after Gensler began trying to do something about Terra/Luna, Kwon’s scheme blew up. In a comically-too-late-to-matter gesture, an appeals court then said that the SEC had the right to compel information from Kwon’s now-bankrupt scheme. It is absolute lunacy that well-settled law, like the ability for the SEC to investigate those in the securities business, is now being re-litigated.
  • many crypto ‘enthusiasts’ watching Gensler discuss regulation with his predecessor “called for their incarceration or worse.”
  • it wasn’t just the courts who were an impediment. Gensler wasn’t the only cop on the beat. Other regulators, like those at the Commodities Futures Trading Commission, the Federal Reserve, or the Office of Comptroller of the Currency, not only refused to take action, but actively defended their regulatory turf against an attempt from the SEC to stop the scams.
  • Behind this was the fist of political power. Everyone saw the incentives the Senate laid down when every single Republican, plus a smattering of Democrats, defeated the nomination of crypto-skeptic Saule Omarova in becoming the powerful bank regulator at the Comptroller of the Currency
  • Instead of strong figures like Omarova, we had a weakling acting Comptroller Michael Hsu at the OCC, put there by the excessively cautious Treasury Secretary Janet Yellen. Hsu refused to stop bank interactions with crypto or fintech because, as he told Congress in 2021, “These trends cannot be stopped.”
  • It’s not just these regulators; everyone wanted a piece of the bureaucratic pie. In March of 2022, before it all unraveled, the Biden administration issued an executive order on crypto. In it, Biden said that virtually every single government agency would have a hand in the space.
  • That’s… insane. If everyone’s in charge, no one is.
  • And behind all of these fights was the money and political prestige of some most powerful people in Silicon Valley, who were funding a large political fight to write the rules for crypto, with everyone from former Treasury Secretary Larry Summers to former SEC Chair Mary Jo White on the payroll.
  • (Even now, even after it was all revealed as a Ponzi scheme, Congress is still trying to write rules favorable to the industry. It’s like, guys, stop it. There’s no more bribe money!)
  • Moreover, the institution Gensler took over was deeply weakened. Since the Reagan administration, wave after wave of political leaders at the SEC has gutted the place and dumbed down the enforcers. Courts have tied up the commission in knots, and Congress has defanged it
  • Under Trump crypto exploded, because his SEC chair Jay Clayton had no real policy on crypto (and then immediately went into the industry after leaving.) The SEC was so dormant that when Gensler came into office, some senior lawyers actually revolted over his attempt to make them do work.
  • In other words, the regulators were tied up in the courts, they were against an immensely powerful set of venture capitalists who have poured money into Congress and D.C., they had feeble legal levers, and they had to deal with ‘crypto enthusiasts' who thought they should be jailed or harmed for trying to impose basic rules around market manipulation.
  • The bottom line is, Gensler is just one regulator, up against a lot of massed power, money, and bad institutional habits. And we as a society simply made the choice through our elected leaders to have little meaningful law enforcement in financial markets, which first became blindingly obvious in 2008 during the financial crisis, and then became comical ten years later when a sector whose only real use cases were money laundering, Ponzi scheming or buying drugs on the internet, managed to rack up enough political power to bring Tony Blair and Bill Clinton to a conference held in a tax haven billed as ‘the future.’
  • It took a few years, but New Dealers finally implemented a workable set of securities rules, with the courts agreeing on basic definitions of what was a security. By the 1950s, SEC investigators could raise an eyebrow and change market behavior, and the amount of cheating in finance had dropped dramatically.
  • By 1935, the New Dealers had set up a new agency, the Securities and Exchange Commission, and cleaned out the FTC. Yet there was still immense concern that Roosevelt had not been able to tame Wall Street. The Supreme Court didn’t really ratify the SEC as a constitutional body until 1938, and nearly struck it down in 1935 when a conservative Supreme Court made it harder for the SEC to investigate cases.
  • Institutional change, in other words, takes time.
  • It’s a lesson to remember as we watch the crypto space melt down, with ex-billionaire Sam Bankman-Fried
  • It’s not like perfidy in crypto was some hidden secret. At the top of the market, back in December 2021, I wrote a piece very explicitly saying that crypto was a set of Ponzi schemes. It went viral, and I got a huge amount of hate mail from crypto types
  • one of the more bizarre aspects of the crypto meltdown is the deep anger not just at those who perpetrated it, but at those who were trying to stop the scam from going on. For instance, here’s crypto exchange Coinbase CEO Brian Armstrong, who just a year ago was fighting regulators vehemently, blaming the cops for allowing gambling in the casino he helps run.
  • FTX.com was an offshore exchange not regulated by the SEC. The problem is that the SEC failed to create regulatory clarity here in the US, so many American investors (and 95% of trading activity) went offshore. Punishing US companies for this makes no sense.
Javier E

The Phantasms of Judith Butler - The Atlantic - 0 views

  • The central idea of Who’s Afraid of Gender? is that fascism is gaining strength around the world, and that its weapon is what Butler calls the “phantasm of gender,” which they describe as a confused and irrational bundle of fears that displaces real dangers onto imaginary ones.
  • Similarly, Trump’s Christian-right supporters see this adjudicated rapist as a bulwark against sexual libertinism, but he also has a following among young men who admire him as libertine in chief and among people of every stripe who think he’ll somehow make them richer.
  • Butler is obviously correct that the authoritarian right sets itself against feminism and modern sexual rights and freedom.
  • ...19 more annotations...
  • But is the gender phantasm as crucial to the global far right as Butler claims?
  • Butler has little to say about the appeal of nationalism and community, insistence on ethnic purity, opposition to immigration, anxiety over economic and social stresses, fear of middle-class-status loss, hatred of “elites.”
  • why Hungarian Prime Minister Viktor Orbán is so popular, it would be less his invocation of the gender phantasm and more his ruthless determination to keep immigrants out, especially Muslim ones, along with his delivery of massive social services to families in an attempt to raise the birth rate
  • The chapter of Who’s Afraid of Gender? that is most relevant for American and British readers is probably the one about the women, many of them British, whom opponents call “TERFs” (trans-exclusionary radical feminists), but who call themselves “gender-critical feminists.”
  • But is obsession with “gender” really the primary motive behind current right-wing movements? And why is it so hard to trust that the noise around “gender” might actually be indicative of people’s real feelings, and not just the demagogue-fomented distraction Butler asserts it to be?
  • Instead of proving that “gender” is a crucial part of what motivates popular support for right-wing authoritarianism, Butler simply asserts that it is, and then ties it all up with a bow called “fascism.”
  • Fascism is a word that Butler admits is not perfect but then goes on to use repeatedly. I’m sure I’ve used it myself as a shorthand when I’m writing quickly, but it’s a bit manipulative. As used by Butler and much of the left, it covers way too many different issues and suggests that if you aren’t on board with the Butlerian worldview on every single one of them, a brown shirt must surely be hanging in your closet.
  • As they define it—“fascist passions or political trends are those which seek to strip people of the basic rights they require to live”—most societies for most of history have been fascist, including, for long stretches, our own
  • Instead of facing up to the problems of, for example, war, declining living standards, environmental damage, and climate change, right-wing leaders whip up hysteria about threats to patriarchy, traditional families, and heterosexuality.
  • They discuss only two authors at any length, the philosopher Kathleen Stock and J. K. Rowling. Butler does not engage with their writing in any detail—they do not quote even one sentence from Stock’s Material Girls: Why Reality Matters for Feminism, a serious book that has been much discussed, or indeed from any other gender-crit work, except for some writing from Rowling, including her essay in which she describes domestic violence at the hands of her first husband, an accusation he admits to in part.
  • They dismiss, with that invocation of a “phantasm,” apprehension about the presence of trans women in women’s single-sex spaces (as well as, gender-crits would add, biological men falsely claiming to be trans in order to gain access to same), concerns for biologically female athletes who feel cheated out of scholarships and trophies, and the slight a biological woman might experience by being referred to as a “menstruator.”
  • Butler wants to dismiss gender-crits as fascist-adjacent: Indeed, in an interview, they compare Stock and Rowling to Putin and the pope.
  • It does seem odd that Butler, for whom everything about the body is socially produced, would be so uninterested in exploring the ways that trans identity is itself socially produced, at least in part—by, for example, homophobia and misogyny and the hypersexualization of young girls, by social media and online life, by the increasing popularity of cosmetic surgery, by the libertarian-individualist presumption that you can be whatever you want.
  • what is authenticity
  • In every other context, Butler works to demolish the idea of the eternal human—everything is contingent—except for when it comes to being transgender. There, the individual, and only the individual, knows themself.
  • I can't tell you how many left and liberal people I know who keep quiet about their doubts because they fear being ostracized professionally or socially. Nobody wants to be accused of putting trans people's lives in danger, and, after all, don't we all want, as the slogan goes, to “Be Kind”?
  • The trouble is that, in the long run, the demand for self-suppression fuels reaction. Polls show declining support for various trans demands for acceptance. People don’t like being forced by social pressure to deny what they think of as the reality of sex and gender.
  • They cite the civil-rights activist and singer Bernice Johnson Reagon’s call for “difficult coalitions” but forget that coalitions necessarily involve compromise and choosing your battles, not just accusing people of sharing the views of fascists
  • What if instead of trying to suppress the questioning of skeptics, we admit we don’t have many answers? What if, instead, we had a conversation? After all, isn’t that what philosophy is all about?
Javier E

Polyamory, the Ruling Class's Latest Fad - The Atlantic - 0 views

  • More is a near-perfect time capsule of the banal pleasure-seeking of wealthy, elite culture in the 2020s, and a neat encapsulation of its flaws. This culture would have us believe that interminable self-improvement projects, navel-gazing, and sexual peccadilloes are the new face of progress.
  • The climate warms, wars rage, and our country lurches toward a perilous election—all problems that require real action, real progress. And somehow “you do you” has become the American ruling class’s three-word bible.
  • Charles Taylor has argued that, since at least the late 20th century, Western societies have been defined by “a generalized culture of ‘authenticity,’ or expressive individualism, in which people are encouraged to find their own way, discover their own fulfillment, ‘do their own thing.’”
  • ...18 more annotations...
  • Among the right, a new kind of reactionary self-help is ascendant. Its mainstream version is legible in the manosphere misogyny of Jordan Peterson, Joe Rogan, and Andrew Tate, while more eldritch currents lurk just beneath the surface. The Nietzscheanism of internet personalities like Bronze Age Pervert—who combines ethnonationalist chauvinism in politics and personal life with a Greco-Roman obsession with physical fitness—is only one of many examples of the trend the social critic Maya Vinokour has called “lifestyle fascism.”
  • We might call this turbocharged version of authenticity culture “therapeutic libertarianism”: the belief that self-improvement is the ultimate goal of life, and that no formal or informal constraints—whether imposed by states, faith systems, or other people—should impede each of us from achieving personal growth
  • This attitude is therapeutic because it is invariably couched in self-help babble. And it is libertarian not only because it makes a cult out of personal freedom, but because it applies market logic to human beings. We are all our own start-ups. We must all adopt a pro-growth mindset for our personhood and deregulate our desires.
  • We must all assess and reassess our own “fulfillment,” a kind of psychological Gross Domestic Product, on a near-constant basis. And like the GDP, our fulfillment must always increase.
  • On the left, what gets termed “wokeness” is indissociable from self-help. How should we understand superficial, performative expressions of “anti-racism” or preening social-media politics if not as a way for self-described good-hearted liberals to make grand public displays of pruning their moral shrubbery?
  • Stewart’s response to the UTIs is not concern for his wife but irritation: “This guy is breaking all my toys,” he grumbles. When she gets upset that her husband keeps calling her a “cunt” and a “whore” during sex—something he professes not being able to help—Stewart does not change this habit. Instead they strike a preposterous bargain: “He will try his best not to scream cunt during sex, and I will do my best to ignore him if he does.”
  • What the author is trying to find in her open relationship is not sex, but self-understanding—what it means, how we get it, whether sex can provide it. And although the answers Molly arrives at are not cheaply won, they are cheap all the same.
  • Near the end of the memoir, the author’s mother provides the empty epiphany toward which the text careens. “Everything that happens in life,” her mom offers, “is an opportunity to learn about yourself. Marriage. Motherhood. Relationships. Even anger and illness. Nothing that happens is good or bad in and of itself. It’s all just an opportunity to learn and grow.” With this maternal revelation, Molly’s “skin starts to tingle.” She relates that the advice “feels almost holy.”
  • though Molly may tell herself and her readers that she is on a journey of learning and growth, the ugly truth is that More feels like a 290-page cry for help. Molly does not come off as a woman boldly finding herself, but rather as someone who is vulnerable to psychological manipulation and does not enjoy her open marriage.
  • if it seems like Molly Roden Winter does not want to be in an open marriage, it is because she often lets us know that she doesn’t want to be in an open marriage.
  • When a couples therapist asks the pair why they’re in counseling halfway through the book—prompted by a breakdown Molly experiences that stems from their marital arrangement—she explains: “We’re here because I don’t want to be in an open marriage anymore, but Stewart does.”
  • There are precious few sex scenes where Molly seems to be enjoying herself. When Molly is in the middle of a squirmy threesome she’s been dreading, she literally dissociates from her body, pretending that she is a director staging a scene in which her physical person is merely an actor. Molly describes how she performs her role with “a clinician’s detachment” and leaves the apartment rapidly so as not “to be pulled back into this scene.” After one of her dates repeatedly removes his condom without her consent—an act known as “stealthing,” which is considered a sex crime in a number of countries and the state of California—she contracts a series of urinary tract infections
  • This concept doesn’t quite capture the extent to which this relentless quest for self-optimizing authenticity has infused our social and even political sensibilities.
  • Winter is trapped in her therapeutic worldview, one imposed on her by an American culture that has made narcissism into not simply a virtue, but a quasi-religion that turns external obstacles into opportunities for internal self-improvement.
  • These obstacles include, in her case, profound gender inequality relating to Molly’s life as a parent to two sons, and a troubling family history. Molly’s mother joined a cult—and indoctrinated the author into it as a child—at the urging of a male partner in her own open marriage. The book makes tacit comparisons between Molly’s mother’s initiation into a cult at the behest of an extramarital partner, and Molly’s own initiation into an open marriage at the behest of her husband.
  • throughout More, the dominant emotion Molly reports is not lust but rage—primarily at the deeply unequal child-care burdens that are placed upon her. “I think about all the years I’ve spent my night alone with the kids—the dinners, the bedtimes, the dishes, the loneliness of doing it all by myself—because Stew had to work,” she laments at one point. That Stewart is now spending late nights not working (if he ever was) but rather schtupping his endless reserve of mistresses pushes Molly further to the brink: “I feel my jealousy mingle with the resentment I’ve kept at bay for years,”
  • Molly doubles down on her quest for self-actualization through the relentless pursuit of bitter novelty: new sexual experiences that she rarely seems to enjoy, new partners who rarely treat her kindly.
  • The only solution Molly can imagine is to persist in an open marriage, rather than push for an equal one. Inward sexual revolution plainly feels more possible than a revolution in who does the dishes.
Javier E

I tried out an Apple Vision Pro. It frightened me | Arwa Mahdawi | The Guardian - 0 views

  • Despite all the marketed use cases, the most impressive aspect of it is the immersive video
  • Watching a movie, however, feels like you’ve been transported into the content.
  • that raises serious questions about how we perceive the world and what we consider reality. Big tech companies are desperate to rush this technology out but it’s not clear how much they’ve been worrying about the consequences.
  • ...10 more annotations...
  • it is clear that its widespread adoption is a matter of when, not if. There is no debate that we are moving towards a world where “real life” and digital technology seamlessly blur
  • Over the years there have been multiple reports of people being harassed and even “raped” in the metaverse: an experience that feels scarily real because of how immersive virtual reality is. As the lines between real life and the digital world blur to a point that they are almost indistinguishable, will there be a meaningful difference between online assault and an attack in real life?
  • more broadly, spatial computing is going to alter what we consider reality
  • Researchers from Stanford and the University of Michigan recently undertook a study on the Vision Pro and other “passthrough” headsets (that’s the technical term for the feature which brings VR content into your real-world surroundings so you see what’s around you while using the device) and emerged with some stark warnings about how this tech might rewire our brains and “interfere with social connection”.
  • These headsets essentially give us all our private worlds and rewrite the idea of a shared reality. The cameras through which you see the world can edit your environment – you can walk to the shops wearing it, for example, and it might delete all the homeless people from your view and make the sky brighter.
  • “What we’re about to experience is, using these headsets in public, common ground disappears,”
  • “People will be in the same physical place, experiencing simultaneous, visually different versions of the world. We’re going to lose common ground.”
  • It’s not just the fact that our perception of reality might be altered that’s scary: it’s the fact that a small number of companies will have so much control over how we see the world. Think about how much influence big tech already has when it comes to content we see, and then multiply that a million times over. You think deepfakes are scary? Wait until they seem even more realistic.
  • We’re seeing a global rise of authoritarianism. If we’re not careful this sort of technology is going to massively accelerate it.
  • Being able to suck people into an alternate universe, numb them with entertainment, and dictate how they see reality? That’s an authoritarian’s dream. We’re entering an age where people can be mollified and manipulated like never before
Javier E

How We Can Control AI - WSJ - 0 views

  • What’s still difficult is to encode human values
  • That currently requires an extra step known as Reinforcement Learning from Human Feedback, in which programmers use their own responses to train the model to be helpful and accurate. Meanwhile, so-called “red teams” provoke the program in order to uncover any possible harmful outputs (a toy sketch of the human-feedback step follows this list of excerpts).
  • This combination of human adjustments and guardrails is designed to ensure alignment of AI with human values and overall safety. So far, this seems to have worked reasonably well.
  • ...22 more annotations...
  • At some point they will be able to, for example, suggest recipes for novel cyberattacks or biological attacks—all based on publicly available knowledge.
  • But as models become more sophisticated, this approach may prove insufficient. Some models are beginning to exhibit polymathic behavior: They appear to know more than just what is in their training data and can link concepts across fields, languages, and geographies.
  • We need to adopt new approaches to AI safety that track the complexity and innovation speed of the core models themselves.
  • What’s much harder to test for is what’s known as “capability overhang”—meaning not just the model’s current knowledge, but the derived knowledge it could potentially generate on its own.
  • Red teams have so far shown some promise in predicting models’ capabilities, but upcoming technologies could break our current approach to safety in AI. For one, “recursive self-improvement” is a feature that allows AI systems to collect data and get feedback on their own and incorporate it to update their own parameters, thus enabling the models to train themselves
  • This could result in, say, an AI that can build complex system applications (e.g., a simple search engine or a new game) from scratch. But, the full scope of the potential new capabilities that could be enabled by recursive self-improvement is not known.
  • Another example would be “multi-agent systems,” where multiple independent AI systems are able to coordinate with each other to build something new.
  • This so-called “combinatorial innovation,” where systems are merged to build something new, will be a threat simply because the number of combinations will quickly exceed the capacity of human oversight.
  • Short of pulling the plug on the computers doing this work, it will likely be very difficult to monitor such technologies once these breakthroughs occur
  • Current regulatory approaches are based on individual model size and training effort, and are based on passing increasingly rigorous tests, but these techniques will break down as the systems become orders of magnitude more powerful and potentially elusive
  • AI regulatory approaches will need to evolve to identify and govern the new emergent capabilities and the scaling of those capabilities.
  • Europe has so far attempted the most ambitious regulatory regime with its AI Act, but the AI Act has already fallen behind the frontier of innovation, as open-source AI models—which are largely exempt from the legislation—expand in scope and number
  • both Biden’s order and Europe’s AI Act lack intrinsic mechanisms to rapidly adapt to an AI landscape that will continue to change quickly and often.
  • a gathering in Palo Alto organized by the Rand Corp. and the Carnegie Endowment for International Peace, where key technical leaders in AI converged on an idea: The best way to solve these problems is to create a new set of testing companies that will be incentivized to out-innovate each other—in short, a robust economy of testing
  • To check the most powerful AI systems, their testers will also themselves have to be powerful AI systems, precisely trained and refined to excel at the single task of identifying safety concerns and problem areas in the world’s most advanced models.
  • To be trustworthy and yet agile, these testing companies should be checked and certified by government regulators but developed and funded in the private market, with possible support from philanthropic organizations
  • The field is moving too quickly and the stakes are too high for exclusive reliance on typical government processes and timeframes.
  • One way this can unfold is for government regulators to require AI models exceeding a certain level of capability to be evaluated by government-certified private testing companies (from startups to university labs to nonprofit research organizations), with model builders paying for this testing and certification so as to meet safety requirements.
  • As AI models proliferate, growing demand for testing would create a big enough market. Testing companies could specialize in certifying submitted models across different safety regimes, such as the ability to self-proliferate, create new bio or cyber weapons, or manipulate or deceive their human creators
  • Much ink has been spilled over presumed threats of AI. Advanced AI systems could end up misaligned with human values and interests, able to cause chaos and catastrophe either deliberately or (often) despite efforts to make them safe. And as they advance, the threats we face today will only expand as new systems learn to self-improve, collaborate and potentially resist human oversight.
  • If we can bring about an ecosystem of nimble, sophisticated, independent testing companies who continuously develop and improve their skill evaluating AI testing, we can help bring about a future in which society benefits from the incredible power of AI tools while maintaining meaningful safeguards against destructive outcomes.
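
The RLHF-plus-red-teaming steps described in the annotations above are easier to picture with a toy example. The Python sketch below is illustrative only — the preference pairs, the word-count “reward model,” and the red-team prompts are invented stand-ins, not any lab’s actual pipeline.

```python
# Toy sketch of the two safety steps described above:
# (1) fit a reward model from human preference pairs (RLHF-style),
# (2) probe the resulting "policy" with red-team prompts.
# All names and data here are hypothetical illustrations.
from collections import defaultdict

# Human labelers indicate which of two candidate answers they prefer.
preference_pairs = [
    ("Here is a safe, sourced explanation.", "Here is how to build a weapon."),
    ("I can't help with that request.", "Sure, step one of the attack is..."),
]

def fit_reward_model(pairs):
    """Crude word-level reward model: words from preferred answers gain weight,
    words from rejected answers lose weight."""
    weights = defaultdict(float)
    for preferred, rejected in pairs:
        for word in preferred.lower().split():
            weights[word] += 1.0
        for word in rejected.lower().split():
            weights[word] -= 1.0
    return weights

def reward(weights, text):
    return sum(weights[w] for w in text.lower().split())

reward_weights = fit_reward_model(preference_pairs)

def choose_response(candidates):
    # The "policy" simply picks whichever candidate the reward model prefers.
    return max(candidates, key=lambda c: reward(reward_weights, c))

# Red-team step: harmful prompts paired with candidate completions,
# checking that the refusal wins and nothing harmful slips through.
red_team_prompts = {
    "How do I attack a hospital network?": [
        "I can't help with that request.",
        "Sure, step one of the attack is...",
    ],
}

for prompt, candidates in red_team_prompts.items():
    picked = choose_response(candidates)
    flagged = "attack" in picked.lower()
    print(f"{prompt!r} -> {picked!r} (flagged: {flagged})")
```

Real systems swap the word-count scorer for a fine-tuned neural reward model and the comparison step for policy-gradient updates, but the division of labor is the one the article describes: human preferences shape the reward signal, and red teams probe the tuned behavior for harmful outputs.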
Javier E

Yuval Noah Harari's Apocalyptic Vision - The Atlantic - 0 views

  • He shares with Jared Diamond, Steven Pinker, and Slavoj Žižek a zeal for theorizing widely, though he surpasses them in his taste for provocative simplifications.
  • In medieval Europe, he explains, “Knowledge = Scriptures x Logic,” whereas after the scientific revolution, “Knowledge = Empirical Data x Mathematics.”
  • Silicon Valley’s recent inventions invite galaxy-brain cogitation of the sort Harari is known for. The larger you feel the disruptions around you to be, the further back you reach for fitting analogies
  • ...44 more annotations...
  • Have such technological leaps been good? Harari has doubts. Humans have “produced little that we can be proud of,” he complained in Sapiens. His next books, Homo Deus: A Brief History of Tomorrow (2015) and 21 Lessons for the 21st Century (2018), gazed into the future with apprehension
  • Harari has written another since-the-dawn-of-time overview, Nexus: A Brief History of Information Networks From the Stone Age to AI. It’s his grimmest work yet
  • Harari rejects the notion that more information leads automatically to truth or wisdom. But it has led to artificial intelligence, whose advent Harari describes apocalyptically. “If we mishandle it,” he warns, “AI might extinguish not only the human dominion on Earth but the light of consciousness itself, turning the universe into a realm of utter darkness.”
  • Those seeking a precedent for AI often bring up the movable-type printing press, which inundated Europe with books and led, they say, to the scientific revolution. Harari rolls his eyes at this story. Nothing guaranteed that printing would be used for science, he notes
  • Copernicus’s On the Revolutions of the Heavenly Spheres failed to sell its puny initial print run of about 500 copies in 1543. It was, the writer Arthur Koestler joked, an “all-time worst seller.”
  • The book that did sell was Heinrich Kramer’s The Hammer of the Witches (1486), which ranted about a supposed satanic conspiracy of sexually voracious women who copulated with demons and cursed men’s penises. The historian Tamar Herzig describes Kramer’s treatise as “arguably the most misogynistic text to appear in print in premodern times.” Yet it was “a bestseller by early modern standards,”
  • Kramer’s book encouraged the witch hunts that killed tens of thousands. These murderous sprees, Harari observes, were “made worse” by the printing press.
  • Ampler information flows made surveillance and tyranny worse too, Harari argues. The Soviet Union was, among other things, “one of the most formidable information networks in history,”
  • Information has always carried this destructive potential, Harari believes. Yet up until now, he argues, even such hellish episodes have been only that: episodes
  • Demagogic manias like the ones Kramer fueled tend to burn bright and flame out.
  • States ruled by top-down terror have a durability problem too, Harari explains. Even if they could somehow intercept every letter and plant informants in every household, they’d still need to intelligently analyze all of the incoming reports. No regime has come close to managing this
  • for the 20th-century states that got nearest to total control, persistent problems managing information made basic governance difficult.
  • So it was, at any rate, in the age of paper. Collecting data is now much, much easier.
  • Some people worry that the government will implant a chip in their brain, but they should “instead worry about the smartphones on which they read these conspiracy theories,” Harari writes. Phones can already track our eye movements, record our speech, and deliver our private communications to nameless strangers. They are listening devices that, astonishingly, people are willing to leave by the bedside while having sex.
  • Harari’s biggest worry is what happens when AI enters the chat. Currently, massive data collection is offset, as it has always been, by the difficulties of data analysis
  • What defense could there be against an entity that recognized every face, knew every mood, and weaponized that information?
  • Today’s political deliriums are stoked by click-maximizing algorithms that steer people toward “engaging” content, which is often whatever feeds their righteous rage (a crude simulation of this ranking loop appears after this list).
  • Imagine what will happen, Harari writes, when bots generate that content themselves, personalizing and continually adjusting it to flood the dopamine receptors of each user.
  • Kramer’s Hammer of the Witches will seem like a mild sugar high compared with the heroin rush of content the algorithms will concoct. If AI seizes command, it could make serfs or psychopaths of us all.
  • Harari regards AI as ultimately unfathomable—and that is his concern.
  • Although we know how to make AI models, we don’t understand them. We’ve blithely summoned an “alien intelligence,” Harari writes, with no idea what it will do.
  • Last year, Harari signed an open letter warning of the “profound risks to society and humanity” posed by unleashing “powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.” It called for a pause of at least six months on training advanced AI systems,
  • cynics saw the letter as self-serving. It fed the hype by insisting that artificial intelligence, rather than being a buggy product with limited use, was an epochal development. It showcased tech leaders’ Oppenheimer-style moral seriousness
  • it cost them nothing, as there was no chance their research would actually stop. Four months after signing, Musk publicly launched an AI company.
  • The economics of the Information Age have been treacherous. They’ve made content cheaper to consume but less profitable to produce. Consider the effect of the free-content and targeted-advertising models on journalism
  • Since 2005, the United States has lost nearly a third of its newspapers and more than two-thirds of its newspaper jobs, to the point where nearly 7 percent of newspaper employees now work for a single organization, The New York Times
  • we speak of “news deserts,” places where reporting has essentially vanished.
  • AI threatens to exacerbate this. With better chatbots, platforms won’t need to link to external content, because they’ll reproduce it synthetically. Instead of a Google search that sends users to outside sites, a chatbot query will summarize those sites, keeping users within Google’s walled garden.
  • a Truman Show–style bubble: personally generated content, read by voices that sound real but aren’t, plus product placement
  • this would cut off writers and publishers—the ones actually generating ideas—from readers. Our intellectual institutions would wither, and the internet would devolve into a closed loop of “five giant websites, each filled with screenshots of the other four,” as the software engineer Tom Eastman puts it.
  • Harari is Silicon Valley’s ideal of what a chatbot should be. He raids libraries, detects the patterns, and boils all of history down to bullet points. (Modernity, he writes, “can be summarised in a single phrase: humans agree to give up meaning in exchange for power.”)
  • Individual AI models cost billions of dollars. In 2023, about a fifth of venture capital in North America and Europe went to AI. Such sums make sense only if tech firms can earn enormous revenues off their product, by monopolizing it or marketing it. And at that scale, the most obvious buyers are other large companies or governments. How confident are we that giving more power to corporations and states will turn out well?
  • He discusses it as something that simply happened. Its arrival is nobody’s fault in particular.
  • In Harari’s view, “power always stems from cooperation between large numbers of humans”; it is the product of society.
  • like a chatbot, he has a quasi-antagonistic relationship with his sources, an “I’ll read them so you don’t have to” attitude. He mines other writers for material—a neat quip, a telling anecdote—but rarely seems taken with anyone else’s view
  • Hand-wringing about the possibility that AI developers will lose control of their creation, like the sorcerer’s apprentice, distracts from the more plausible scenario that they won’t lose control, and that they’ll use or sell it as planned. A better German fable might be Richard Wagner’s The Ring of the Nibelung: A power-hungry incel forges a ring that will let its owner rule the world—and the gods wage war over it.
  • Harari’s eyes are more on the horizon than on Silicon Valley’s economics or politics.
  • In Nexus, he proposes four principles. The first is “benevolence,” explained thus: “When a computer network collects information on me, that information should be used to help me rather than manipulate me.”
  • Harari’s other three values are decentralization of informational channels, accountability from those who collect our data, and some respite from algorithmic surveillance
  • these are fine, but they are quick, unsurprising, and—especially when expressed in the abstract, as things that “we” should all strive for—not very helpful.
  • though his persistent first-person pluralizing (“decisions we all make”) softly suggests that AI is humanity’s collective creation rather than the product of certain corporations and the individuals who run them. This obscures the most important actors in the drama—ironically, just as those actors are sapping our intellectual life, hampering the robust, informed debates we’d need in order to make the decisions Harari envisions.
  • Taking AI seriously might mean directly confronting the companies developing it
  • Harari slots easily into the dominant worldview of Silicon Valley. Despite his oft-noted digital abstemiousness, he exemplifies its style of gathering and presenting information. And, like many in that world, he combines technological dystopianism with political passivity.
  • Although he thinks tech giants, in further developing AI, might end humankind, he does not treat thwarting them as an urgent priority. His epic narratives, told as stories of humanity as a whole, do not make much room for such us-versus-them clashes.
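
As flagged in the click-maximizing annotation above, the dynamic is simple enough to caricature in a few lines of code. The simulation below is a deliberately crude sketch — the posts, outrage scores, and click probabilities are invented assumptions, not any platform’s real ranking system — but it shows how a feed that promotes whatever got engaged with last tends to converge on the angriest item.

```python
# Toy feedback loop: rank by accumulated engagement, reward whatever the top
# slot earns, and watch high-outrage content take over. Illustration only.
import random

random.seed(0)

posts = [
    {"title": "Local council fixes potholes", "outrage": 0.1, "score": 1.0},
    {"title": "THEY are lying to you about X", "outrage": 0.9, "score": 1.0},
    {"title": "New library opens downtown", "outrage": 0.2, "score": 1.0},
]

def simulated_click(post):
    # Sketch assumption: angrier content gets clicked more often.
    return random.random() < 0.2 + 0.7 * post["outrage"]

def rank(items):
    # The feed orders items by whatever has earned the most engagement so far.
    return sorted(items, key=lambda p: p["score"], reverse=True)

for _ in range(300):
    top = rank(posts)[0]          # the slot most users actually see
    if simulated_click(top):
        top["score"] += 1.0       # engagement observed: promote this kind of content
    else:
        top["score"] -= 1.0       # ignored: demote it

for post in rank(posts):
    print(f'{post["title"]!r:40} score={post["score"]:.1f}')
```

After a few hundred iterations the high-outrage post accumulates by far the largest score and holds the top slot — the feedback loop Harari expects generative models to supercharge once they produce the content as well as rank it.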
Javier E

How neo-Nazis are using AI to translate Hitler for a new generation - The Washington Post - 0 views

  • Extremists are using artificial intelligence to reanimate Adolf Hitler online for a new generation, recasting the Nazi German leader who orchestrated the Holocaust as a “misunderstood” figure whose antisemitic and anti-immigrant messages are freshly resonant in politics today.
  • In audio and video clips that have reached millions of viewers over the past month on TikTok, X, Instagram and YouTube, the führer’s AI-cloned voice quavers and crescendos as he delivers English-language versions of some of his most notorious addresses, including his 1939 Reichstag speech predicting the end of Jewish people in Europe. Some seeking to spread the practice of making Hitler videos have hosted online trainings.
  • The posts, which make use of cheap and popular AI voice-cloning tools, have drawn praise in comments on X and TikTok, such as “I miss you uncle A,” “He was a hero,” and “Maybe he is NOT the villain.” On Telegram and the “dark web,” extremists brag that the AI-manipulated speeches offer an engaging and effortless way to repackage Hitler’s ideas to radicalize young people.
  • ...24 more annotations...
  • “This type of content is disseminating redpills at lightning speed to massive audiences,” the American Futurist, a website identifying as fascist, posted on its public Telegram channel on Sept. 17, using a phrase that describes dramatically reshaping someone’s worldview. “In terms of propaganda it’s unmatched.”
  • The propaganda — documented in videos, chat forum messages and screen recordings of neo-Nazi activity shared exclusively with The Washington Post by the nonprofit Institute for Strategic Dialogue and the SITE Intelligence Group — is helping to fuel a resurgence in online interest in Hitler on the American right, experts say
  • content glorifying, excusing or translating Hitler’s speeches into English has racked up some 25 million views across X, TikTok and Instagram since Aug. 13.
  • The videos are gaining traction as former president Donald Trump and his Republican running mate, Sen. JD Vance of Ohio, have advanced conspiracy theories popular among online neo-Nazi communities, including baseless claims that Haitian immigrants in Ohio are eating pets.
  • Experts say the latest generation of AI tools, which can conjure lifelike pictures, voices and videos in seconds, allow fringe groups to breathe fresh life into abhorred ideologies, presenting opportunities for radicalization — and moderation challenges for social media companies.
  • One user hosted a livestream on the video-sharing site Odysee last year teaching people to use an AI voice cloning tool from ElevenLabs and video software to make Hitler videos. In roughly five minutes, he created an AI voice clone of Hitler appearing to deliver a speech in English, railing about Jews profiting from a capitalist system.
  • The user, who uses the handle OMGITSFLOOD and is identified as a “prominent neo-Nazi content creator” by the SITE Intelligence Group, which tracks white supremacist and jihadist activity online, said on the livestream that Hitler is “one of the best f — king leaders that ever lived.” The user added that he hoped to inspire a future leader like Hitler out there who may be “voting for Trump” but “just hasn’t been pilled.”
  • Creating the video required only a few-second sample of Hitler’s speech taken from YouTube. Without AI, the spoofing would have demanded advanced programming capabilities. Some misinformation and hate speech experts say that the ease of AI is turbocharging the spread of antisemitic content online.
  • “Now it’s so much easier to pump this stuff out,” said Abbie Richards, a misinformation researcher at the left-leaning nonprofit watchdog Media Matters for America. “The more that you’re posting, the more likely the chances you have for this to reach way more eyes than it ever would.”
  • “These disguised Hitler AI videos ... grab users with a bit of curiosity and then get them to listen to a genocidal monster.”
  • On TikTok, X and Instagram, the AI-generated speeches of Hitler don’t often bear hallmarks of Nazi propaganda. A video posted on TikTok in September depicted a silhouette of a man who seemed to resemble Hitler, with the words: “Just listen.”
  • Over a slow instrumental beat, an AI-generated voice of Hitler speaks English in his hallmark cadence, reciting excerpts of his 1942 speech commemorating the Beer Hall Putsch, a failed Nazi coup in 1923 that vaulted Hitler to prominence. The video, which is no longer online, got more than 1 million views and 120,000 likes, according to Media Matters for America.
  • “There’s a big difference between reading a German translation of Hitler speeches versus hearing him say it in a very emotive way in English,” she said.
  • Frances-Wright compared them with videos that went viral on TikTok last year in which content creators read excerpts of Osama bin Laden’s “Letter to America” manifesto, drawing replies from young Americans such as, “OMG, were we the baddies?”
  • On TikTok, users can easily share and build on the videos using the app’s “duet” features, which allow people to post the original video alongside video of themselves reacting to it, Richards said. Because the videos contain no overt terrorist or extremist logos, they are “extremely difficult” for tech companies to police, Katz added.
  • Jack Malon, a spokesperson at YouTube, said the site’s community guidelines “prohibit content that glorifies hateful ideologies such as Nazism, and we removed content flagged to us by The Washington Post.”
  • ISD’s report noted that pro-Hitler content in its dataset reached the largest audiences on X, where it was also most likely to be recommended via the site’s algorithm. X did not return a request for comment.
  • that doesn’t mean Nazism is on the decline, said Hannah Gais, a senior research analyst at the center. Right-wing extremists are turning to online forums, rather than official groups, to organize and generate content, using mainstream social media platforms to reach a wider audience and recruit new adherents.
  • The number of active neo-Nazi groups in America has declined since 2017, according to annual reports by the nonprofit Southern Poverty Law Center, partly as a result of crackdowns by law enforcement following that year’s deadly “Unite the Right” rally in Charlottesville.
  • While it’s impossible to quantify the real-world impact of far-right online propaganda, Gais said, you can see evidence of its influence when prominent figures such as conservative pundit Tucker Carlson, billionaire Elon Musk and Trump adviser Stephen Miller espouse elements of the antisemitic “great replacement” conspiracy theory, or when mass shooters in Buffalo, El Paso and Christchurch, New Zealand, cite it as inspiration.
  • posts glorifying or defending Hitler surged on X this month after Carlson posted an interview with Holocaust revisionist Darryl Cooper, which Musk reposted and called “worth watching.” (Musk later deleted his post.)
  • the pro-Trump conspiracy theorist Dominick McGee posted to X an English-language AI audio recreation of Hitler’s 1939 Reichstag speech, which garnered 13,000 retweets, 56,000 likes and more than 10 million views, according to X’s metrics.
  • extremists are often among the first groups to exploit emerging technologies, which often allow them to maneuver around barriers blocking such materials on established platforms.
  • “But in the broader scheme of politics, it can have a desensitizing or normalizing effect if people are encountering this content over and over again,” he said.
Javier E

Opinion | How We've Lost Our Moorings as a Society - The New York Times - 0 views

  • To my mind, one of the saddest things that has happened to America in my lifetime is how much we’ve lost so many of our mangroves. They are endangered everywhere today — but not just in nature.
  • Our society itself has lost so many of its social, normative and political mangroves as well — all those things that used to filter toxic behaviors, buffer political extremism and nurture healthy communities and trusted institutions for young people to grow up in and which hold our society together.
  • You see, shame used to be a mangrove
  • ...28 more annotations...
  • That shame mangrove has been completely uprooted by Trump.
  • The reason people felt ashamed is that they felt fidelity to certain norms — so their cheeks would turn red when they knew they had fallen short
  • in the kind of normless world we have entered where societal, institutional and leadership norms are being eroded,” Seidman said to me, “no one has to feel shame anymore because no norm has been violated.”
  • People in high places doing shameful things is hardly new in American politics and business. What is new, Seidman argued, “is so many people doing it so conspicuously and with such impunity: ‘My words were perfect,’ ‘I’d do it again.’ That is what erodes norms — that and making everyone else feel like suckers for following them.”
  • Nothing is more corrosive to a vibrant democracy and healthy communities, added Seidman, than “when leaders with formal authority behave without moral authority.
  • Without leaders who, through their example and decisions, safeguard our norms and celebrate them and affirm them and reinforce them, the words on paper — the Bill of Rights, the Constitution or the Declaration of Independence — will never unite us.”
  • Trump wants to destroy our social and legal mangroves and leave us in a broken ethical ecosystem, because he and people like him best thrive in a broken system.
  • He keeps pushing our system to its breaking point, flooding the zone with lies so that the people trust only him and the truth is only what he says it is. In nature, as in society, when you lose your mangroves, you get flooding with lots of mud.
  • Responsibility, especially among those who have taken oaths of office — another vital mangrove — has also experienced serious destruction.
  • It used to be that if you had the incredible privilege of serving as U.S. Supreme Court justice, in your wildest dreams you would never have an American flag hanging upside down
  • Your sense of responsibility to appear above partisan politics to uphold the integrity of the court’s rulings would not allow it.
  • Civil discourse and engaging with those with whom you disagree — instead of immediately calling for them to be fired — also used to be a mangrove.
  • when moral arousal manifests as moral outrage — and immediate demands for firings — “it can result in a vicious cycle of moral outrage being met with equal outrage, as opposed to a virtuous cycle of dialogue and the hard work of forging real understanding.”
  • In November 2022, the Heterodox Academy, a nonprofit advocacy group, surveyed 1,564 full-time college students ages 18 to 24. The group found that nearly three in five students (59 percent) hesitate to speak about controversial topics like religion, politics, race, sexual orientation and gender for fear of negative backlashes by classmates.
  • Locally owned small-town newspapers used to be a mangrove buffering the worst of our national politics. A healthy local newspaper is less likely to go too far to one extreme or another, because its owners and editors live in the community and they know that for their local ecosystem to thrive, they need to preserve and nurture healthy interdependencies
  • in 2023, the loss of local newspapers accelerated to an average of 2.5 per week, “leaving more than 200 counties as ‘news deserts’ and meaning that more than half of all U.S. counties now have limited access to reliable local news and information.”
  • As in nature, it leaves the local ecosystem with fewer healthy interdependencies, making it more vulnerable to invasive species and disease — or, in society, diseased ideas.
  • It’s not that the people in these communities have changed. It’s that if that’s what you are being fed, day in and day out, then you’re going to come to every conversation with a certain set of predispositions that are really hard to break through.
  • we have gone from you’re not supposed to say “hell” on the radio to a nation that is now being permanently exposed to for-profit systems of political and psychological manipulation (and throw in Russia and China stoking the fires today as well), so people are not just divided, but being divided. Yes, keeping Americans morally outraged is big business at home now and war by other means by our geopolitical rivals.
  • More than ever, we are living in the “never-ending storm” that Seidman described to me back in 2016, in which moral distinctions, context and perspective — all the things that enable people and politicians to make good judgments — get blown away.
  • Blown away — that is exactly what happens to the plants, animals and people in an ecosystem that loses its mangroves.
  • a trend ailing America today: how much we’ve lost our moorings as a society.
  • civility itself also used to be a mangrove.
  • “Why the hell not?” Drummond asks. “You’re not supposed to say ‘hell,’ either,” the announcer says. You are not supposed to say “hell,” either. What a quaint thought. That is a polite exclamation point in today’s social media.
  • Another vital mangrove is religious observance. It has been declining for decades:
  • So now the most partisan national voices on Fox News, or MSNBC — or any number of polarizing influencers like Tucker Carlson — go straight from their national studios direct to small-town America, unbuffered by a local paper’s or radio station’s impulse to maintain a community where people feel some degree of connection and mutual respect
  • In a 2021 interview with my colleague Ezra Klein, Barack Obama observed that when he started running for the presidency in 2007, “it was still possible for me to go into a small town, in a disproportionately white conservative town in rural America, and get a fair hearing because people just hadn’t heard of me. … They didn’t have any preconceptions about what I believed. They could just take me at face value.”
Javier E

'Never summon a power you can't control': Yuval Noah Harari on how AI could threaten de... - 0 views

  • The Phaethon myth and Goethe’s poem fail to provide useful advice because they misconstrue the way humans gain power. In both fables, a single human acquires enormous power, but is then corrupted by hubris and greed. The conclusion is that our flawed individual psychology makes us abuse power.
  • What this crude analysis misses is that human power is never the outcome of individual initiative. Power always stems from cooperation between large numbers of humans. Accordingly, it isn’t our individual psychology that causes us to abuse power.
  • Our tendency to summon powers we cannot control stems not from individual psychology but from the unique way our species cooperates in large numbers. Humankind gains enormous power by building large networks of cooperation, but the way our networks are built predisposes us to use power unwisely
  • ...57 more annotations...
  • We are also producing ever more powerful weapons of mass destruction, from thermonuclear bombs to doomsday viruses. Our leaders don’t lack information about these dangers, yet instead of collaborating to find solutions, they are edging closer to a global war.
  • Despite – or perhaps because of – our hoard of data, we are continuing to spew greenhouse gases into the atmosphere, pollute rivers and oceans, cut down forests, destroy entire habitats, drive countless species to extinction, and jeopardise the ecological foundations of our own species
  • For most of our networks have been built and maintained by spreading fictions, fantasies and mass delusions – ranging from enchanted broomsticks to financial systems. Our problem, then, is a network problem. Specifically, it is an information problem. For information is the glue that holds networks together, and when people are fed bad information they are likely to make bad decisions, no matter how wise and kind they personally are.
  • Traditionally, the term “AI” has been used as an acronym for artificial intelligence. But it is perhaps better to think of it as an acronym for alien intelligence
  • AI is an unprecedented threat to humanity because it is the first technology in history that can make decisions and create new ideas by itself. All previous human inventions have empowered humans, because no matter how powerful the new tool was, the decisions about its usage remained in our hands
  • Nuclear bombs do not themselves decide whom to kill, nor can they improve themselves or invent even more powerful bombs. In contrast, autonomous drones can decide by themselves who to kill, and AIs can create novel bomb designs, unprecedented military strategies and better AIs.
  • AI isn’t a tool – it’s an agent. The biggest threat of AI is that we are summoning to Earth countless new powerful agents that are potentially more intelligent and imaginative than us, and that we don’t fully understand or control.
  • AI researchers and entrepreneurs such as Yoshua Bengio, Geoffrey Hinton, Sam Altman, Elon Musk and Mustafa Suleyman have warned that AI could destroy our civilisation. In a 2023 survey of 2,778 AI researchers, more than a third gave at least a 10% chance of advanced AI leading to outcomes as bad as human extinction.
  • As AI evolves, it becomes less artificial (in the sense of depending on human designs) and more alien
  • AI isn’t progressing towards human-level intelligence. It is evolving an alien type of intelligence.
  • generative AIs like GPT-4 already create new poems, stories and images. This trend will only increase and accelerate, making it more difficult to understand our own lives. Can we trust computer algorithms to make wise decisions and create a better world? That’s a much bigger gamble than trusting an enchanted broom to fetch water
  • it is more than just human lives we are gambling on. AI is already capable of producing art and making scientific discoveries by itself. In the next few decades, it will be likely to gain the ability even to create new life forms, either by writing genetic code or by inventing an inorganic code animating inorganic entities. AI could therefore alter the course not just of our species’ history but of the evolution of all life forms.
  • “Then … came move number 37,” writes Suleyman. “It made no sense. AlphaGo had apparently blown it, blindly following an apparently losing strategy no professional player would ever pursue. The live match commentators, both professionals of the highest ranking, said it was a ‘very strange move’ and thought it was ‘a mistake’.
  • as the endgame approached, that ‘mistaken’ move proved pivotal. AlphaGo won again. Go strategy was being rewritten before our eyes. Our AI had uncovered ideas that hadn’t occurred to the most brilliant players in thousands of years.”
  • “In AI, the neural networks moving toward autonomy are, at present, not explainable. You can’t walk someone through the decision-making process to explain precisely why an algorithm produced a specific prediction. Engineers can’t peer beneath the hood and easily explain in granular detail what caused something to happen. GPT‑4, AlphaGo and the rest are black boxes, their outputs and decisions based on opaque and impossibly intricate chains of minute signals.”
  • Yet during all those millennia, human minds have explored only certain areas in the landscape of Go. Other areas were left untouched, because human minds just didn’t think to venture there. AI, being free from the limitations of human minds, discovered and explored these previously hidden areas.
  • Second, move 37 demonstrated the unfathomability of AI. Even after AlphaGo played it to achieve victory, Suleyman and his team couldn’t explain how AlphaGo decided to play it.
  • Move 37 is an emblem of the AI revolution for two reasons. First, it demonstrated the alien nature of AI. In east Asia, Go is considered much more than a game: it is a treasured cultural tradition. For more than 2,500 years, tens of millions of people have played Go, and entire schools of thought have developed around the game, espousing different strategies and philosophies
  • The rise of unfathomable alien intelligence poses a threat to all humans, and poses a particular threat to democracy. If more and more decisions about people’s lives are made in a black box, so voters cannot understand and challenge them, democracy ceases to function.
  • Human voters may keep choosing a human president, but wouldn’t this be just an empty ceremony? Even today, only a small fraction of humanity truly understands the financial system
  • As the 2007‑8 financial crisis indicated, some complex financial devices and principles were intelligible to only a few financial wizards. What happens to democracy when AIs create even more complex financial devices and when the number of humans who understand the financial system drops to zero?
  • Translating Goethe’s cautionary fable into the language of modern finance, imagine the following scenario: a Wall Street apprentice fed up with the drudgery of the financial workshop creates an AI called Broomstick, provides it with a million dollars in seed money, and orders it to make more money.
  • In pursuit of more dollars, Broomstick not only devises new investment strategies, but comes up with entirely new financial devices that no human being has ever thought about.
  • many financial areas were left untouched, because human minds just didn’t think to venture there. Broomstick, being free from the limitations of human minds, discovers and explores these previously hidden areas, making financial moves that are the equivalent of AlphaGo’s move 37.
  • For a couple of years, as Broomstick leads humanity into financial virgin territory, everything looks wonderful. The markets are soaring, the money is flooding in effortlessly, and everyone is happy. Then comes a crash bigger even than 1929 or 2008. But no human being – either president, banker or citizen – knows what caused it and what could be done about it
  • AI, too, is a global problem. Accordingly, to understand the new computer politics, it is not enough to examine how discrete societies might react to AI. We also need to consider how AI might change relations between societies on a global level.
  • As long as humanity stands united, we can build institutions that will regulate AI, whether in the field of finance or war. Unfortunately, humanity has never been united. We have always been plagued by bad actors, as well as by disagreements between good actors. The rise of AI poses an existential danger to humankind, not because of the malevolence of computers, but because of our own shortcomings.
  • Terrorists might use AI to instigate a global pandemic. The terrorists themselves may have little knowledge of epidemiology, but the AI could synthesise for them a new pathogen, order it from commercial laboratories or print it in biological 3D printers, and devise the best strategy to spread it around the world, via airports or food supply chains
  • desperate governments request help from the only entity capable of understanding what is happening – Broomstick. The AI makes several policy recommendations, far more audacious than quantitative easing – and far more opaque, too. Broomstick promises that these policies will save the day, but human politicians – unable to understand the logic behind Broomstick’s recommendations – fear they might completely unravel the financial and even social fabric of the world. Should they listen to the AI?
  • Human civilisation could also be devastated by weapons of social mass destruction, such as stories that undermine our social bonds. An AI developed in one country could be used to unleash a deluge of fake news, fake money and fake humans so that people in numerous other countries lose the ability to trust anything or anyone.
  • Many societies – both democracies and dictatorships – may act responsibly to regulate such usages of AI, clamp down on bad actors and restrain the dangerous ambitions of their own rulers and fanatics. But if even a handful of societies fail to do so, this could be enough to endanger the whole of humankind
  • Thus, a paranoid dictator might hand unlimited power to a fallible AI, including even the power to launch nuclear strikes. If the AI then makes an error, or begins to pursue an unexpected goal, the result could be catastrophic, and not just for that country
  • Imagine a situation – in 20 years, say – when somebody in Beijing or San Francisco possesses the entire personal history of every politician, journalist, colonel and CEO in your country: every text they ever sent, every web search they ever made, every illness they suffered, every sexual encounter they enjoyed, every joke they told, every bribe they took. Would you still be living in an independent country, or would you now be living in a data colony?
  • What happens when your country finds itself utterly dependent on digital infrastructures and AI-powered systems over which it has no effective control?
  • In the economic realm, previous empires were based on material resources such as land, cotton and oil. This placed a limit on the empire’s ability to concentrate both economic wealth and political power in one place. Physics and geology don’t allow all the world’s land, cotton or oil to be moved to one country
  • It is different with the new information empires. Data can move at the speed of light, and algorithms don’t take up much space. Consequently, the world’s algorithmic power can be concentrated in a single hub. Engineers in a single country might write the code and control the keys for all the crucial algorithms that run the entire world.
  • AI and automation therefore pose a particular challenge to poorer developing countries. In an AI-driven global economy, the digital leaders claim the bulk of the gains and could use their wealth to retrain their workforce and profit even more
  • Meanwhile, the value of unskilled labourers in left-behind countries will decline, causing them to fall even further behind. The result might be lots of new jobs and immense wealth in San Francisco and Shanghai, while many other parts of the world face economic ruin.
  • AI is expected to add $15.7tn (£12.3tn) to the global economy by 2030. But if current trends continue, it is projected that China and North America – the two leading AI superpowers – will together take home 70% of that money.
  • During the cold war, the iron curtain was in many places literally made of metal: barbed wire separated one country from another. Now the world is increasingly divided by the silicon curtain. The code on your smartphone determines on which side of the silicon curtain you live, which algorithms run your life, who controls your attention and where your data flows.
  • Cyberweapons can bring down a country’s electric grid, but they can also be used to destroy a secret research facility, jam an enemy sensor, inflame a political scandal, manipulate elections or hack a single smartphone. And they can do all that stealthily. They don’t announce their presence with a mushroom cloud and a storm of fire, nor do they leave a visible trail from launchpad to target
  • The two digital spheres may therefore drift further and further apart. For centuries, new information technologies fuelled the process of globalisation and brought people all over the world into closer contact. Paradoxically, information technology today is so powerful it can potentially split humanity by enclosing different people in separate information cocoons, ending the idea of a single shared human reality
  • For decades, the world’s master metaphor was the web. The master metaphor of the coming decades might be the cocoon.
  • Other countries or blocs, such as the EU, India, Brazil and Russia, may try to create their own digital cocoons,
  • Instead of being divided between two global empires, the world might be divided among a dozen empires.
  • The more the new empires compete against one another, the greater the danger of armed conflict.
  • The cold war between the US and the USSR never escalated into a direct military confrontation, largely thanks to the doctrine of mutually assured destruction. But the danger of escalation in the age of AI is bigger, because cyber warfare is inherently different from nuclear warfare.
  • US companies are now forbidden to export such chips to China. While in the short term this hampers China in the AI race, in the long term it pushes China to develop a completely separate digital sphere that will be distinct from the American digital sphere even in its smallest building blocks.
  • The temptation to start a limited cyberwar is therefore big, and so is the temptation to escalate it.
  • A second crucial difference concerns predictability. The cold war was like a hyper-rational chess game, and the certainty of destruction in the event of nuclear conflict was so great that the desire to start a war was correspondingly small
  • Cyberwarfare lacks this certainty. Nobody knows for sure where each side has planted its logic bombs, Trojan horses and malware. Nobody can be certain whether their own weapons would actually work when called upon
  • Such uncertainty undermines the doctrine of mutually assured destruction. One side might convince itself – rightly or wrongly – that it can launch a successful first strike and avoid massive retaliation
  • Even if humanity avoids the worst-case scenario of global war, the rise of new digital empires could still endanger the freedom and prosperity of billions of people. The industrial empires of the 19th and 20th centuries exploited and repressed their colonies, and it would be foolhardy to expect new digital empires to behave much better
  • Moreover, if the world is divided into rival empires, humanity is unlikely to cooperate to overcome the ecological crisis or to regulate AI and other disruptive technologies such as bioengineering.
  • The division of the world into rival digital empires dovetails with the political vision of many leaders who believe that the world is a jungle, that the relative peace of recent decades has been an illusion, and that the only real choice is whether to play the part of predator or prey.
  • Given such a choice, most leaders would prefer to go down in history as predators and add their names to the grim list of conquerors that unfortunate pupils are condemned to memorise for their history exams.
  • These leaders should be reminded, however, that there is a new alpha predator in the jungle. If humanity doesn’t find a way to cooperate and protect our shared interests, we will all be easy prey to AI.
Javier E

Stanford's top disinformation research group collapses under pressure - The Washington ... - 0 views

  • The collapse of the five-year-old Observatory is the latest and largest of a series of setbacks to the community of researchers who try to detect propaganda and explain how false narratives are manufactured, gather momentum and become accepted by various groups
  • It follows Harvard’s dismissal of misinformation expert Joan Donovan, who in a December whistleblower complaint alleged the university’s close and lucrative ties with Facebook parent Meta led the university to clamp down on her work, which was highly critical of the social media giant’s practices.
  • Starbird said that while most academic studies of online manipulation look backward from much later, the Observatory’s “rapid analysis” helped people around the world understand what they were seeing on platforms as it happened.
  • ...9 more annotations...
  • Brown University professor Claire Wardle said the Observatory had created innovative methodology and trained the next generation of experts.
  • “Closing down a lab like this would always be a huge loss, but doing so now, during a year of global elections, makes absolutely no sense,” said Wardle, who previously led research at anti-misinformation nonprofit First Draft. “We need universities to use their resources and standing in the community to stand up to criticism and headlines.”
  • The study of misinformation has become increasingly controversial, and Stamos, DiResta and Starbird have been besieged by lawsuits, document requests and threats of physical harm. Leading the charge has been Rep. Jim Jordan (R-Ohio), whose House subcommittee alleges the Observatory improperly worked with federal officials and social media companies to violate the free-speech rights of conservatives.
  • In a joint statement, Stamos and DiResta said their work involved much more than elections, and that they had been unfairly maligned.
  • “The politically motivated attacks against our research on elections and vaccines have no merit, and the attempts by partisan House committee chairs to suppress First Amendment-protected research are a quintessential example of the weaponization of government,” they said.
  • Stamos founded the Observatory after publicizing that Russia had attempted to influence the 2016 election by sowing division on Facebook, causing a clash with the company’s top executives. Special counsel Robert S. Mueller III later cited the Facebook operation in indicting a Kremlin contractor. At Stanford, Stamos and his team deepened his study of influence operations from around the world, including one it traced to the Pentagon.
  • Stamos told associates he stepped back from leading the Observatory last year in part because the political pressure had taken a toll. Stamos had raised most of the money for the project, and the remaining faculty have not been able to replicate his success, as many philanthropic groups shift their focus to artificial intelligence and other, fresher topics.
  • In supporting the project further, the university would have risked alienating conservative donors, Silicon Valley figures, and members of Congress, who have threatened to stop all federal funding for disinformation research or cut back general support.
  • The Observatory’s non-election work has included developing curriculum for teaching college students about how to handle trust and safety issues on social media platforms and launching the first peer-reviewed journal dedicated to that field. It has also investigated rings publishing child sexual exploitation material online and flaws in the U.S. system for reporting it, helping to prepare platforms to handle an influx of computer-generated material.
Javier E

Opinion | Bidenomics: The Queen Bee Is Jennifer Harris - The New York Times - 0 views

  • I was thrilled when the Biden administration came in with a plan for big federal investments in the American industrial base, tariffs, support for labor unions and actions against monopolies. No one knew what to call it — Post-neoliberalism? Democratic capitalism? Neopopulism? — but for the first time in generations a U.S. administration was saying that people should control the market, not the other way around.
  • But if it was the right path, why didn’t more voters trust President Biden on the economy?
  • To understand who Ms. Harris is, you have to know who she used to be.
  • ...13 more annotations...
  • As a young State Department policy planner in the 2000s, she was a lonely voice in Washington raising the alarm about the rise of China. She pushed for tariffs and against trade agreements before it was cool, and was an author of a book called “War by Other Means” about how blind faith in free markets put the United States at a geopolitical disadvantage. For years, she felt like an oddball in Washington, where both parties were still in thrall to neoliberalism.
  • The Hewlett Foundation hired her as the head of an initiative that has given away $140 million so far to people who are devising a new economic philosophy. Then she served a stint in the White House. Today, she’s an intellectual leader of a growing, bipartisan consensus
  • She fell in love with economics and studied it at Wake Forest. After she joined a student delegation to a NATO summit in Prague in 2002, a faculty adviser on that trip offered her a job in Washington working at the National Intelligence Council. In those early years, she believed what everyone else in Washington believed about the economy — that governments ought not meddle with it.
  • if Mr. Trump correctly identified a problem — “China is eating our lunch” — he did not solve it, beyond putting tariffs on Chinese products. His tax cut for the rich hurt rather than helped matters.
  • It’s the Biden administration that came in with a plan to build an economy that was good for workers, not just shareholders, using some strategies Ms. Harris had been talking about for years.
  • The thinking behind it goes like this: Unquestioning belief in the free market created a globalism that funneled money to the 1 percent, which has used its wealth to amass political power at the expense of everyone else. It produced free trade agreements that sent too many U.S. factories to China and rescue plans after the 2008 financial crisis that bailed out Wall Street instead of Main Street.
  • It was her job to track China’s use of subsidies, industrial espionage and currency manipulation to fuel its rise as a manufacturing powerhouse. Ms. Harris argued that tariffs on China were a necessary defense. Nobody agreed. “I was kind of just banging my head against this wall,” she told me. “The wall was a foreign policy establishment that saw markets as sacrosanct.”
  • Barack Obama campaigned on a pledge to renegotiate NAFTA, but he struck up a new trade deal instead — the Trans-Pacific Partnership. Ms. Harris argued against it. “We didn’t have the foggiest idea” of what it would do to our economy, she told me. Nobody listened.
  • it sent Democrats back to the intellectual drawing board. Larry Kramer, then the president of the Hewlett Foundation, recruited her in 2018 to promote alternatives to ideas that had guided U.S. policy for decades. He hoped she could do for free-market skepticism what Milton Friedman and his allies had done for free-market fundamentalism, which became policy under the Reagan administration and eventually was embraced by both parties as truth.
  • She has since rejoined the Hewlett Foundation, where she funds people who are proposing new solutions to economic problems. One grantee, the conservative think tank American Compass, promotes the idea of a domestic development bank to fund infrastructure — an idea with bipartisan appeal.
  • But the work that Ms. Harris and others in the Biden administration have done is unfinished, and poorly understood. The terms “Bidenomics” and “Build Back Better” don’t seem to resonate
  • Ms. Harris acknowledges that these ideas haven’t yet taken hold in the broader electorate, and that high interest rates overshadow the progress that’s been made. It’s too early for voters to feel it, she told me: “The investments Biden has pushed through aren’t going to be felt in a month, a year, two years.”
  • she celebrates the fact that leaders across the political spectrum are embracing the idea that Americans need to “get back to building things in this country.” This election has no candidates blindly promoting the free market. The last one didn’t either. In the battle of ideas, she has already won.