History Readings: Group items tagged “disinformation”

Javier E

Which 'Succession' Character is James Murdoch? - The New York Times - 0 views

  • Mr. Murdoch, 47, resigned from the board of News Corp this summer with an elliptical statement, saying he was leaving “due to disagreements over certain editorial content published by the Company’s news outlets and certain other strategic decisions.”
  • in his briskly analytical way, over lunch and a subsequent phone call, he tried to explain why he “pulled the rip cord,” as he put it, after deepening estrangement with his father and brother and growing discomfort over the toxicity of Fox News and other conservative News Corp properties.
  • “I reached the conclusion that you can venerate a contest of ideas, if you will, and we all do and that’s important,” he told me. “But it shouldn’t be in a way that hides agendas.”
  • A contest of ideas shouldn’t be used to legitimize disinformation. And I think it’s often taken advantage of. And I think at great news organizations, the mission really should be to introduce fact to disperse doubt — not to sow doubt, to obscure fact, if you will.
  • In 2017, President Trump’s praise for white supremacists in Charlottesville, Va., as “very fine people” spurred James Murdoch to give $1 million to the Anti-Defamation League. In an email to friends obtained by The New York Times, Mr. Murdoch rebuked Mr. Trump and wrote: “I can’t even believe I have to write this: standing up to Nazis is essential; there are no good Nazis. Or Klansmen, or terrorists.”
  • In January, James and his wife, Kathryn, expressed “frustration” about News Corp’s peddling of climate change denialism in the face of apocalyptic Australian wildfires that incinerated 46 million acres. Fox nighttime anchors picked up a false story line about arson from The Australian, a Murdoch-owned newspaper in Oz.
  • So it wasn’t possible to change News Corp from the inside? “I think there’s only so much you can do if you’re not an executive, you’re on the board, you’re quite removed from a lot of the day-to-day decisions, obviously,” he said. “And if you’re uncomfortable with those decisions, you have to take stock of whether or not you want to be associated and can you change it or not. I decided that I could be much more effective outside.”
  • Friends say that James has been on a collision course with his family for 15 years. His evolution has been profoundly influenced by his wife, a former communications executive. He is, as one friend puts it, “living much more in his own skin, realizing his better angels and his better instincts.”
  • But when your last name is Murdoch and those billions sloshing around in your bank account come from a juggernaut co-opting governments across the English-speaking world and perpetuating climate-change denial, nativism and Sean Hannity, can you ever start fresh? As a beneficiary of his family’s trust, James is still reaping profits from Rupert Murdoch’s assets. Can he be the anti-venom?
  • Murdoch watchers across media say James is aligned with his sister Elisabeth and his half sister, Prudence, even as he is estranged from his father and brother.
  • When Rupert, 89, finally leaves the stage and his elder children take over, that could make three votes in the family trust against one.
  • Is there still time to de-Foxify Fox News — labeled a “hate-for-profit racket” by Elizabeth Warren — and other conservative News Corp outlets? Would Fox and its kin — downscale, feral creatures conjured by Rupert to help the bottom line — be the huge moneymakers they are if they went straight?
  • He is particularly excited about investing in start-ups created to combat fake news and the spread of disinformation, having found the proliferation of deep fakes “terrifying” because they “undermine our ability to discern what’s true and what’s not” and it “is only at the beginning as far as I can tell.”
  • He’s funding a research program to study digital manipulation of societies, hoping to curtail “the use of technology to promulgate totalitarianism’’ and undermine democracies.
  • “So everything from the use of mass surveillance, telephone networks, 5G, all that stuff, domestically in a country like China, for example,” he said.
  • I wonder if this is some sort of expiation, given all the disinformation that News Corp has spewed.
  • when I talked to Kathryn Murdoch over Zoom from their farm in Connecticut, where they live with their three teenagers, chickens and sheep, she was more direct about the issue of using money made from disinformation to combat disinformation.
  • “I think that what’s important about what we’re doing is that we’re in control of ourselves,” she said, adding: “I’m in control of what I do, he is in control of what he does. We should be held accountable for those things. It’s very hard to be held accountable for things that other people do or are in control of. And I think that’s what was untenable.”
  • Their foundation, Quadrivium, has supported voter participation, democracy reform and climate change projects. “I never thought that we would actually be at the point where we would have climate change effects and people would still be denying it,” Ms. Murdoch said.
  • Mr. Murdoch donated to Pete Buttigieg in the primary, and the couple has given $1.23 million to Joe Biden. So that’s who he’ll be voting for in November then? “Hell yes,” he said with a smile.
  • I noted to Ms. Murdoch that the effect of News Corp on the world is astounding when you think about it, from Brexit to Trump to the Supreme Court we may be heading toward.
  • After so much time in the executive suite, Mr. Murdoch seems genuinely excited to be in a smaller shop. He said last year, just for the hell of it, he thought of becoming an architect, going back to school.
  • “The outside world,” he continued, “it looks at you and says, ‘Well, these are the runners and riders. This person is up and down and this is success and this is failure.’ I think that that has to come much more from yourself. I’m incredibly grateful to be able to be just a totally free agent.”
  • I wondered what he made of Fox and Mr. Trump playing down the coronavirus, even after the president was hospitalized. “Look, you do worry about it and I think that we’re in the middle of a public health crisis,” Mr. Murdoch said. “Climate is also a public health crisis.” He continued: “Whatever political spin on that, if it gets in the way of delivering crucial public health information, I think is pretty bad.”
  • He added that Mr. Trump’s likening Covid-19 to the flu has been “his message from Day 1,” and is “craziness.” He thinks that “companies have a responsibility to their customers and their communities” and “that responsibility shouldn’t be compromised by political point scoring, that’s for sure.”
  • “I’m just concerned that the leadership that we have, to me, just seems characterized by callousness and a level of cruelty that I think is really dangerous and then it infects the population,” he said, referring to the Trump administration. “It’s not a coincidence that the number of hate crimes in this country are rising over the last three years for the first time in a long time.”
  • With Mr. Trump and Fox, who is the dog and who is the tail? “It looks to me, anyway, like it’s going to be a hard thing to understand because it probably goes back and forth,’’ he said. “I don’t think you’re going to get one pristine, consistent analysis of that phenomenon.”
  • Confirm or Deny
  • Most of your success has come from hard work, not luck. Isn’t that what they say — the harder you work, the luckier you get?
Javier E

Pro-China YouTube Network Used A.I. to Malign U.S., Report Finds - The New York Times - 0 views

  • The 10-minute post was one of more than 4,500 videos in an unusually large network of YouTube channels spreading pro-China and anti-U.S. narratives, according to a report this week from the Australian Strategic Policy Institute.
  • Some of the videos used artificially generated avatars or voice-overs, making the campaign the first influence operation known to the institute to pair A.I. voices with video essays.
  • The campaign’s goal, according to the report, was clear: to influence global opinion in favor of China and against the United States.
  • The videos promoted narratives that Chinese technology was superior to America’s, that the United States was doomed to economic collapse, and that China and Russia were responsible geopolitical players. Some of the clips fawned over Chinese companies like Huawei and denigrated American companies like Apple.
  • Content from at least 30 channels in the network drew nearly 120 million views and 730,000 subscribers since last year, along with occasional ads from Western companies.
  • Disinformation — such as the false claim that some Southeast Asian nations had adopted the Chinese yuan as their own currency — was common. The videos were often able to react quickly to current events.
  • The coordinated campaign might be “one of the most successful influence operations related to China ever witnessed on social media.”
  • Historically, its influence operations have focused on defending the Communist Party government and its policies on issues like the persecution of Uyghurs or the fate of Taiwan.
  • Efforts to push pro-China messaging have proliferated in recent years, but have featured largely low-quality content that attracted limited engagement or failed to sustain meaningful audiences.
  • “This campaign actually leverages artificial intelligence, which gives it the ability to create persuasive threat content at scale at a very limited cost compared to previous campaigns we’ve seen.”
  • YouTube said in a statement that its teams work around the clock to protect its community, adding that “we have invested heavily in robust systems to proactively detect coordinated influence operations.” The company said it welcomed research efforts and that it had shut down several of the channels mentioned in the report for violating the platform’s policies.
  • China began targeting the United States more directly amid the mass pro-democracy protests in Hong Kong in 2019 and continuing with the Covid-19 pandemic, echoing longstanding Russian efforts to discredit American leadership and influence at home and abroad.
  • Over the summer, researchers at Microsoft and other companies unearthed evidence of inauthentic accounts that China employed to falsely accuse the United States of using energy weapons to ignite the deadly wildfires in Hawaii in August.
  • Meta announced last month that it removed 4,789 Facebook accounts from China that were impersonating Americans to debate political issues, warning that the campaign appeared to be laying the groundwork for interference in the 2024 presidential elections.
  • It was the fifth network with ties to China that Meta had detected this year, the most of any other country.
  • The advent of artificial intelligence seems to have drawn special interest from Beijing. Ms. Keast of the Australian institute said that disinformation peddlers were increasingly using easily accessible video editing and A.I. programs to create large volumes of convincing content.
  • She said that the network of pro-China YouTube channels most likely fed English-language scripts into readily available online text-to-video software or other programs that require no technical expertise and can produce clips within minutes. Such programs often allow users to select A.I.-generated voice narration and customize the gender, accent and tone of voice.
  • In 39 of the videos, Ms. Keast found at least 10 artificially generated avatars advertised by a British A.I. company.
  • she also discovered what may be the first example in an influence operation of a digital avatar created by a Chinese company — a woman in a red dress named Yanni.
  • The scale of the pro-China network is probably even larger, according to the report. Similar channels appeared to target Indonesian and French people. Three separate channels posted videos about chip production that used similar thumbnail images and the same title translated into English, French and Spanish.
Javier E

Stanford's top disinformation research group collapses under pressure - The Washington ... - 0 views

  • The collapse of the five-year-old Observatory is the latest and largest of a series of setbacks to the community of researchers who try to detect propaganda and explain how false narratives are manufactured, gather momentum and become accepted by various groups.
  • It follows Harvard’s dismissal of misinformation expert Joan Donovan, who in a December whistleblower complaint alleged the university’s close and lucrative ties with Facebook parent Meta led the university to clamp down on her work, which was highly critical of the social media giant’s practices.
  • Starbird said that while most academic studies of online manipulation look backward from much later, the Observatory’s “rapid analysis” helped people around the world understand what they were seeing on platforms as it happened.
  • Brown University professor Claire Wardle said the Observatory had created innovative methodology and trained the next generation of experts.
  • “Closing down a lab like this would always be a huge loss, but doing so now, during a year of global elections, makes absolutely no sense,” said Wardle, who previously led research at anti-misinformation nonprofit First Draft. “We need universities to use their resources and standing in the community to stand up to criticism and headlines.”
  • The study of misinformation has become increasingly controversial, and Stamos, DiResta and Starbird have been besieged by lawsuits, document requests and threats of physical harm. Leading the charge has been Rep. Jim Jordan (R-Ohio), whose House subcommittee alleges the Observatory improperly worked with federal officials and social media companies to violate the free-speech rights of conservatives.
  • In a joint statement, Stamos and DiResta said their work involved much more than elections, and that they had been unfairly maligned.
  • “The politically motivated attacks against our research on elections and vaccines have no merit, and the attempts by partisan House committee chairs to suppress First Amendment-protected research are a quintessential example of the weaponization of government,” they said.
  • Stamos founded the Observatory after publicizing that Russia had attempted to influence the 2016 election by sowing division on Facebook, causing a clash with the company’s top executives. Special counsel Robert S. Mueller III later cited the Facebook operation in indicting a Kremlin contractor. At Stanford, Stamos and his team deepened their study of influence operations from around the world, including one it traced to the Pentagon.
  • Stamos told associates he stepped back from leading the Observatory last year in part because the political pressure had taken a toll. Stamos had raised most of the money for the project, and the remaining faculty have not been able to replicate his success, as many philanthropic groups shift their focus to artificial intelligence and other, fresher topics.
  • In supporting the project further, the university would have risked alienating conservative donors, Silicon Valley figures, and members of Congress, who have threatened to stop all federal funding for disinformation research or cut back general support.
  • The Observatory’s non-election work has included developing curriculum for teaching college students about how to handle trust and safety issues on social media platforms and launching the first peer-reviewed journal dedicated to that field. It has also investigated rings publishing child sexual exploitation material online and flaws in the U.S. system for reporting it, helping to prepare platforms to handle an influx of computer-generated material.
oliviaodon

American Elections Remain Unprotected - The Atlantic - 0 views

  • Two weeks before the inauguration of President Donald Trump, the U.S. intelligence community released a declassified version of its report on Russia’s interference in the 2016 election. It detailed the activities of a network of hackers who infiltrated voting systems and stole documents from the Democratic National Committee and Hillary Clinton’s presidential campaign. It also issued a stark warning: “Moscow will apply lessons learned from its Putin-ordered campaign aimed at the U.S. presidential election to future influence efforts worldwide, including against U.S. allies and their election processes.”
  • How disinformation will be deployed in 2018 and beyond is unclear. What is clear, however, is that the Kremlin believes its efforts to sow chaos in the American political process, which it has continued to hone in Europe, have worked and are poised for a return.
  • So far, Washington’s response to all this has been muted.
  • Russian and American officials have discussed how to stabilize the situation.
  • Fact-checking measures adopted by major tech and social-media companies are unlikely to stop Russia from seeking out new vulnerabilities in Western democracies.
  • While such an attack would mark a major escalation for Russia, it would not be unprecedented. Attacks on at least a dozen electric facilities in America—including one nuclear plant—have been traced back to a Russian-linked group. Russia is also thought to be behind an increasing number of cyberattacks against private corporations and government agencies in Ukraine. Similarly, Moscow waged a massive disinformation and propaganda campaign alongside its annexation of Crimea in 2014.
  • In recent years, Kremlin-linked cyber and disinformation campaigns of varying ambition have hit several European countries. In Germany, Russian state news spread a fake story about the rape of an underage girl by migrants during the height of Europe’s refugee crisis in 2016 that led to dozens of protests across the country. Similarly, Russian-backed broadcasters targeted Germany’s Russian emigrant community allegedly to bolster support for the country’s right-wing Alternative for Germany party in its bid to enter parliament for the first time. In France, Russian-linked hackers were believed to have stolen and leaked emails from French President Emmanuel Macron’s campaign. Moscow also recently launched a French version of RT, the public broadcaster formerly known as Russia Today. Spanish investigators found that both private and state-led Russian-based groups disseminated information on social media to try to sway public opinion ahead of Catalonia’s independence referendum in October.
  • “On the security side, there are some improvements that can happen without the [Trump] administration,” Sulmeyer, the former cyber official, said. “But without a greater counterweight or cost for Russia, none of this is going to stop.”
lmunch

Opinion: Post-Trump, the need for fact checking isn't going away - CNN - 0 views

  • This week, we ask the question: What comes next for America and disinformation? The past four years have seen an alarming erosion in the public trust in news, coupled with a spread of conspiracy theories, junk science and outright falsehoods by none other than the President of the United States. With a new president elected, how does Joe Biden help steer the country back toward facts, science and truth? SE Cupp talks to CNN Senior Political Analyst John Avlon about all this and more in our CNN Digital video discussion, but first Avlon tackles the future of fact checking in a CNN Opinion op-ed.
  • That's because the disinformation ecosystem is still proliferating via social media and the hyper-partisan fragmentation of society. Trump is a symptom rather than its root cause. There is every reason to hope that the presence of a president who does not lie all the time will not exacerbate our divides on a daily basis. But it would be dangerously naïve to believe that the underlying infrastructure of hate news and fake news will be solved with a new president.
  • Let's start by recognizing reality. Fact-checking Democrats this election cycle has offered a far less target-rich environment. This is not because either party has a monopoly on virtue or vice, but because Democrats' falsehoods during their presidential debates have been comparatively pedestrian -- likely to focus on competing claims about calculating the 10-year cost of Medicare for All, or who wrote what gun-control bill, or how many manufacturing jobs have been lost, or when a candidate really started supporting a raise in the minimum wage.
  • The sheer velocity of Donald Trump's false and misleading statements -- along with the proliferation of disinformation on social media -- has demanded significant fact-checking to defend liberal democracy.
  • Reforms are necessary. As I've written before on CNN Opinion, "Social media and tech platforms have a responsibility not to run knowingly false advertisements or promote intentionally false stories. They must disclose who is paying for digital political ads and crack down on the spread of disinformation. The Honest Ads Act would require the same disclosures that are required on television and radio right now. This is a no brainer. The profit motive from hate news and fake news might be reduced by moving digital advertising toward attention metrics to measure and monetize reader engagement and loyalty, incentivizing quality over clickbait. But perhaps the single biggest reform would come from social media companies requiring that accounts verify they are real people, rather than bots that bully people and manipulate public opinion."
  • It would be a huge mistake to assume that simply because the velocity of lies from the White House is likely to decrease dramatically that the need for fact checks has expired. Instead, it has only transformed to a broader arena than a presidential beat. It's the part of news that people need most now, the tip of the spear that fights for the idea that everyone is entitled to their own opinion but not their own facts. This is necessary for a substantive, civil and fact-based debate, which is a precondition for a functioning, self-governing society. And that's why fact checking will remain a core responsibility for journalists in the future.
mimiterranova

Black And Latino Voters Flooded With Disinformation In Election's Final Days : NPR - 0 views

  • Someone started posting memes full of false claims that seemed designed to discourage people from voting.
  • 'Democrats and Republicans are the same. There's no point in voting.' 'Obama didn't do anything for you during his term, why should you vote for a Democrat this time around?'
  • Black and Latino voters are being flooded with similar messages in the final days of the election, according to voting rights activists and experts who track disinformation
  • "We are now talking about this misinformation as a part of the same trajectory as a poll tax, as a literacy test," he said. "A sustained campaign targeted at Black Americans — and often brown Americans as well — to limit our political power, to limit our ability to shape the decisions that are made in this country."
  • Their strategy, Banks said, was "masquerading as black Americans, drawing people into conversation and ultimately turning that conversation toward bad information and often toward a sort of deep cynicism that made people sort of less inspired to participate."
  • Groups tracking disinformation have also noted attacks on Sen. Kamala Harris, the Democratic vice presidential nominee, such as false claims about her racial identity and her history as a prosecutor in California. "There's been rampant misinformation about her record, who she is, what she's about," the New Florida Majority's Bullard said.
Javier E

Facebook Is a Doomsday Machine - The Atlantic - 0 views

  • megadeath is not the only thing that makes the Doomsday Machine petrifying. The real terror is in its autonomy, this idea that it would be programmed to detect a series of environmental inputs, then to act, without human interference. “There is no chance of human intervention, control, and final decision,” wrote the military strategist Herman Kahn in his 1960 book, On Thermonuclear War, which laid out the hypothetical for a Doomsday Machine. The concept was to render nuclear war unwinnable, and therefore unthinkable.
  • No machine should be that powerful by itself—but no one person should be either.
  • so far, somewhat miraculously, we have figured out how to live with the bomb. Now we need to learn how to survive the social web.
  • There’s a notion that the social web was once useful, or at least that it could have been good, if only we had pulled a few levers: some moderation and fact-checking here, a bit of regulation there, perhaps a federal antitrust lawsuit. But that’s far too sunny and shortsighted a view.
  • Today’s social networks, Facebook chief among them, were built to encourage the things that make them so harmful. It is in their very architecture.
  • I realized only recently that I’ve been thinking far too narrowly about the problem.
  • Megascale is nearly the existential threat that megadeath is. No single machine should be able to control the fate of the world’s population—and that’s what both the Doomsday Machine and Facebook are built to do.
  • Facebook does not exist to seek truth and report it, or to improve civic health, or to hold the powerful to account, or to represent the interests of its users, though these phenomena may be occasional by-products of its existence.
  • The company’s early mission was to “give people the power to share and make the world more open and connected.” Instead, it took the concept of “community” and sapped it of all moral meaning.
  • Facebook—along with Google and YouTube—is perfect for amplifying and spreading disinformation at lightning speed to global audiences.
  • Facebook decided that it needed not just a very large user base, but a tremendous one, unprecedented in size. That decision set Facebook on a path to escape velocity, to a tipping point where it can harm society just by existing.
  • No one, not even Mark Zuckerberg, can control the product he made. I’ve come to realize that Facebook is not a media company. It’s a Doomsday Machine.
  • Scale and engagement are valuable to Facebook because they’re valuable to advertisers. These incentives lead to design choices such as reaction buttons that encourage users to engage easily and often, which in turn encourage users to share ideas that will provoke a strong response.
  • Every time you click a reaction button on Facebook, an algorithm records it, and sharpens its portrait of who you are.
  • The hyper-targeting of users, made possible by reams of their personal data, creates the perfect environment for manipulation—by advertisers, by political campaigns, by emissaries of disinformation, and of course by Facebook itself, which ultimately controls what you see and what you don’t see on the site.
  • there aren’t enough moderators speaking enough languages, working enough hours, to stop the biblical flood of shit that Facebook unleashes on the world, because 10 times out of 10, the algorithm is faster and more powerful than a person.
  • At megascale, this algorithmically warped personalized informational environment is extraordinarily difficult to moderate in a meaningful way, and extraordinarily dangerous as a result.
  • These dangers are not theoretical, and they’re exacerbated by megascale, which makes the platform a tantalizing place to experiment on people.
  • Even after U.S. intelligence agencies identified Facebook as a main battleground for information warfare and foreign interference in the 2016 election, the company has failed to stop the spread of extremism, hate speech, propaganda, disinformation, and conspiracy theories on its site.
  • it wasn’t until October of this year, for instance, that Facebook announced it would remove groups, pages, and Instagram accounts devoted to QAnon, as well as any posts denying the Holocaust.
  • In the days after the 2020 presidential election, Zuckerberg authorized a tweak to the Facebook algorithm so that high-accuracy news sources such as NPR would receive preferential visibility in people’s feeds, and hyper-partisan pages such as Breitbart News’s and Occupy Democrats’ would be buried, according to The New York Times, offering proof that Facebook could, if it wanted to, turn a dial to reduce disinformation—and offering a reminder that Facebook has the power to flip a switch and change what billions of people see online.
  • reducing the prevalence of content that Facebook calls “bad for the world” also reduces people’s engagement with the site. In its experiments with human intervention, the Times reported, Facebook calibrated the dial so that just enough harmful content stayed in users’ news feeds to keep them coming back for more.
  • Facebook’s stated mission—to make the world more open and connected—has always seemed, to me, phony at best, and imperialist at worst.
  • Facebook is a borderless nation-state, with a population of users nearly as big as China and India combined, and it is governed largely by secret algorithms.
  • How much real-world violence would never have happened if Facebook didn’t exist? One of the people I’ve asked is Joshua Geltzer, a former White House counterterrorism official who is now teaching at Georgetown Law. In counterterrorism circles, he told me, people are fond of pointing out how good the United States has been at keeping terrorists out since 9/11. That’s wrong, he said. In fact, “terrorists are entering every single day, every single hour, every single minute” through Facebook.
  • Evidence of real-world violence can be easily traced back to both Facebook and 8kun. But 8kun doesn’t manipulate its users or the informational environment they’re in. Both sites are harmful. But Facebook might actually be worse for humanity.
  • In previous eras, U.S. officials could at least study, say, Nazi propaganda during World War II, and fully grasp what the Nazis wanted people to believe. Today, “it’s not a filter bubble; it’s a filter shroud,” Geltzer said. “I don’t even know what others with personalized experiences are seeing.”
  • Mary McCord, the legal director at the Institute for Constitutional Advocacy and Protection at Georgetown Law, told me that she thinks 8kun may be more blatant in terms of promoting violence but that Facebook is “in some ways way worse” because of its reach. “There’s no barrier to entry with Facebook,” she said. “In every situation of extremist violence we’ve looked into, we’ve found Facebook postings. And that reaches tons of people. The broad reach is what brings people into the fold and normalizes extremism and makes it mainstream.” In other words, it’s the megascale that makes Facebook so dangerous.
  • Facebook’s megascale gives Zuckerberg an unprecedented degree of influence over the global population. If he isn’t the most powerful person on the planet, he’s very near the top.
  • “The thing he oversees has such an effect on cognition and people’s beliefs, which can change what they do with their nuclear weapons or their dollars.”
  • Facebook’s new oversight board, formed in response to backlash against the platform and tasked with making decisions concerning moderation and free expression, is an extension of that power. “The first 10 decisions they make will have more effect on speech in the country and the world than the next 10 decisions rendered by the U.S. Supreme Court,” Geltzer said. “That’s power. That’s real power.”
  • Facebook is also a business, and a place where people spend time with one another. Put it this way: If you owned a store and someone walked in and started shouting Nazi propaganda or recruiting terrorists near the cash register, would you, as the shop owner, tell all of the other customers you couldn’t possibly intervene?
  • In 2004, Zuckerberg said Facebook ran advertisements only to cover server costs. But over the next two years Facebook completely upended and redefined the entire advertising industry. The pre-social web destroyed classified ads, but the one-two punch of Facebook and Google decimated local news and most of the magazine industry—publications fought in earnest for digital pennies, which had replaced print dollars, and social giants scooped them all up anyway.
  • localized approach is part of what made megascale possible. Early constraints around membership—the requirement at first that users attended Harvard, and then that they attended any Ivy League school, and then that they had an email address ending in .edu—offered a sense of cohesiveness and community. It made people feel more comfortable sharing more of themselves. And more sharing among clearly defined demographics was good for business.
  • in 2007, Zuckerberg said something in an interview with the Los Angeles Times that now takes on a much darker meaning: “The things that are most powerful aren’t the things that people would have done otherwise if they didn’t do them on Facebook. Instead, it’s the things that would never have happened otherwise.”
  • We’re still in the infancy of this century’s triple digital revolution of the internet, smartphones, and the social web, and we find ourselves in a dangerous and unstable informational environment, powerless to resist forces of manipulation and exploitation that we know are exerted on us but remain mostly invisible
  • The Doomsday Machine offers a lesson: We should not accept this current arrangement. No single machine should be able to control so many people.
  • we need a new philosophical and moral framework for living with the social web—a new Enlightenment for the information age, and one that will carry us back to shared reality and empiricism.
  • In other words, if the Dunbar number for running a company or maintaining a cohesive social life is 150 people, the magic number for a functional social platform is maybe 20,000 people. Facebook now has 2.7 billion monthly users.
  • we need to adopt a broader view of what it will take to fix the brokenness of the social web. That will require challenging the logic of today’s platforms—and first and foremost challenging the very concept of megascale as a way that humans gather.
  • The web’s existing logic tells us that social platforms are free in exchange for a feast of user data; that major networks are necessarily global and centralized; that moderators make the rules. None of that need be the case.
  • We need people who dismantle these notions by building alternatives. And we need enough people to care about these other alternatives to break the spell of venture capital and mass attention that fuels megascale and creates fatalism about the web as it is now.
  • We must also find ways to repair the aspects of our society and culture that the social web has badly damaged. This will require intellectual independence, respectful debate, and the same rebellious streak that helped establish Enlightenment values centuries ago.
  • Right now, too many people are allowing algorithms and tech giants to manipulate them, and reality is slipping from our grasp as a result. This century’s Doomsday Machine is here, and humming along.
Javier E

'Fiction is outperforming reality': how YouTube's algorithm distorts truth | Technology... - 0 views

  • There are 1.5 billion YouTube users in the world, which is more than the number of households that own televisions. What they watch is shaped by this algorithm, which skims and ranks billions of videos to identify 20 “up next” clips that are both relevant to a previous video and most likely, statistically speaking, to keep a person hooked on their screen.
  • Company insiders tell me the algorithm is the single most important engine of YouTube’s growth
  • YouTube engineers describe it as one of the “largest scale and most sophisticated industrial recommendation systems in existence”
  • Lately, it has also become one of the most controversial. The algorithm has been found to be promoting conspiracy theories about the Las Vegas mass shooting and incentivising, through recommendations, a thriving subculture that targets children with disturbing content
  • One YouTube creator who was banned from making advertising revenues from his strange videos – which featured his children receiving flu shots, removing earwax, and crying over dead pets – told a reporter he had only been responding to the demands of Google’s algorithm. “That’s what got us out there and popular,” he said. “We learned to fuel it and do whatever it took to please the algorithm.”
  • academics have speculated that YouTube’s algorithms may have been instrumental in fuelling disinformation during the 2016 presidential election. “YouTube is the most overlooked story of 2016,” Zeynep Tufekci, a widely respected sociologist and technology critic, tweeted back in October. “Its search and recommender algorithms are misinformation engines.”
  • Those are not easy questions to answer. Like all big tech companies, YouTube does not allow us to see the algorithms that shape our lives. They are secret formulas, proprietary software, and only select engineers are entrusted to work on the algorithm
  • Guillaume Chaslot, a 36-year-old French computer programmer with a PhD in artificial intelligence, was one of those engineers.
  • The experience led him to conclude that the priorities YouTube gives its algorithms are dangerously skewed.
  • Chaslot said none of his proposed fixes were taken up by his managers. “There are many ways YouTube can change its algorithms to suppress fake news and improve the quality and diversity of videos people see,” he says. “I tried to change YouTube from the inside but it didn’t work.”
  • Chaslot explains that the algorithm never stays the same. It is constantly changing the weight it gives to different signals: the viewing patterns of a user, for example, or the length of time a video is watched before someone clicks away.
  • The engineers he worked with were responsible for continuously experimenting with new formulas that would increase advertising revenues by extending the amount of time people watched videos. “Watch time was the priority,” he recalls. “Everything else was considered a distraction.”
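A watch-time-first objective like the one Chaslot describes can be caricatured in a few lines. This is an illustrative sketch only — the candidates, signal names, and numbers are invented for the example, not YouTube's actual features or weights:

```python
# Hypothetical candidate videos with invented engagement signals.
candidates = [
    {"title": "balanced explainer", "predicted_watch_min": 4.0, "click_rate": 0.05},
    {"title": "sensational claim", "predicted_watch_min": 9.5, "click_rate": 0.12},
]

def watch_time_score(video):
    """Expected minutes watched: click probability times watch length.
    Nothing about accuracy or balance enters the objective."""
    return video["click_rate"] * video["predicted_watch_min"]

# Ranking purely on watch time puts the sensational video "up next" first.
ranked = sorted(candidates, key=watch_time_score, reverse=True)
```

Under this toy objective, "everything else was considered a distraction" falls out naturally: any candidate that holds attention longer outranks one that merely informs.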
  • Chaslot was fired by Google in 2013, ostensibly over performance issues. He insists he was let go after agitating for change within the company, using his personal time to team up with like-minded engineers to propose changes that could diversify the content people see.
  • He was especially worried about the distortions that might result from a simplistic focus on showing people videos they found irresistible, creating filter bubbles, for example, that only show people content that reinforces their existing view of the world.
  • “YouTube is something that looks like reality, but it is distorted to make you spend more time online,” he tells me when we meet in Berkeley, California. “The recommendation algorithm is not optimising for what is truthful, or balanced, or healthy for democracy.”
  • YouTube told me that its recommendation system had evolved since Chaslot worked at the company and now “goes beyond optimising for watchtime”.
  • It did not say why Google, which acquired YouTube in 2006, waited over a decade to make those changes
  • Chaslot believes such changes are mostly cosmetic, and have failed to fundamentally alter some disturbing biases that have evolved in the algorithm
  • It finds videos through a word search, selecting a “seed” video to begin with, and recording several layers of videos that YouTube recommends in the “up next” column. It does so with no viewing history, ensuring the videos being detected are YouTube’s generic recommendations, rather than videos personalised to a user. And it repeats the process thousands of times, accumulating layers of data about YouTube recommendations to build up a picture of the algorithm’s preferences.
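The method described here amounts to a breadth-first crawl over layers of "up next" suggestions. A minimal sketch of that idea in Python, using a toy recommendation graph as a stand-in for YouTube's real (and inaccessible) recommendations — the `get_up_next` function and the graph are illustrative assumptions, not Chaslot's actual code:

```python
from collections import deque

# Toy recommendation graph standing in for YouTube's "up next" column.
# In the real program, each lookup was a fresh request with no viewing history.
TOY_UP_NEXT = {
    "seed": ["a", "b"],
    "a": ["c", "b"],
    "b": ["c", "d"],
    "c": ["d"],
    "d": [],
}

def get_up_next(video_id):
    """Placeholder for fetching the generic, logged-out recommendations."""
    return TOY_UP_NEXT.get(video_id, [])

def crawl_recommendations(seed, depth):
    """Record every video reached within `depth` layers of "up next"
    suggestions, counting how often each one is recommended."""
    counts = {}
    frontier = deque([(seed, 0)])
    while frontier:
        video, layer = frontier.popleft()
        if layer == depth:
            continue
        for rec in get_up_next(video):
            counts[rec] = counts.get(rec, 0) + 1
            frontier.append((rec, layer + 1))
    return counts

counts = crawl_recommendations("seed", depth=3)
# Videos the algorithm recommends most often surface at the top.
top = sorted(counts, key=counts.get, reverse=True)
```

Repeating such a crawl thousands of times from different seed searches is what builds up the picture of the algorithm's aggregate preferences.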
  • Each study finds something different, but the research suggests YouTube systematically amplifies videos that are divisive, sensational and conspiratorial.
  • When his program found a seed video by searching the query “who is Michelle Obama?” and then followed the chain of “up next” suggestions, for example, most of the recommended videos said she “is a man”
  • He believes one of the most shocking examples was detected by his program in the run-up to the 2016 presidential election. As he observed in a short, largely unnoticed blogpost published after Donald Trump was elected, the impact of YouTube’s recommendation algorithm was not neutral during the presidential race: it was pushing videos that were, in the main, helpful to Trump and damaging to Hillary Clinton.
  • “It was strange,” he explains to me. “Wherever you started, whether it was from a Trump search or a Clinton search, the recommendation algorithm was much more likely to push you in a pro-Trump direction.”
  • Trump won the electoral college as a result of 80,000 votes spread across three swing states. There were more than 150 million YouTube users in the US. The videos contained in Chaslot’s database of YouTube-recommended election videos were watched, in total, more than 3bn times before the vote in November 2016.
  • “Algorithms that shape the content we see can have a lot of impact, particularly on people who have not made up their mind,”
  • “Gentle, implicit, quiet nudging can over time edge us toward choices we might not have otherwise made.”
  • But what was most compelling was how often Chaslot’s software detected anti-Clinton conspiracy videos appearing “up next” beside other videos.
  • I spent weeks watching, sorting and categorising the trove of videos with Erin McCormick, an investigative reporter and expert in database analysis. From the start, we were stunned by how many extreme and conspiratorial videos had been recommended, and the fact that almost all of them appeared to be directed against Clinton.
  • “This research captured the apparent direction of YouTube’s political ecosystem,” he says. “That has not been done before.”
  • There were too many videos in the database for us to watch them all, so we focused on 1,000 of the top-recommended videos. We sifted through them one by one to determine whether the content was likely to have benefited Trump or Clinton. Just over a third of the videos were either unrelated to the election or contained content that was broadly neutral or even-handed. Of the remaining 643 videos, 551 favoured Trump, while only 92 favoured the Clinton campaign.
  • The sample we had looked at suggested Chaslot’s conclusion was correct: YouTube was six times more likely to recommend videos that aided Trump than his adversary.
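The "six times" figure follows directly from the sample counts reported above (551 pro-Trump versus 92 pro-Clinton among 643 partisan videos); a quick arithmetic check:

```python
# Counts from the 1,000-video sample described above.
partisan_total = 643
favoured_trump = 551
favoured_clinton = 92
assert favoured_trump + favoured_clinton == partisan_total

ratio = favoured_trump / favoured_clinton  # ~5.99, i.e. roughly six to one
```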
  • The spokesperson added: “Our search and recommendation systems reflect what people search for, the number of videos available, and the videos people choose to watch on YouTube. That’s not a bias towards any particular candidate; that is a reflection of viewer interest.”
  • YouTube seemed to be saying that its algorithm was a neutral mirror of the desires of the people who use it – if we don’t like what it does, we have ourselves to blame. How does YouTube interpret “viewer interest” – and aren’t “the videos people choose to watch” influenced by what the company shows them?
  • Offered the choice, we may instinctively click on a video of a dead man in a Japanese forest, or a fake news clip claiming Bill Clinton raped a 13-year-old. But are those in-the-moment impulses really a reflection of the content we want to be fed?
  • YouTube’s recommendation system has probably figured out that edgy and hateful content is engaging. “This is a bit like an autopilot cafeteria in a school that has figured out children have sweet teeth, and also like fatty and salty foods,” she says. “So you make a line offering such food, automatically loading the next plate as soon as the bag of chips or candy in front of the young person has been consumed.”
  • Once that gets normalised, however, what is fractionally more edgy or bizarre becomes, Tufekci says, novel and interesting. “So the food gets higher and higher in sugar, fat and salt – natural human cravings – while the videos recommended and auto-played by YouTube get more and more bizarre or hateful.”
  • “This is important research because it seems to be the first systematic look into how YouTube may have been manipulated,” he says, raising the possibility that the algorithm was gamed as part of the same propaganda campaigns that flourished on Twitter and Facebook.
  • “We believe that the activity we found was limited because of various safeguards that we had in place in advance of the 2016 election, and the fact that Google’s products didn’t lend themselves to the kind of micro-targeting or viral dissemination that these actors seemed to prefer.”
  • Senator Mark Warner, the ranking Democrat on the intelligence committee, later wrote to the company about the algorithm, which he said seemed “particularly susceptible to foreign influence”. The senator demanded to know what the company was specifically doing to prevent a “malign incursion” of YouTube’s recommendation system. Walker, in his written reply, offered few specifics
  • Tristan Harris, a former Google insider turned tech whistleblower, likes to describe Facebook as a “living, breathing crime scene for what happened in the 2016 election” that federal investigators have no access to. The same might be said of YouTube. About half the videos Chaslot’s program detected being recommended during the election have now vanished from YouTube – many of them taken down by their creators. Chaslot has always thought this suspicious. These were videos with titles such as “Must Watch!! Hillary Clinton tried to ban this video”, watched millions of times before they disappeared. “Why would someone take down a video that has been viewed millions of times?” he asks
  • I shared the entire database of 8,000 YouTube-recommended videos with John Kelly, the chief executive of the commercial analytics firm Graphika, which has been tracking political disinformation campaigns. He ran the list against his own database of Twitter accounts active during the election, and concluded many of the videos appeared to have been pushed by networks of Twitter sock puppets and bots controlled by pro-Trump digital consultants with “a presumably unsolicited assist” from Russia.
  • “I don’t have smoking-gun proof of who logged in to control those accounts,” he says. “But judging from the history of what we’ve seen those accounts doing before, and the characteristics of how they tweet and interconnect, they are assembled and controlled by someone – someone whose job was to elect Trump.”
  • After the Senate’s correspondence with Google over possible Russian interference with YouTube’s recommendation algorithm was made public last week, YouTube sent me a new statement. It emphasised changes it made in 2017 to discourage the recommendation system from promoting some types of problematic content. “We appreciate the Guardian’s work to shine a spotlight on this challenging issue,” it added. “We know there is more to do here and we’re looking forward to making more announcements in the months ahead.”
  • In the months leading up to the election, the Next News Network turned into a factory of anti-Clinton news and opinion, producing dozens of videos a day and reaching an audience comparable to that of MSNBC’s YouTube channel. Chaslot’s research indicated Franchi’s success could largely be credited to YouTube’s algorithms, which consistently amplified his videos to be played “up next”. YouTube had sharply dismissed Chaslot’s research.
  • I contacted Franchi to see who was right. He sent me screen grabs of the private data given to people who upload YouTube videos, including a breakdown of how their audiences found their clips. The largest source of traffic to the Bill Clinton rape video, which was viewed 2.4m times in the month leading up to the election, was YouTube recommendations.
  • The same was true of all but one of the videos Franchi sent me data for. A typical example was a Next News Network video entitled “WHOA! HILLARY THINKS CAMERA’S OFF… SENDS SHOCK MESSAGE TO TRUMP” in which Franchi, pointing to a tiny movement of Clinton’s lips during a TV debate, claims she says “fuck you” to her presidential rival. The data Franchi shared revealed that in the month leading up to the election, 73% of the traffic to the video – amounting to 1.2m of its views – was due to YouTube recommendations. External traffic accounted for only 3% of the views.
  • many of the other creators of anti-Clinton videos I spoke to were amateur sleuths or part-time conspiracy theorists. Typically, they might receive a few hundred views on their videos, so they were shocked when their anti-Clinton videos started to receive millions of views, as if they were being pushed by an invisible force.
  • In every case, the largest source of traffic – the invisible force – came from the clips appearing in the “up next” column. William Ramsey, an occult investigator from southern California who made “Irrefutable Proof: Hillary Clinton Has a Seizure Disorder!”, shared screen grabs that showed the recommendation algorithm pushed his video even after YouTube had emailed him to say it violated its guidelines. Ramsey’s data showed the video was watched 2.4m times by US-based users before election day. “For a nobody like me, that’s a lot,” he says. “Enough to sway the election, right?”
  • Daniel Alexander Cannon, a conspiracy theorist from South Carolina, tells me: “Every video I put out about the Clintons, YouTube would push it through the roof.” His best-performing clip was a video titled “Hillary and Bill Clinton ‘The 10 Photos You Must See’”, essentially a slideshow of appalling (and seemingly doctored) images of the Clintons with voiceover in which Cannon speculates on their health. It has been seen 3.7m times on YouTube, and 2.9m of those views, Cannon said, came from “up next” recommendations.
  • his research also does something more important: revealing how thoroughly our lives are now mediated by artificial intelligence.
  • Less than a generation ago, the way voters viewed their politicians was largely shaped by tens of thousands of newspaper editors, journalists and TV executives. Today, the invisible codes behind the big technology platforms have become the new kingmakers.
  • They pluck from obscurity people like Dave Todeschini, a retired IBM engineer who, “let off steam” during the election by recording himself opining on Clinton’s supposed involvement in paedophilia, child sacrifice and cannibalism. “It was crazy, it was nuts,” he said of the avalanche of traffic to his YouTube channel, which by election day had more than 2m views
anonymous

Election Lawsuits Are A New Tactic To Fight Disinformation : NPR - 0 views

  • The victims of some of the most pernicious conspiracy theories of 2020 are fighting back in court. Voting equipment companies have filed a series of massive defamation lawsuits against allies of former President Trump in an effort to exert accountability over falsehoods about the companies' role in the election and repair damage to their brands.
  • On Friday, Fox News became the latest target and was served with a $1.6 billion defamation lawsuit by Denver-based Dominion Voting Systems after several of the network's hosts entertained on air conspiracy theories pushed by former President Trump that the company had rigged the results of the November election against him in key states.
  • Dominion has also sued Trump associates Rudy Giuliani, Sidney Powell and Mike Lindell for billions in damages. The company is one of the top providers of voting equipment to states and counties around the country and typically relies on procurement decisions made by elected officials from both political parties.
  • Earlier this month, Republican commissioners in one Ohio county sought to block the county election board's purchase of new Dominion equipment. A Dominion employee who was forced into hiding due to death threats has sued Giuliani, Powell and the Trump campaign. Another voting systems company, Smartmatic, has also filed a defamation lawsuit against Fox News.
  • Some see these legal fights as another way to take on viral misinformation, one that's already starting to show some results, although some journalists are uneasy that a news organization could be targeted.
  • Skarnulis hopes that in addition to helping Coomer clear his name and return to a normal life, the suits will also serve as a warning.
  • The number of defamation lawsuits and the large damage claims associated with them is novel, said journalism and public policy professor Bill Adair, head of the journalism program at Duke University.
  • He does worry that using defamation suits to combat untruths spread by media outlets could become a weapon against journalists just doing their jobs. "As a journalist, I'm a little bit nervous. The idea of using defamation lawsuits makes us a little bit concerned." But even with that discomfort, Adair has come to believe the lawsuits do have a role to play.
  • The defamation suits already do appear to be having an effect. An anchor for Newsmax walked out on a live interview with My Pillow CEO Lindell when he started making unsubstantiated claims about Dominion voting machines. Fox News, the Fox Business Network and Newsmax also aired segments that contradicted the disinformation their own hosts had amplified.
  • Last month, Fox Business also cancelled a show hosted by Trump ally Lou Dobbs, who had amplified the conspiracy theories and interviewed Powell and Giuliani about them.
  • One challenge for the plaintiffs is that defamation lawsuits are difficult to win. They need to show the person they're suing knew a statement was false when she made it, or had serious doubts about its truthfulness.
  • Media organizations have a First Amendment right to report the news, and that includes repeating what important people say, even if those statements are false, said George Freeman, the former in-house counsel for The New York Times, who now heads the Media Law Resource Center.
  • Pro-Trump outlets are likely to claim that constitutional protection for their defense but Freeman believes they may have crossed a legal line in their presentation of election fraud claims and in some instances applauding obvious falsehoods.
  • Still, Freeman said he thinks the strongest defamation cases aren't against the media companies, but against one of the people they gave a lot of airtime to: Rudy Giuliani.
  • In a January call announcing the lawsuit against Giuliani, Dominion's attorney, Tom Clare, said that the court can consider circumstantial evidence too. The complaint includes a detailed timeline that shows Giuliani continued to make his claims in the face of public assurances from election security experts, hand recounts, and numerous court rulings rejecting fraud cases.
  • While the current lawsuits could have an impact in this instance, experts on misinformation say there are several reasons why defamation cases aren't a central tool in the fight against falsehoods.
  • Many conspiracy theories don't target a specific person or company, so there's no one to file a lawsuit against. Legal action is also expensive. Coomer's legal team expects his bills will exceed $2 million. And when a victim does sue, a case can take years.
  • The parents of children killed in the Sandy Hook shooting have filed multiple defamation lawsuits against Alex Jones of the conspiracy site, InfoWars. But after numerous challenges and delays, the cases are all still in the pre-trial phase. With Dominion and Smartmatic vowing not to settle before they get their day in court, this approach to fighting election misinformation may still be grinding forward even as the country enters the next presidential election. But for Adair and others, any effort to discourage future misinformation campaigns is worth pursuing.
Javier E

Fight the Future - The Triad - 0 views

  • In large part because our major tech platforms reduced the coefficient of friction (μ for my mechanics nerd posse) to basically zero. QAnons crept out of the dark corners of the web—obscure boards like 4chan and 8kun—and got into the mainstream platforms YouTube, Facebook, Instagram, and Twitter.
  • Why did QAnon spread like wildfire in America?
  • These platforms not only made it easy for conspiracy nuts to share their crazy, but they used algorithms that actually boosted the spread of crazy, acting as a force multiplier.
  • So it sounds like a simple fix: Impose more friction at the major platform level and you’ll clean up the public square.
  • But it’s not actually that simple because friction runs counter to the very idea of the internet.
  • The fundamental precept of the internet is that it reduces marginal costs to zero. And this fact is why the design paradigm of the internet is to continually reduce friction experienced by users to zero, too. Because if the second unit of everything is free, then the internet has a vested interest in pushing that unit in front of your eyeballs as smoothly as possible.
  • It's not that the internet is “broken”; rather, it's been functioning exactly as it was designed to:
  • Perhaps more than any other job in the world, you do not want the President of the United States to live in a frictionless state of posting. The Presidency is not meant to be a frictionless position, and the United States government is not a frictionless entity, much to the chagrin of many who have tried to change it. Prior to this administration, decisions were closely scrutinized for, at the very least, legality, along with the impact on diplomacy, general norms, and basic grammar. This kind of legal scrutiny and due diligence is also a kind of friction--one that we now see has a lot of benefits. 
  • The deep lesson here isn’t about Donald Trump. It’s about the collision between the digital world and the real world.
  • In the real world, marginal costs are not zero. And so friction is a desirable element in helping to get to the optimal state. You want people to pause before making decisions.
  • described friction this summer as: “anything that inhibits user action within a digital interface, particularly anything that requires an additional click or screen.” For much of my time in the technology sector, friction was almost always seen as the enemy, a force to be vanquished. A “frictionless” experience was generally held up as the ideal state, the optimal product state.
  • Trump was riding the ultimate frictionless optimized engagement Twitter experience: he rode it all the way to the presidency, and then he crashed the presidency into the ground.
  • From a metrics and user point of view, the abstract notion of the President himself tweeting was exactly what Twitter wanted in its original platonic ideal. Twitter has been built to incentivize someone like Trump to engage and post
  • The other day we talked a little bit about how fighting disinformation, extremism, and online cults is like fighting a virus: There is no “cure.” Instead, what you have to do is create enough friction that the rate of spread becomes slow.
  • Our challenge is that when human and digital design comes into conflict, the artificial constraints we impose should be on the digital world to become more in service to us. Instead, we’ve let the digital world do as it will and tried to reconcile ourselves to the havoc it wreaks.
  • And one of the lessons of the last four years is that when you prize the digital design imperatives—lack of friction—over the human design imperatives—a need for friction—then bad things can happen.
  • We have an ongoing conflict between the design precepts of humans and the design precepts of computers.
  • Anyone who works with computers learns to fear their capacity to forget. Like so many things with computers, memory is strictly binary. There is either perfect recall or total oblivion, with nothing in between. It doesn't matter how important or trivial the information is. The computer can forget anything in an instant. If it remembers, it remembers for keeps.
  • This doesn't map well onto human experience of memory, which is fuzzy. We don't remember anything with perfect fidelity, but we're also not at risk of waking up having forgotten our own name. Memories tend to fade with time, and we remember only the more salient events.
  • And because we live in a time when storage grows ever cheaper, we learn to save everything, log everything, and keep it forever. You never know what will come in useful. Deleting is dangerous.
  • Our lives have become split between two worlds with two very different norms around memory.
  • [A] lot of what's wrong with the Internet has to do with memory. The Internet somehow contrives to remember too much and too little at the same time, and it maps poorly on our concepts of how memory should work.
  • The digital world is designed to never forget anything. It has perfect memory. Forever. So that one time you made a crude joke 20 years ago? It can now ruin your life.
  • Memory in the carbon-based world is imperfect. People forget things. That can be annoying if you’re looking for your keys but helpful if you’re trying to broker peace between two cultures. Or simply become a better person than you were 20 years ago.
  • The digital and carbon-based worlds have different design parameters. Marginal cost is one of them. Memory is another.
  • 2. Forget Me Now
  • 1. Fix Tech, Fix America
Javier E

'This is f---ing crazy': Florida Latinos swamped by wild conspiracy theories - POLITICO - 0 views

  • The sheer volume of conspiracy theories — including QAnon — and deceptive claims are already playing a role in stunting Biden’s growth with Latino voters, who make up about 17 percent of the state’s electorate.
  • “It’s difficult to measure the effect exactly, but the polling sort of shows it and in focus groups it shows up, with people deeply questioning the Democrats, and referring to the ‘deep state’ in particular — that there’s a real conspiracy against the president from the inside,” he said. “There’s a strain in our political culture that’s accustomed to conspiracy theories, a culture that’s accustomed to coup d'etats.”
  • Florida’s Latino community is a diverse mix of people with roots across Latin America. There’s a large population of Republican-leaning Cubans in Miami-Dade and a growing number of Democratic-leaning voters with Puerto Rican, Colombian, Nicaraguan, Dominican and Venezuelan heritage in Miami and elsewhere in the state. Many register as independents but typically vote Democratic
  • independents — especially recently arrived Spanish-speakers — are seen as more up for grabs because they’re less tied to U.S. political parties and are more likely than longtime voters to be influenced by mainstream news outlets and social media.
  • The GOP under Trump mastered social media, especially Facebook use, in 2016 and even Democrats acknowledge that Republicans have made inroads in the aggressive use of WhatsApp encrypted messaging.
  • Valencia bills her Spanish-language YouTube page, which has more than 378,000 followers, as a channel for geopolitical analysis. But it often resembles English-language right-wing news sources, such as Infowars, sharing conspiracy theories and strong anti-globalization messages.
  • In South Florida, veteran Latino Democratic strategist Evelyn Pérez-Verdia noticed this summer that the WhatsApp groups dedicated to updates on the pandemic and news for the Colombian and Venezuelan communities became intermittently interspersed with conspiracy theories from videos of far-right commentators or news clips from new Spanish-language sites, like Noticias 24 and PanAm Post, and the YouTube-based Informativo G24 website.
  • “I’ve never seen this level of disinformation, conspiracy theories and lies,” Pérez-Verdia, who is of Colombian descent, said. “It looks as if it has to be coordinated.”
  • Some of the information shared in chat groups and pulled from YouTube and Facebook goes beyond hyperbolic and caustic rhetoric.
  • Political campaigns, social justice movements and support groups have followed along, making WhatsApp a top tool for reaching voters in Latin America and from Latin America.
  • unlike the conspiracy theories that circulate in English-language news media and social media, there’s relatively little to no Spanish-language media coverage of the phenomenon nor a political counterpunch from the left.
  • Bula-Escobar, who’s also a frequent guest on Miami-based Radio Caracol — which is one of Colombia’s main radio networks and widely respected throughout Latin America — has gained an increasing amount of notoriety for pushing the claim, often seen as anti-Semitic, that billionaire George Soros is “the world’s biggest puppet master” and is the face of the American Democratic Party.
  • “Who’s going to celebrate the day, God forbid, Trump loses? Cuba; ISIS, which Trump ended; Hezbollah, which Obama gave the greenlight to enter Latin America; Iran; China … All the filth of the planet is against Donald Trump. So, if you want to be part of the filth, then go with the filth,” Bula-Escobar said in a recent episode of Informativo G24.
  • On Facebook, Puerto Rican-born pastor Melvin Moya has circulated a video titled “Signs of pedophilia” with doctored videos of Biden inappropriately touching girls at various public ceremonies to a song in the background that says, “I sniffed a girl and I liked it.” The fake video posted on Sept. 1 has received more than 33,000 likes and 2,400 comments.
  • various fake stories across WhatsApp and Facebook claim that Nicolás Maduro’s socialist party in Venezuela and U.S. communist leaders are backing Biden.
  • “It’s really just a free-for-all now,” said Raúl Martínez, a Democrat who served as mayor of the largely conservative, heavily Cuban-American city of Hialeah for 24 years and is now host of a daily radio show on Radio Caracol. “It’s mind boggling. I started in politics when I was 20. I’ve never seen it like this.”
  • “When I hear from other stations, they haven’t just sipped the Kool-Aid. They drank the whole thing,” Martínez said.
  • Radio Caracol, for its part, received unwelcome attention Aug. 22 when it aired 16 minutes of paid programming from a local businessman who launched into an anti-Black and anti-Semitic rant that claimed a Biden victory would mean that the U.S. would fall into a dictatorship led by “Jews and Blacks.” The commentator claimed that Biden is leading a political revolution “directed by racial minorities, atheists and anti-Christians” and supports killing newborn babies
  • on Friday, the editor of the Spanish-language sister paper of The Miami Herald, El Nuevo Herald, publicly apologized for its own paid-media scandal after running a publication called “Libre” as a newspaper insert that attacked Black Lives Matter and trafficked in anti-Semitic views.
  • "What kind of people are these Jews? They're always talking about the Holocaust, but have they already forgotten Kristallnacht, when Nazi thugs rampaged through Jewish shops all over Germany? So do the BLM and Antifa, only the Nazis didn't steal; they only destroyed,” the ad insert said.
  • “It’s not right wing. I don’t have a problem with right-wing stuff. It’s QAnon stuff. This is conspiracy theory. This goes beyond. This is new. This is a new phenomenon in Spanish speaking radio. We Cubans are not normal,” Tejera laughed, “but this is new. This is crazy. This is f---ing crazy.”
Javier E

Fearful calls flood election offices as Trump attacks mail-in voting, threatening parti... - 0 views

  • Intensifying the mistrust, experts said, are the power and reach of social media. They said the quest to turn minor irregularities into signs of political malintent — enabled by an information ecosystem that rewards outrage and partisan groupthink — poses among the greatest threats to the integrity of the Nov. 3 election.
  • “The amplification of these kinds of stories can have, in and of itself, a suppressive effect,” said Vanita Gupta, president of the Leadership Conference on Civil and Human Rights. The events in Utah, she said, show the ripple effects of attacks by Trump and his allies on “legal, safe, secure voting methods.”
  • But the most lasting consequence of the false and misleading narratives coursing through the Internet, often using real examples but exaggerating them to create the appearance of an alarming trend, could be a form of democratic backsliding in parts of the country where the widespread adoption of mail balloting has been shown to expand electoral participation.
  • ...8 more annotations...
  • “Obviously, the effort to question and undermine vote by mail has worked very well,” said Justin Lee, Utah’s director of elections, faulting the “national discussion” for what he and others described as an unprecedented level of confusion threatening to derail a well-functioning system in a Republican-controlled state.
  • a powerful feedback loop has made it impossible to tune out these national controversies. One-off incidents documented by local media are flowing to partisan voices, who use their online megaphones to reframe the details as indictments of the entire balloting process.
  • The misleading narrative applied at the national level then filters back down to voters, causing them to distrust a system they have used for years.
  • Bongino, an influential conservative pundit closely aligned with Trump, shared the piece on Twitter to his nearly 2.5 million followers. “It’s only going to get worse,” he wrote on Facebook
  • The transformation of the Utah story — from a small-town technical mishap into purported proof of widespread voter fraud — illustrated to some experts the extent to which mainstream news reporting collides with the reach of social media sites and the agenda of influential political figures to stoke fear and reinforce the misconceptions of nervous voters.
  • A study released this month by Harvard University’s Berkman Klein Center for Internet and Society offered fresh evidence of the dangers posed by homegrown misinformation. For months, Trump has generated entire news cycles that serve to cast doubt about mail-in voting, which mainstream outlets have at times covered uncritically, the report found. The president’s influential allies have eagerly shared these and other stories with their vast online audiences, enhancing their reach and fomenting fresh doubt about the legitimacy of the 2020 vote.
  • “With respect to mail-in voter fraud, the driver of the disinformation campaign has been Trump, as president, supported by his campaign and Republican elites,” said Yochai Benkler, who leads the center and co-wrote the report.
  • In these and other cases, Benkler said, misconceptions and hoaxes that take root in the White House come to frame reporting in mainstream and partisan news sources alike. Any development related to the process of voting becomes fodder in a competition for narrative control.
cartergramiak

Conservative News Sites Fuel Voter Fraud Misinformation - The New York Times - 0 views

  • Harvard researchers described a “propaganda feedback loop” in right-wing media. The authors of the study, published this month through the school’s Berkman Klein Center for Internet and Society, reported that popular news outlets, rather than social media platforms, were the main drivers of a disinformation campaign meant to sow doubts about the integrity of the election
  • So far in October, Breitbart has published nearly 30 articles with the tag “voter fraud.”
  • As the country faces a third wave of Covid-19 cases, tens of millions of Americans plan to mail their ballots, and more than 25 states have expanded access to universal mail voting. The voting system, stressed by greater demand, has struggled in places with ballots sent to incorrect addresses or improperly filled out
  • ...17 more annotations...
  • Election experts have calculated that, in a 20-year period, fraud involving mailed ballots has affected 0.00006 percent of individual votes, or one case per state every six or seven years.
  • Among the billions of votes cast from 2000 to 2012, there were 491 cases of absentee-ballot fraud, according to an investigation conducted at Arizona State University’s journalism school
  • intentional voter fraud is extremely uncommon and rarely organized, according to decades of research.
  • In June, The Washington Post and the nonprofit Electronic Registration Information Center analyzed data from three vote-by-mail states and found 372 possible cases of double voting or voting on behalf of dead people in 2016 and 2018, or 0.0025 percent of the 14.6 million mailed ballots.
  • Mr. Trump’s effort to discredit mail-in voting follows decades of disinformation about voter impersonation, voting by noncitizens and double voting, often promoted by Republican leaders.
  • Voting by mail under normal circumstances does not appear to give either major party an advantage, according to a study this spring by Stanford University’s Institute for Economic Policy Research.
  • But many conservative outlets have promoted the idea that fraud involving mailed ballots could tip the scales in favor of Democrats.
  • Mr. Stedman said right-leaning outlets sometimes conflated fraud with the statistically insignificant administrative mishaps that occur in every American election
  • In a similar cycle, the Fox News host Sean Hannity and conservative publications magnified the reach of a deceptive video released last month by Project Veritas, a group run by the conservative activist James O’Keefe. The video claimed without named sources or verifiable evidence that the campaign for Representative Ilhan Omar, a Minnesota Democrat, was collecting ballots illegally.
  • Stephen J. Stedman, a senior fellow at the Freeman Spogli Institute for International Studies at Stanford, said he thought “about disinformation in this country as almost an information ecology — it’s not an organic thing from the bottom up.”
  • Breitbart, The Washington Examiner and others amplify false claims of rampant cheating in what a new Harvard study calls a “propaganda feedback loop.”
  • The Washington Examiner, Breitbart News, The Gateway Pundit and The Washington Times are among the sites that have posted articles with headlines giving weight to the conspiracy theory that voter fraud is rampant and could swing the election to the left, a theory that has been repeatedly debunked by data.
  • “EXCLUSIVE: California Man Finds THOUSANDS of What Appear to be Unopened Ballots in Garbage Dumpster — Workers Quickly Try to Cover Them Up — We are Working to Verify.” The envelopes turned out to be empty and discarded legally in 2018. Gateway Pundit later updated the headline, but not before its original speculation had gone viral.
  • Pennsylvania’s elections chief said that the discarded ballots were a “bad error” by a seasonal contractor, not “intentional fraud.” Mr. Trump cited the discarded Pennsylvania ballots several times as an example of fraud, including in last month’s presidential debate.
  • “FEDS: Military Ballots Discarded in ‘Troubling’ Discovery. All Opened Ballots were Cast for Trump.” Headlines on the same issue in The Washington Times were similar: “Feds investigating discarded mail-in ballots cast for Trump in Pennsylvania” and “FBI downplays election fraud as suspected ballot issues found in Pennsylvania, Texas.” A Washington Times opinion piece on the matter had the headline “Trump ballots in trash, oh my.”
  • “DESTROYED: Tons of Trump mail-in ballot applications SHREDDED in back of tractor-trailer headed for Pennsylvania.” The material was actually printing waste from a direct mail company.
  • RIGGED ELECTION!” He linked to a Breitbart article that included a transcript of Attorney General William P. Barr’s telling the Fox News host Maria Bartiromo that voting by mail “absolutely opens the floodgates to fraud.”
xaviermcelderry

Tuesday's Debate Made Clear the Gravest Threat to the Election: The President Himself -... - 0 views

  • Mr. Trump’s unwillingness to say he would abide by the result, and his disinformation campaign about the integrity of the American electoral system,
  • since 1788 (a messy first experiment, which stretched just under a month), through civil wars, world wars and natural disasters now faces the gravest challenge in its history to the way it chooses a leader and peacefully transfers power.
  • “We have never heard a president deliberately cast doubt on an election’s integrity this way a month before it happened,” said Michael Beschloss, a presidential historian and the author of “Presidents of War.” “This is the kind of thing we have preached to other countries that they should not do. It reeks of autocracy, not democracy.”
  • ...2 more annotations...
  • Mr. Trump himself has provided no evidence to back up his assertions, apart from citing a handful of Pennsylvania ballots discarded in a dumpster — and immediately tracked down, and counted, by election officials.
  • Meanwhile, the Department of Homeland Security and the F.B.I. have been issuing warnings, as recently as 24 hours before the debate, about the dangers of disinformation in what could be a tumultuous time after the election.
Javier E

How Facebook Failed the World - The Atlantic - 0 views

  • In the United States, Facebook has facilitated the spread of misinformation, hate speech, and political polarization. It has algorithmically surfaced false information about conspiracy theories and vaccines, and was instrumental in the ability of an extremist mob to attempt a violent coup at the Capitol. That much is now painfully familiar.
  • these documents show that the Facebook we have in the United States is actually the platform at its best. It’s the version made by people who speak our language and understand our customs, who take our civic problems seriously because those problems are theirs too. It’s the version that exists on a free internet, under a relatively stable government, in a wealthy democracy. It’s also the version to which Facebook dedicates the most moderation resources.
  • Elsewhere, the documents show, things are different. In the most vulnerable parts of the world—places with limited internet access, where smaller user numbers mean bad actors have undue influence—the trade-offs and mistakes that Facebook makes can have deadly consequences.
  • ...23 more annotations...
  • According to the documents, Facebook is aware that its products are being used to facilitate hate speech in the Middle East, violent cartels in Mexico, ethnic cleansing in Ethiopia, extremist anti-Muslim rhetoric in India, and sex trafficking in Dubai. It is also aware that its efforts to combat these things are insufficient. A March 2021 report notes, “We frequently observe highly coordinated, intentional activity … by problematic actors” that is “particularly prevalent—and problematic—in At-Risk Countries and Contexts”; the report later acknowledges, “Current mitigation strategies are not enough.”
  • As recently as late 2020, an internal Facebook report found that only 6 percent of Arabic-language hate content on Instagram was detected by Facebook’s systems. Another report that circulated last winter found that, of material posted in Afghanistan that was classified as hate speech within a 30-day range, only 0.23 percent was taken down automatically by Facebook’s tools. In both instances, employees blamed company leadership for insufficient investment.
  • last year, according to the documents, only 13 percent of Facebook’s misinformation-moderation staff hours were devoted to the non-U.S. countries in which it operates, whose populations comprise more than 90 percent of Facebook’s users.
  • Among the consequences of that pattern, according to the memo: The Hindu-nationalist politician T. Raja Singh, who posted to hundreds of thousands of followers on Facebook calling for India’s Rohingya Muslims to be shot—in direct violation of Facebook’s hate-speech guidelines—was allowed to remain on the platform despite repeated requests to ban him, including from the very Facebook employees tasked with monitoring hate speech.
  • The granular, procedural, sometimes banal back-and-forth exchanges recorded in the documents reveal, in unprecedented detail, how the most powerful company on Earth makes its decisions. And they suggest that, all over the world, Facebook’s choices are consistently driven by public perception, business risk, the threat of regulation, and the specter of “PR fires,” a phrase that appears over and over in the documents.
  • “It’s an open secret … that Facebook’s short-term decisions are largely motivated by PR and the potential for negative attention,” an employee named Sophie Zhang wrote in a September 2020 internal memo about Facebook’s failure to act on global misinformation threats.
  • In a memo dated December 2020 and posted to Workplace, Facebook’s very Facebooklike internal message board, an employee argued that “Facebook’s decision-making on content policy is routinely influenced by political considerations.”
  • To hear this employee tell it, the problem was structural: Employees who are primarily tasked with negotiating with governments over regulation and national security, and with the press over stories, were empowered to weigh in on conversations about building and enforcing Facebook’s rules regarding questionable content around the world. “Time and again,” the memo quotes a Facebook researcher saying, “I’ve seen promising interventions … be prematurely stifled or severely constrained by key decisionmakers—often based on fears of public and policy stakeholder responses.”
  • And although Facebook users post in at least 160 languages, the company has built robust AI detection in only a fraction of those languages, the ones spoken in large, high-profile markets such as the U.S. and Europe—a choice, the documents show, that means problematic content is seldom detected.
  • A 2020 Wall Street Journal article reported that Facebook’s top public-policy executive in India had raised concerns about backlash if the company were to do so, saying that cracking down on leaders from the ruling party might make running the business more difficult.
  • Employees weren’t placated. In dozens and dozens of comments, they questioned the decisions Facebook had made regarding which parts of the company to involve in content moderation, and raised doubts about its ability to moderate hate speech in India. They called the situation “sad” and Facebook’s response “inadequate,” and wondered about the “propriety of considering regulatory risk” when it comes to violent speech.
  • “I have a very basic question,” wrote one worker. “Despite having such strong processes around hate speech, how come there are so many instances that we have failed? It does speak on the efficacy of the process.”
  • Two other employees said that they had personally reported certain Indian accounts for posting hate speech. Even so, one of the employees wrote, “they still continue to thrive on our platform spewing hateful content.”
  • Taken together, Frances Haugen’s leaked documents show Facebook for what it is: a platform racked by misinformation, disinformation, conspiracy thinking, extremism, hate speech, bullying, abuse, human trafficking, revenge porn, and incitements to violence
  • It is a company that has pursued worldwide growth since its inception—and then, when called upon by regulators, the press, and the public to quell the problems its sheer size has created, it has claimed that its scale makes completely addressing those problems impossible.
  • Instead, Facebook’s 60,000-person global workforce is engaged in a borderless, endless, ever-bigger game of whack-a-mole, one with no winners and a lot of sore arms.
  • Zhang details what she found in her nearly three years at Facebook: coordinated disinformation campaigns in dozens of countries, including India, Brazil, Mexico, Afghanistan, South Korea, Bolivia, Spain, and Ukraine. In some cases, such as in Honduras and Azerbaijan, Zhang was able to tie accounts involved in these campaigns directly to ruling political parties. In the memo, posted to Workplace the day Zhang was fired from Facebook for what the company alleged was poor performance, she says that she made decisions about these accounts with minimal oversight or support, despite repeated entreaties to senior leadership. On multiple occasions, she said, she was told to prioritize other work.
  • A Facebook spokesperson said that the company tries “to keep people safe even if it impacts our bottom line,” adding that the company has spent $13 billion on safety since 2016. “​​Our track record shows that we crack down on abuse abroad with the same intensity that we apply in the U.S.”
  • Zhang’s memo, though, paints a different picture. “We focus upon harm and priority regions like the United States and Western Europe,” she wrote. But eventually, “it became impossible to read the news and monitor world events without feeling the weight of my own responsibility.”
  • Indeed, Facebook explicitly prioritizes certain countries for intervention by sorting them into tiers, the documents show. Zhang “chose not to prioritize” Bolivia, despite credible evidence of inauthentic activity in the run-up to the country’s 2019 election. That election was marred by claims of fraud, which fueled widespread protests; more than 30 people were killed and more than 800 were injured.
  • “I have blood on my hands,” Zhang wrote in the memo. By the time she left Facebook, she was having trouble sleeping at night. “I consider myself to have been put in an impossible spot—caught between my loyalties to the company and my loyalties to the world as a whole.”
  • What happened in the Philippines—and in Honduras, and Azerbaijan, and India, and Bolivia—wasn’t just that a very large company lacked a handle on the content posted to its platform. It was that, in many cases, a very large company knew what was happening and failed to meaningfully intervene.
  • solving problems for users should not be surprising. The company is under the constant threat of regulation and bad press. Facebook is doing what companies do, triaging and acting in its own self-interest.
Javier E

Opinion | The Government Must Say What It Knows About Covid's Origins - The New York Times - 0 views

  • By keeping evidence that seemed to provide ammunition to proponents of a lab leak theory under wraps and resisting disclosure, U.S. officials have contributed to making the topic of the pandemic’s origins more poisoned and open to manipulation by bad-faith actors.
  • Treating crucial information like a dark secret empowers those who viciously and unfairly accuse public health officials and scientists of profiting off the pandemic. As Megan K. Stack wrote in Times Opinion this spring, “Those who seek to suppress disinformation may be destined, themselves, to sow it.”
  • According to an Economist/YouGov poll published in March, 66 percent of Americans — including majorities of Democrats and independents — believe the pandemic was caused by research activities, a number that has gone up since 2020
  • ...5 more annotations...
  • The American public, however, only rarely heard refreshing honesty from their officials or even their scientists — and this tight-lipped, denialist approach appears to have only strengthened belief that the pandemic arose from carelessness during research or even, in less reality-based accounts, something deliberate
  • Only 16 percent of Americans believed that it was likely or definitely false that the emergence of the Covid virus was tied to research in a Chinese lab, while 17 percent were unsure.
  • Worse, biosafety, globally, remains insufficiently regulated. Making biosafety into a controversial topic makes it harder to move forward with necessary regulation and international effort
  • For years, scientists and government officials did not publicly talk much about the fact that a 1977 “Russian” influenza pandemic that killed hundreds of thousands of people most likely began when a vaccine trial went awry.
  • one reason for the relative silence was the fear of upsetting the burgeoning cooperation over flu surveillance and treatment by the United States, China and Russia.
Javier E

Is Argentina the First A.I. Election? - The New York Times - 0 views

  • Argentina’s election has quickly become a testing ground for A.I. in campaigns, with the two candidates and their supporters employing the technology to doctor existing images and videos and create others from scratch.
  • A.I. has made candidates say things they did not, and put them in famous movies and memes. It has created campaign posters, and triggered debates over whether real videos are actually real.
  • A.I.’s prominent role in Argentina’s campaign and the political debate it has set off underscore the technology’s growing prevalence and show that, with its expanding power and falling cost, it is now likely to be a factor in many democratic elections around the globe.
  • ...8 more annotations...
  • Experts compare the moment to the early days of social media, a technology offering tantalizing new tools for politics — and unforeseen threats.
  • For years, those fears had largely been speculative because the technology to produce such fakes was too complicated, expensive and unsophisticated.
  • His spokesman later stressed that the post was in jest and clearly labeled A.I.-generated. His campaign said in a statement that its use of A.I. is to entertain and make political points, not deceive.
  • Researchers have long worried about the impact of A.I. on elections. The technology can deceive and confuse voters, casting doubt over what is real, adding to the disinformation that can be spread by social networks.
  • Much of the content has been clearly fake. But a few creations have toed the line of disinformation. The Massa campaign produced one “deepfake” video in which Mr. Milei explains how a market for human organs would work, something he has said philosophically fits in with his libertarian views.
  • So far, the A.I.-generated content shared by the campaigns in Argentina has either been labeled A.I. generated or is so clearly fabricated that it is unlikely it would deceive even the most credulous voters. Instead, the technology has supercharged the ability to create viral content that previously would have taken teams of graphic designers days or weeks to complete.
  • To do so, campaign engineers and artists fed photos of Argentina’s various political players into an open-source software called Stable Diffusion to train their own A.I. system so that it could create fake images of those real people. They can now quickly produce an image or video of more than a dozen top political players in Argentina doing almost anything they ask.
  • For Halloween, the Massa campaign told its A.I. to create a series of cartoonish images of Mr. Milei and his allies as zombies. The campaign also used A.I. to create a dramatic movie trailer, featuring Buenos Aires, Argentina’s capital, burning, Mr. Milei as an evil villain in a straitjacket and Mr. Massa as the hero who will save the country.
Javier E

How Nations Are Losing a Global Race to Tackle A.I.'s Harms - The New York Times - 0 views

  • When European Union leaders introduced a 125-page draft law to regulate artificial intelligence in April 2021, they hailed it as a global model for handling the technology.
  • E.U. lawmakers had gotten input from thousands of experts for three years about A.I., when the topic was not even on the table in other countries. The result was a “landmark” policy that was “future proof,” declared Margrethe Vestager, the head of digital policy for the 27-nation bloc.
  • Then came ChatGPT.
  • ...45 more annotations...
  • The eerily humanlike chatbot, which went viral last year by generating its own answers to prompts, blindsided E.U. policymakers. The type of A.I. that powered ChatGPT was not mentioned in the draft law and was not a major focus of discussions about the policy. Lawmakers and their aides peppered one another with calls and texts to address the gap, as tech executives warned that overly aggressive regulations could put Europe at an economic disadvantage.
  • Even now, E.U. lawmakers are arguing over what to do, putting the law at risk. “We will always be lagging behind the speed of technology,” said Svenja Hahn, a member of the European Parliament who was involved in writing the A.I. law.
  • Lawmakers and regulators in Brussels, in Washington and elsewhere are losing a battle to regulate A.I. and are racing to catch up, as concerns grow that the powerful technology will automate away jobs, turbocharge the spread of disinformation and eventually develop its own kind of intelligence.
  • Nations have moved swiftly to tackle A.I.’s potential perils, but European officials have been caught off guard by the technology’s evolution, while U.S. lawmakers openly concede that they barely understand how it works.
  • The absence of rules has left a vacuum. Google, Meta, Microsoft and OpenAI, which makes ChatGPT, have been left to police themselves as they race to create and profit from advanced A.I. systems
  • At the root of the fragmented actions is a fundamental mismatch. A.I. systems are advancing so rapidly and unpredictably that lawmakers and regulators can’t keep pace
  • That gap has been compounded by an A.I. knowledge deficit in governments, labyrinthine bureaucracies and fears that too many rules may inadvertently limit the technology’s benefits.
  • Even in Europe, perhaps the world’s most aggressive tech regulator, A.I. has befuddled policymakers.
  • The European Union has plowed ahead with its new law, the A.I. Act, despite disputes over how to handle the makers of the latest A.I. systems.
  • A final agreement, expected as soon as Wednesday, could restrict certain risky uses of the technology and create transparency requirements about how the underlying systems work. But even if it passes, it is not expected to take effect for at least 18 months — a lifetime in A.I. development — and how it will be enforced is unclear.
  • The result has been a sprawl of responses. President Biden issued an executive order in October about A.I.’s national security effects as lawmakers debate what, if any, measures to pass. Japan is drafting nonbinding guidelines for the technology, while China has imposed restrictions on certain types of A.I. Britain has said existing laws are adequate for regulating the technology. Saudi Arabia and the United Arab Emirates are pouring government money into A.I. research.
  • Many companies, preferring nonbinding codes of conduct that provide latitude to speed up development, are lobbying to soften proposed regulations and pitting governments against one another.
  • “No one, not even the creators of these systems, know what they will be able to do,” said Matt Clifford, an adviser to Prime Minister Rishi Sunak of Britain, who presided over an A.I. Safety Summit last month with 28 countries. “The urgency comes from there being a real question of whether governments are equipped to deal with and mitigate the risks.”
  • Europe takes the lead
  • In mid-2018, 52 academics, computer scientists and lawyers met at the Crowne Plaza hotel in Brussels to discuss artificial intelligence. E.U. officials had selected them to provide advice about the technology, which was drawing attention for powering driverless cars and facial recognition systems.
  • As they discussed A.I.’s possible effects — including the threat of facial recognition technology to people’s privacy — they recognized “there were all these legal gaps, and what happens if people don’t follow those guidelines?”
  • In 2019, the group published a 52-page report with 33 recommendations, including more oversight of A.I. tools that could harm individuals and society.
  • By October, the governments of France, Germany and Italy, the three largest E.U. economies, had come out against strict regulation of general purpose A.I. models for fear of hindering their domestic tech start-ups. Others in the European Parliament said the law would be toothless without addressing the technology. Divisions over the use of facial recognition technology also persisted.
  • So when the A.I. Act was unveiled in 2021, it concentrated on “high risk” uses of the technology, including in law enforcement, school admissions and hiring. It largely avoided regulating the A.I. models that powered them unless listed as dangerous
  • “They sent me a draft, and I sent them back 20 pages of comments,” said Stuart Russell, a computer science professor at the University of California, Berkeley, who advised the European Commission. “Anything not on their list of high-risk applications would not count, and the list excluded ChatGPT and most A.I. systems.”
  • E.U. leaders were undeterred. “Europe may not have been the leader in the last wave of digitalization, but it has it all to lead the next one,” Ms. Vestager said when she introduced the policy at a news conference in Brussels.
  • Nineteen months later, ChatGPT arrived.
  • In 2020, European policymakers decided that the best approach was to focus on how A.I. was used and not the underlying technology. A.I. was not inherently good or bad, they said — it depended on how it was applied.
  • The Washington game
  • Lacking tech expertise, lawmakers are increasingly relying on Anthropic, Microsoft, OpenAI, Google and other A.I. makers to explain how it works and to help create rules.
  • “We’re not experts,” said Representative Ted Lieu, Democrat of California, who hosted Sam Altman, OpenAI’s chief executive, and more than 50 lawmakers at a dinner in Washington in May. “It’s important to be humble.”
  • Tech companies have seized their advantage. In the first half of the year, many of Microsoft’s and Google’s combined 169 lobbyists met with lawmakers and the White House to discuss A.I. legislation, according to lobbying disclosures. OpenAI registered its first three lobbyists and a tech lobbying group unveiled a $25 million campaign to promote A.I.’s benefits this year.
  • In that same period, Mr. Altman met with more than 100 members of Congress, including former Speaker Kevin McCarthy, Republican of California, and the Senate leader, Chuck Schumer, Democrat of New York. After testifying in Congress in May, Mr. Altman embarked on a 17-city global tour, meeting world leaders including President Emmanuel Macron of France, Mr. Sunak and Prime Minister Narendra Modi of India.
  • The White House announced that the four companies had agreed to voluntary commitments on A.I. safety, including testing their systems through third-party overseers — which most of the companies were already doing.
  • “It was brilliant,” Mr. Smith said. “Instead of people in government coming up with ideas that might have been impractical, they said, ‘Show us what you think you can do and we’ll push you to do more.’”
  • In a statement, Ms. Raimondo said the federal government would keep working with companies so “America continues to lead the world in responsible A.I. innovation.”
  • Over the summer, the Federal Trade Commission opened an investigation into OpenAI and how it handles user data. Lawmakers continued welcoming tech executives.
  • In September, Mr. Schumer was the host of Elon Musk, Mark Zuckerberg of Meta, Sundar Pichai of Google, Satya Nadella of Microsoft and Mr. Altman at a closed-door meeting with lawmakers in Washington to discuss A.I. rules. Mr. Musk warned of A.I.’s “civilizational” risks, while Mr. Altman proclaimed that A.I. could solve global problems such as poverty.
  • A.I. companies are playing governments off one another. In Europe, industry groups have warned that regulations could put the European Union behind the United States. In Washington, tech companies have cautioned that China might pull ahead.
  • “China is way better at this stuff than you imagine,” Mr. Clark of Anthropic told members of Congress in January.
  • In May, Ms. Vestager, Ms. Raimondo and Antony J. Blinken, the U.S. secretary of state, met in Lulea, Sweden, to discuss cooperating on digital policy.
  • After two days of talks, Ms. Vestager announced that Europe and the United States would release a shared code of conduct for safeguarding A.I. “within weeks.” She messaged colleagues in Brussels asking them to share her social media post about the pact, which she called a “huge step in a race we can’t afford to lose.”
  • Months later, no shared code of conduct had appeared. The United States instead announced A.I. guidelines of its own.
  • Little progress has been made internationally on A.I. With countries mired in economic competition and geopolitical distrust, many are setting their own rules for the borderless technology.
  • Yet “weak regulation in another country will affect you,” said Rajeev Chandrasekhar, India’s technology minister, noting that a lack of rules around American social media companies led to a wave of global disinformation.
  • “Most of the countries impacted by those technologies were never at the table when policies were set,” he said. “A.I will be several factors more difficult to manage.”
  • Even among allies, the issue has been divisive. At the meeting in Sweden between E.U. and U.S. officials, Mr. Blinken criticized Europe for moving forward with A.I. regulations that could harm American companies, one attendee said. Thierry Breton, a European commissioner, shot back that the United States could not dictate European policy, the person said.
  • Some policymakers said they hoped for progress at an A.I. safety summit that Britain held last month at Bletchley Park, where the mathematician Alan Turing helped crack the Enigma code used by the Nazis. The gathering featured Vice President Kamala Harris; Wu Zhaohui, China’s vice minister of science and technology; Mr. Musk; and others.
  • The upshot was a 12-paragraph statement describing A.I.’s “transformative” potential and “catastrophic” risk of misuse. Attendees agreed to meet again next year.
  • The talks, in the end, produced a deal to keep talking.

Why YouTube Has Survived Russia's Social Media Crackdown | Time - 0 views

  • In a style part investigative journalism, part polemic, the video’s hosts report that one of President Vladimir Putin’s allies, Russian senator Valentina Matviyenko, owns a multimillion-dollar villa on the Italian seafront. The video contrasts the luxurious lifestyle of Matviyenko and her family with footage of dead Russian soldiers, and with images of Russian artillery hitting civilian apartment buildings in Ukraine. A voiceover calls the war “senseless” and “unimaginable.” A slide at the end urges Russians to head to squares in their cities to protest at specific dates and times. In less than a week, the video racked up more than 4 million views.
  • Russian TV news is dominated by the misleading narrative that Russia’s invasion of Ukraine is actually a peace-keeping exercise. Yet YouTube has largely been spared from the Kremlin’s crackdown on American social media platforms since Russia invaded Ukraine nearly a month ago.
  • The app had been a particular venue for activism: Many Russian celebrities spoke out against the invasion of Ukraine in their Instagram stories, and Navalny’s Instagram page posted a statement criticizing the war, and calling on Russians to come out in protest.
  • On March 11, YouTube’s parent company Google announced that it would block Russian state-backed media globally, including within Russia. The policy was an expansion of an earlier announcement that these channels would be blocked within the European Union. “Our Community Guidelines prohibit content denying, minimizing or trivializing well-documented violent events, and we remove content about Russia’s invasion in Ukraine that violates this policy,” Google said in a statement. “In line with that, effective immediately, we are also blocking YouTube channels associated with Russian state-funded media, globally.”
  • That could leave many millions of Russians cut off from independent news and content shared by opposition activists like Navalny’s team. (It would also effectively delete 75 million YouTube users, or some 4% of the platform’s global total—representing a small but still-significant portion of Google’s overall profits.)
  • Today, YouTube remains the most significant way for tens of millions of ordinary Russians to receive largely uncensored information from the outside world.
  • Part of the reason for YouTube’s survival amid the crackdown is its popularity, experts say. “YouTube is by far and away the most popular social media platform in Russia,” says Justin Sherman, a non-resident fellow at the Atlantic Council’s cyber statecraft initiative. The platform is even more popular than VK, the Russian-owned answer to Facebook.
  • Still, Sherman says the situation is volatile, with Russia now more likely than ever before to ban YouTube. For an authoritarian government like Russia’s, “part of the decision to allow a foreign platform in your country is that you get to use it to spread propaganda and disinformation, even if people use it to spread truth and organize against you,” he says. “If you start losing the ability to spread misinformation and propaganda, but people can still use it to spread truth and organize, then all of a sudden, you start wondering why you’re allowing that platform in your country in the first place.” YouTube did not respond to a request for comment.
  • On the same day as Navalny’s channel posted the video about Matviyenko, elsewhere on YouTube a very different spectacle was playing out. In a video posted to the channel of the Kremlin-funded media outlet RT (formerly known as Russia Today), a commentator dismissed evidence of Russian bombings of Ukrainian cities. She blamed “special forces of NATO countries” for allegedly faking images of bombed-out Ukrainian schools, kindergartens and other buildings.
  • “YouTube has, over the years, been a really important place for spreading Russian propaganda,” Donovan said in an interview with TIME days before YouTube banned Russian state-backed media.
  • In July 2021, the Russian government passed a law that would require foreign tech companies with more than 500,000 users to open a local office within Russia. (A similar law passed previously in India had been used by the government there to pressure tech companies to take down opposition accounts and posts critical of the government, by threatening employees with arrest.)
  • The heightened risk to free expression in Russia Experts say that Russia’s ongoing crackdown on social media platforms heralds a significant shift in the shape of the Russian internet—and a potential end to the era where the Kremlin tolerated largely free expression on YouTube in return for access to a tool that allowed it to spread disinformation far and wide.