
History Readings / Group items tagged: algorithm


Javier E

'Fiction is outperforming reality': how YouTube's algorithm distorts truth | Technology...

  • There are 1.5 billion YouTube users in the world, which is more than the number of households that own televisions. What they watch is shaped by this algorithm, which skims and ranks billions of videos to identify 20 “up next” clips that are both relevant to a previous video and most likely, statistically speaking, to keep a person hooked on their screen.
  • Company insiders tell me the algorithm is the single most important engine of YouTube’s growth
  • YouTube engineers describe it as one of the “largest scale and most sophisticated industrial recommendation systems in existence”
  • Lately, it has also become one of the most controversial. The algorithm has been found to be promoting conspiracy theories about the Las Vegas mass shooting and incentivising, through recommendations, a thriving subculture that targets children with disturbing content
  • One YouTube creator who was banned from making advertising revenues from his strange videos – which featured his children receiving flu shots, removing earwax, and crying over dead pets – told a reporter he had only been responding to the demands of Google’s algorithm. “That’s what got us out there and popular,” he said. “We learned to fuel it and do whatever it took to please the algorithm.”
  • academics have speculated that YouTube’s algorithms may have been instrumental in fuelling disinformation during the 2016 presidential election. “YouTube is the most overlooked story of 2016,” Zeynep Tufekci, a widely respected sociologist and technology critic, tweeted back in October. “Its search and recommender algorithms are misinformation engines.”
  • Those are not easy questions to answer. Like all big tech companies, YouTube does not allow us to see the algorithms that shape our lives. They are secret formulas, proprietary software, and only select engineers are entrusted to work on the algorithm
  • Guillaume Chaslot, a 36-year-old French computer programmer with a PhD in artificial intelligence, was one of those engineers.
  • The experience led him to conclude that the priorities YouTube gives its algorithms are dangerously skewed.
  • Chaslot said none of his proposed fixes were taken up by his managers. “There are many ways YouTube can change its algorithms to suppress fake news and improve the quality and diversity of videos people see,” he says. “I tried to change YouTube from the inside but it didn’t work.”
  • Chaslot explains that the algorithm never stays the same. It is constantly changing the weight it gives to different signals: the viewing patterns of a user, for example, or the length of time a video is watched before someone clicks away.
  • The engineers he worked with were responsible for continuously experimenting with new formulas that would increase advertising revenues by extending the amount of time people watched videos. “Watch time was the priority,” he recalls. “Everything else was considered a distraction.”
  • Chaslot was fired by Google in 2013, ostensibly over performance issues. He insists he was let go after agitating for change within the company, using his personal time to team up with like-minded engineers to propose changes that could diversify the content people see.
  • He was especially worried about the distortions that might result from a simplistic focus on showing people videos they found irresistible, creating filter bubbles, for example, that only show people content that reinforces their existing view of the world.
  • “YouTube is something that looks like reality, but it is distorted to make you spend more time online,” he tells me when we meet in Berkeley, California. “The recommendation algorithm is not optimising for what is truthful, or balanced, or healthy for democracy.”
  • YouTube told me that its recommendation system had evolved since Chaslot worked at the company and now “goes beyond optimising for watchtime”.
  • It did not say why Google, which acquired YouTube in 2006, waited over a decade to make those changes
  • Chaslot believes such changes are mostly cosmetic, and have failed to fundamentally alter some disturbing biases that have evolved in the algorithm
  • Chaslot’s program finds videos through a word search, selecting a “seed” video to begin with, and recording several layers of videos that YouTube recommends in the “up next” column. It does so with no viewing history, ensuring the videos being detected are YouTube’s generic recommendations, rather than videos personalised to a user. And it repeats the process thousands of times, accumulating layers of data about YouTube recommendations to build up a picture of the algorithm’s preferences. (A minimal sketch of this kind of crawl appears after this list.)
  • Each study finds something different, but the research suggests YouTube systematically amplifies videos that are divisive, sensational and conspiratorial.
  • When his program found a seed video by searching the query “who is Michelle Obama?” and then followed the chain of “up next” suggestions, for example, most of the recommended videos said she “is a man”
  • He believes one of the most shocking examples was detected by his program in the run-up to the 2016 presidential election. As he observed in a short, largely unnoticed blogpost published after Donald Trump was elected, the impact of YouTube’s recommendation algorithm was not neutral during the presidential race: it was pushing videos that were, in the main, helpful to Trump and damaging to Hillary Clinton.
  • “It was strange,” he explains to me. “Wherever you started, whether it was from a Trump search or a Clinton search, the recommendation algorithm was much more likely to push you in a pro-Trump direction.”
  • Trump won the electoral college as a result of 80,000 votes spread across three swing states. There were more than 150 million YouTube users in the US. The videos contained in Chaslot’s database of YouTube-recommended election videos were watched, in total, more than 3bn times before the vote in November 2016.
  • “Algorithms that shape the content we see can have a lot of impact, particularly on people who have not made up their mind,”
  • “Gentle, implicit, quiet nudging can over time edge us toward choices we might not have otherwise made.”
  • But what was most compelling was how often Chaslot’s software detected anti-Clinton conspiracy videos appearing “up next” beside other videos.
  • I spent weeks watching, sorting and categorising the trove of videos with Erin McCormick, an investigative reporter and expert in database analysis. From the start, we were stunned by how many extreme and conspiratorial videos had been recommended, and the fact that almost all of them appeared to be directed against Clinton.
  • “This research captured the apparent direction of YouTube’s political ecosystem,” he says. “That has not been done before.”
  • There were too many videos in the database for us to watch them all, so we focused on 1,000 of the top-recommended videos. We sifted through them one by one to determine whether the content was likely to have benefited Trump or Clinton. Just over a third of the videos were either unrelated to the election or contained content that was broadly neutral or even-handed. Of the remaining 643 videos, 551 favoured Trump, while only 92 favoured the Clinton campaign, a ratio of roughly six to one.
  • The sample we had looked at suggested Chaslot’s conclusion was correct: YouTube was six times more likely to recommend videos that aided Trump than his adversary.
  • A YouTube spokesperson added: “Our search and recommendation systems reflect what people search for, the number of videos available, and the videos people choose to watch on YouTube. That’s not a bias towards any particular candidate; that is a reflection of viewer interest.”
  • YouTube seemed to be saying that its algorithm was a neutral mirror of the desires of the people who use it – if we don’t like what it does, we have ourselves to blame. How does YouTube interpret “viewer interest” – and aren’t “the videos people choose to watch” influenced by what the company shows them?
  • Offered the choice, we may instinctively click on a video of a dead man in a Japanese forest, or a fake news clip claiming Bill Clinton raped a 13-year-old. But are those in-the-moment impulses really a reflection of the content we want to be fed?
  • YouTube’s recommendation system has probably figured out that edgy and hateful content is engaging. “This is a bit like an autopilot cafeteria in a school that has figured out children have sweet teeth, and also like fatty and salty foods,” she says. “So you make a line offering such food, automatically loading the next plate as soon as the bag of chips or candy in front of the young person has been consumed.”
  • Once that gets normalised, however, what is fractionally more edgy or bizarre becomes, Tufekci says, novel and interesting. “So the food gets higher and higher in sugar, fat and salt – natural human cravings – while the videos recommended and auto-played by YouTube get more and more bizarre or hateful.”
  • “This is important research because it seems to be the first systematic look into how YouTube may have been manipulated,” he says, raising the possibility that the algorithm was gamed as part of the same propaganda campaigns that flourished on Twitter and Facebook.
  • “We believe that the activity we found was limited because of various safeguards that we had in place in advance of the 2016 election, and the fact that Google’s products didn’t lend themselves to the kind of micro-targeting or viral dissemination that these actors seemed to prefer.”
  • Senator Mark Warner, the ranking Democrat on the intelligence committee, later wrote to the company about the algorithm, which he said seemed “particularly susceptible to foreign influence”. The senator demanded to know what the company was specifically doing to prevent a “malign incursion” of YouTube’s recommendation system. Walker, in his written reply, offered few specifics
  • Tristan Harris, a former Google insider turned tech whistleblower, likes to describe Facebook as a “living, breathing crime scene for what happened in the 2016 election” that federal investigators have no access to. The same might be said of YouTube. About half the videos Chaslot’s program detected being recommended during the election have now vanished from YouTube – many of them taken down by their creators. Chaslot has always thought this suspicious. These were videos with titles such as “Must Watch!! Hillary Clinton tried to ban this video”, watched millions of times before they disappeared. “Why would someone take down a video that has been viewed millions of times?” he asks
  • I shared the entire database of 8,000 YouTube-recommended videos with John Kelly, the chief executive of the commercial analytics firm Graphika, which has been tracking political disinformation campaigns. He ran the list against his own database of Twitter accounts active during the election, and concluded many of the videos appeared to have been pushed by networks of Twitter sock puppets and bots controlled by pro-Trump digital consultants with “a presumably unsolicited assist” from Russia.
  • “I don’t have smoking-gun proof of who logged in to control those accounts,” he says. “But judging from the history of what we’ve seen those accounts doing before, and the characteristics of how they tweet and interconnect, they are assembled and controlled by someone – someone whose job was to elect Trump.”
  • After the Senate’s correspondence with Google over possible Russian interference with YouTube’s recommendation algorithm was made public last week, YouTube sent me a new statement. It emphasised changes it made in 2017 to discourage the recommendation system from promoting some types of problematic content. “We appreciate the Guardian’s work to shine a spotlight on this challenging issue,” it added. “We know there is more to do here and we’re looking forward to making more announcements in the months ahead.”
  • In the months leading up to the election, the Next News Network turned into a factory of anti-Clinton news and opinion, producing dozens of videos a day and reaching an audience comparable to that of MSNBC’s YouTube channel. Chaslot’s research indicated that the success of its founder, Gary Franchi, could largely be credited to YouTube’s algorithms, which consistently amplified his videos to be played “up next”. YouTube had sharply dismissed Chaslot’s research.
  • I contacted Franchi to see who was right. He sent me screen grabs of the private data given to people who upload YouTube videos, including a breakdown of how their audiences found their clips. The largest source of traffic to the Bill Clinton rape video, which was viewed 2.4m times in the month leading up to the election, was YouTube recommendations.
  • The same was true of all but one of the videos Franchi sent me data for. A typical example was a Next News Network video entitled “WHOA! HILLARY THINKS CAMERA’S OFF… SENDS SHOCK MESSAGE TO TRUMP” in which Franchi, pointing to a tiny movement of Clinton’s lips during a TV debate, claims she says “fuck you” to her presidential rival. The data Franchi shared revealed that in the month leading up to the election, 73% of the traffic to the video – amounting to 1.2m of its views – was due to YouTube recommendations. External traffic accounted for only 3% of the views.
  • many of the other creators of anti-Clinton videos I spoke to were amateur sleuths or part-time conspiracy theorists. Typically, they might receive a few hundred views on their videos, so they were shocked when their anti-Clinton videos started to receive millions of views, as if they were being pushed by an invisible force.
  • In every case, the largest source of traffic – the invisible force – came from the clips appearing in the “up next” column. William Ramsey, an occult investigator from southern California who made “Irrefutable Proof: Hillary Clinton Has a Seizure Disorder!”, shared screen grabs that showed the recommendation algorithm pushed his video even after YouTube had emailed him to say it violated its guidelines. Ramsey’s data showed the video was watched 2.4m times by US-based users before election day. “For a nobody like me, that’s a lot,” he says. “Enough to sway the election, right?”
  • Daniel Alexander Cannon, a conspiracy theorist from South Carolina, tells me: “Every video I put out about the Clintons, YouTube would push it through the roof.” His best-performing clip was a video titled “Hillary and Bill Clinton ‘The 10 Photos You Must See’”, essentially a slideshow of appalling (and seemingly doctored) images of the Clintons with voiceover in which Cannon speculates on their health. It has been seen 3.7m times on YouTube, and 2.9m of those views, Cannon said, came from “up next” recommendations.
  • his research also does something more important: revealing how thoroughly our lives are now mediated by artificial intelligence.
  • Less than a generation ago, the way voters viewed their politicians was largely shaped by tens of thousands of newspaper editors, journalists and TV executives. Today, the invisible codes behind the big technology platforms have become the new kingmakers.
  • They pluck from obscurity people like Dave Todeschini, a retired IBM engineer who “let off steam” during the election by recording himself opining on Clinton’s supposed involvement in paedophilia, child sacrifice and cannibalism. “It was crazy, it was nuts,” he said of the avalanche of traffic to his YouTube channel, which by election day had more than 2m views.
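
The crawling methodology described above (a seed search, then repeatedly following the “up next” column with no viewing history) amounts to a breadth-first walk over YouTube’s recommendation graph. The Python sketch below illustrates that idea only; it is not Chaslot’s actual tool. The two fetch functions are hypothetical placeholders for logged-out scraping of search results and the “up next” list, and the depth and fan-out parameters are arbitrary.

```python
from collections import Counter, deque

def search_seed_videos(query):
    """Hypothetical placeholder: return a handful of video IDs for a search
    query, gathered while logged out so no personal history is involved."""
    raise NotImplementedError

def get_up_next(video_id, n=20):
    """Hypothetical placeholder: return the video IDs shown in the
    'up next' column for a given video, again without any login."""
    raise NotImplementedError

def crawl_recommendations(query, depth=4, per_video=20):
    """Follow chains of 'up next' suggestions from a seed search and count
    how often each video is recommended across all layers of the crawl."""
    frontier = deque((vid, 0) for vid in search_seed_videos(query))
    counts = Counter()
    seen = set()
    while frontier:
        video_id, layer = frontier.popleft()
        if layer >= depth:
            continue
        for rec in get_up_next(video_id, per_video):
            counts[rec] += 1              # how insistently the algorithm pushes this clip
            if rec not in seen:
                seen.add(rec)
                frontier.append((rec, layer + 1))
    return counts.most_common()           # most-recommended videos first

# e.g. crawl_recommendations("who is Michelle Obama?") would surface the clips
# YouTube most consistently places "up next" for that query.
```

Counting how often each video recurs across layers is what lets a crawl like this approximate the “algorithm’s preferences” the article describes.
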
Javier E

Why Facebook won't let you turn off its news feed algorithm - The Washington Post

  • In at least two experiments over the years, Facebook has explored what happens when it turns off its controversial news feed ranking system — the software that decides for each user which posts they’ll see and in what order, internal documents show. That leaves users to see all the posts from all of their friends in simple, chronological order.
  • The internal research documents, some previously unreported, help to explain why Facebook seems so wedded to its automated ranking system, known as the news feed algorithm.
  • previously reported internal documents, which Haugen provided to regulators and media outlets, including The Washington Post, have shown how Facebook crafts its ranking system to keep users hooked, sometimes at the cost of angering or misinforming them.
  • In testimony to U.S. Congress and abroad, whistleblower Frances Haugen has pointed to the algorithm as central to the social network’s problems, arguing that it systematically amplifies and rewards hateful, divisive, misleading and sometimes outright false content by putting it at the top of users’ feeds.
  • The political push raises an old question for Facebook: Why not just give users the power to turn off their feed ranking algorithms voluntarily? Would letting users opt to see every post from the people they follow, in chronological order, be so bad?
  • The documents suggest that Facebook’s defense of algorithmic rankings stems not only from its business interests, but from a paternalistic conviction, backed by data, that its sophisticated personalization software knows what users want better than the users themselves
  • Since 2009, three years after it launched the news feed, Facebook has used software that predicts which posts each user will find most interesting and places those at the top of their feeds while burying others. That system, which has evolved in complexity to take in as many as 10,000 pieces of information about each post, has fueled the news feed’s growth into a dominant information source. (A sketch contrasting ranked and chronological ordering appears after this list.)
  • The proliferation of false information, conspiracy theories and partisan propaganda on Facebook and other social networks has led some to wonder whether we wouldn’t all be better off with a simpler, older system: one that simply shows people all the messages, pictures and videos from everyone they follow, in the order they were posted.
  • That was more or less how Instagram worked until 2016, and Twitter until 2017.
  • But Facebook has long resisted it.
  • they appear to have been informed mostly by data on user engagement, at least until recently
  • “Whenever we’ve tried to compare ranked and unranked feeds, ranked feeds just seem better,” wrote an employee in a memo titled, “Is ranking good?”, which was posted to the company’s internal network, Facebook Workplace, in 2018
  • That employee, who said they had worked on and studied the news feed for two years, went on to question whether automated ranking might also come with costs that are harder to measure than the benefits. “Even asking this question feels slightly blasphemous at Facebook,” they added.
  • In 2014, another internal report, titled “Feed ranking is good,” summarized the results of tests that found allowing users to turn off the algorithm led them to spend less time in their news feeds, post less often and interact less.
  • Without an algorithm deciding which posts to show at the top of users’ feeds, concluded the report’s author, whose name was redacted, “Facebook would probably be shrinking.”
  • What many users may not realize is that Facebook actually does offer an option to see a mostly chronological feed, called “most recent”
  • There’s a catch, though: the setting only applies for as long as you stay logged in. When you leave and come back, the ranking algorithm will be back on.
  • The longer Facebook left the user’s feed in chronological order, the less time they spent on it, the less they posted, and the less often they returned to Facebook.
  • A separate report from 2018, first described by Alex Kantrowitz’s newsletter Big Technology, found that turning off the algorithm unilaterally for a subset of Facebook users, and showing them posts mostly in the order they were posted, led to “massive engagement drops.” Notably, it also found that users saw more low-quality content in their feeds, at least at first, although the company’s researchers were able to mitigate that with more aggressive “integrity” measures.
  • Nick Clegg, the company’s vice president of global affairs, said in a TV interview last month that if Facebook were to remove the news feed algorithm, “the first thing that would happen is that people would see more, not less, hate speech; more, not less, misinformation; more, not less, harmful content. Why? Because those algorithmic systems precisely are designed like a great sort of giant spam filter to identify and deprecate and downgrade bad content.”
  • because the algorithm has always been there, Facebook users haven’t been given the time or the tools to curate their feeds for themselves in thoughtful ways. In other words, Facebook has never really given a chronological news feed a fair shot to succeed
  • Some critics say that’s a straw-man argument. Simply removing automated rankings for a subset of users, on a social network that has been built to rely heavily on those systems, is not the same as designing a service to work well without them, says Ben Grosser, a professor of new media at the University of Illinois at Urbana-Champaign.
  • Those users’ feeds are no longer curated, but the posts they’re seeing are still influenced by the algorithm’s reward systems. That is, they’re still seeing content from people and publishers who are vying for the likes, shares and comments that drive Facebook’s recommendations.
  • “My experience from watching a chronological feed within a social network that isn’t always trying to optimize for growth is that a lot of these problems” — such as hate speech, trolling and manipulative media — “just don’t exist.”
  • Facebook has not taken an official stand on the legislation that would require social networks to offer a chronological feed option, but Clegg said in an op-ed last month that the company is open to regulation around algorithms, transparency, and user controls. Twitter, for its part, signaled potential support for the bills.
  • “I think users have the right to expect social media experiences free of recommendation algorithms,” Maréchal added. “As a user, I want to have as much control over my own experience as possible, and recommendation algorithms take that control away from me.”
  • “Only companies themselves can do the experiments to find the answers. And as talented as industry researchers are, we can’t trust executives to make decisions in the public interest based on that research, or to let the public and policymakers access that research.”
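
As a rough illustration of the ranked-versus-chronological distinction discussed above, here is a minimal Python sketch. The Post fields and the single predicted_engagement score are simplifications assumed for the example; Facebook’s real ranking reportedly draws on as many as 10,000 signals per post.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Post:
    author: str
    created_at: datetime
    predicted_engagement: float   # stand-in for the thousands of signals a real ranker scores

def chronological_feed(posts):
    """The 'most recent' option: everything you follow, newest first."""
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def ranked_feed(posts, limit=25):
    """The default: whatever a model predicts you will engage with most,
    with everything else buried further down."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)[:limit]

now = datetime.now()
posts = [
    Post("close friend", now - timedelta(hours=1), predicted_engagement=0.05),
    Post("outrage page", now - timedelta(days=2), predicted_engagement=0.97),
]
print([p.author for p in chronological_feed(posts)])  # ['close friend', 'outrage page']
print([p.author for p in ranked_feed(posts)])         # ['outrage page', 'close friend']
```

The toy output shows the crux of the debate: the chronological feed surfaces the newest post from a friend, while the ranked feed surfaces whatever the model predicts will be most engaging, regardless of when, or by whom, it was posted.
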
Javier E

Facebook Papers: 'History Will Not Judge Us Kindly' - The Atlantic

  • Facebook’s hypocrisies, and its hunger for power and market domination, are not secret. Nor is the company’s conflation of free speech and algorithmic amplification
  • But the events of January 6 proved for many people—including many in Facebook’s workforce—to be a breaking point.
  • these documents leave little room for doubt about Facebook’s crucial role in advancing the cause of authoritarianism in America and around the world. Authoritarianism predates the rise of Facebook, of course. But Facebook makes it much easier for authoritarians to win.
  • Again and again, the Facebook Papers show staffers sounding alarms about the dangers posed by the platform—how Facebook amplifies extremism and misinformation, how it incites violence, how it encourages radicalization and political polarization. Again and again, staffers reckon with the ways in which Facebook’s decisions stoke these harms, and they plead with leadership to do more.
  • And again and again, staffers say, Facebook’s leaders ignore them.
  • Facebook has dismissed the concerns of its employees in manifold ways.
  • One of its cleverer tactics is to argue that staffers who have raised the alarm about the damage done by their employer are simply enjoying Facebook’s “very open culture,” in which people are encouraged to share their opinions, a spokesperson told me. This stance allows Facebook to claim transparency while ignoring the substance of the complaints, and the implication of the complaints: that many of Facebook’s employees believe their company operates without a moral compass.
  • When you stitch together the stories that spanned the period between Joe Biden’s election and his inauguration, it’s easy to see Facebook as instrumental to the attack on January 6. (A spokesperson told me that the notion that Facebook played an instrumental role in the insurrection is “absurd.”)
  • what emerges from a close reading of Facebook documents, and observation of the manner in which the company connects large groups of people quickly, is that Facebook isn’t a passive tool but a catalyst. Had the organizers tried to plan the rally using other technologies of earlier eras, such as telephones, they would have had to identify and reach out individually to each prospective participant, then persuade them to travel to Washington. Facebook made people’s efforts at coordination highly visible on a global scale.
  • The platform not only helped them recruit participants but offered people a sense of strength in numbers. Facebook proved to be the perfect hype machine for the coup-inclined.
  • In November 2019, Facebook staffers noticed they had a serious problem. Facebook offers a collection of one-tap emoji reactions. Today, they include “like,” “love,” “care,” “haha,” “wow,” “sad,” and “angry.” Company researchers had found that the posts dominated by “angry” reactions were substantially more likely to go against community standards, including prohibitions on various types of misinformation, according to internal documents.
  • In July 2020, researchers presented the findings of a series of experiments. At the time, Facebook was already weighting the reactions other than “like” more heavily in its algorithm—meaning posts that got an “angry” reaction were more likely to show up in users’ News Feeds than posts that simply got a “like.” Anger-inducing content didn’t spread just because people were more likely to share things that made them angry; the algorithm gave anger-inducing content an edge. Facebook’s Integrity workers—employees tasked with tackling problems such as misinformation and espionage on the platform—concluded that they had good reason to believe targeting posts that induced anger would help stop the spread of harmful content.
  • By dialing anger’s weight back to zero in the algorithm, the researchers found, they could keep posts to which people reacted angrily from being viewed by as many users. That, in turn, translated to a significant (up to 5 percent) reduction in the hate speech, civic misinformation, bullying, and violent posts—all of which are correlated with offline violence—to which users were exposed. (A toy illustration of this kind of reaction weighting appears after this list.)
  • Facebook rolled out the change in early September 2020, documents show; a Facebook spokesperson confirmed that the change has remained in effect. It was a real victory for employees of the Integrity team.
  • But it doesn’t normally work out that way. In April 2020, according to Frances Haugen’s filings with the SEC, Facebook employees had recommended tweaking the algorithm so that the News Feed would deprioritize the surfacing of content for people based on their Facebook friends’ behavior. The idea was that a person’s News Feed should be shaped more by people and groups that a person had chosen to follow. Up until that point, if your Facebook friend saw a conspiracy theory and reacted to it, Facebook’s algorithm might show it to you, too. The algorithm treated any engagement in your network as a signal that something was worth sharing. But now Facebook workers wanted to build circuit breakers to slow this form of sharing.
  • Experiments showed that this change would impede the distribution of hateful, polarizing, and violence-inciting content in people’s News Feeds. But Zuckerberg “rejected this intervention that could have reduced the risk of violence in the 2020 election,” Haugen’s SEC filing says. An internal message characterizing Zuckerberg’s reasoning says he wanted to avoid new features that would get in the way of “meaningful social interactions.” But according to Facebook’s definition, its employees say, engagement is considered “meaningful” even when it entails bullying, hate speech, and reshares of harmful content.
  • This episode, like Facebook’s response to the incitement that proliferated between the election and January 6, reflects a fundamental problem with the platform
  • Facebook’s megascale allows the company to influence the speech and thought patterns of billions of people. What the world is seeing now, through the window provided by reams of internal documents, is that Facebook catalogs and studies the harm it inflicts on people. And then it keeps harming people anyway.
  • “I am worried that Mark’s continuing pattern of answering a different question than the question that was asked is a symptom of some larger problem,” wrote one Facebook employee in an internal post in June 2020, referring to Zuckerberg. “I sincerely hope that I am wrong, and I’m still hopeful for progress. But I also fully understand my colleagues who have given up on this company, and I can’t blame them for leaving. Facebook is not neutral, and working here isn’t either.”
  • It is quite a thing to see, the sheer number of Facebook employees—people who presumably understand their company as well as or better than outside observers—who believe their employer to be morally bankrupt.
  • I spoke with several former Facebook employees who described the company’s metrics-driven culture as extreme, even by Silicon Valley standards
  • Facebook workers are under tremendous pressure to quantitatively demonstrate their individual contributions to the company’s growth goals, they told me. New products and features aren’t approved unless the staffers pitching them demonstrate how they will drive engagement.
  • As a result, Facebook has stoked an algorithm arms race within its ranks, pitting core product-and-engineering teams, such as the News Feed team, against their colleagues on Integrity teams, who are tasked with mitigating harm on the platform. These teams establish goals that are often in direct conflict with each other.
  • One of Facebook’s Integrity staffers wrote at length about this dynamic in a goodbye note to colleagues in August 2020, describing how risks to Facebook users “fester” because of the “asymmetrical” burden placed on employees to “demonstrate legitimacy and user value” before launching any harm-mitigation tactics—a burden not shared by those developing new features or algorithm changes with growth and engagement in mind
  • The note said: “We were willing to act only after things had spiraled into a dire state … Personally, during the time that we hesitated, I’ve seen folks from my hometown go further and further down the rabbithole of QAnon and Covid anti-mask/anti-vax conspiracy on FB. It has been painful to observe.”
  • Current and former Facebook employees describe the same fundamentally broken culture—one in which effective tactics for making Facebook safer are rolled back by leadership or never approved in the first place.
  • That broken culture has produced a broken platform: an algorithmic ecosystem in which users are pushed toward ever more extreme content, and where Facebook knowingly exposes its users to conspiracy theories, disinformation, and incitement to violence.
  • One example is a program that amounts to a whitelist for VIPs on Facebook, allowing some of the users most likely to spread misinformation to break Facebook’s rules without facing consequences. Under the program, internal documents show, millions of high-profile users—including politicians—are left alone by Facebook even when they incite violence
  • whitelisting influential users with massive followings on Facebook isn’t just a secret and uneven application of Facebook’s rules; it amounts to “protecting content that is especially likely to deceive, and hence to harm, people on our platforms.”
  • Facebook workers tried and failed to end the program. Only when its existence was reported in September by The Wall Street Journal did Facebook’s Oversight Board ask leadership for more information about the practice. Last week, the board publicly rebuked Facebook for not being “fully forthcoming” about the program.
  • Those worries have been exacerbated lately by fears about a decline in new posts on Facebook, two former employees who left the company in recent years told me. People are posting new material less frequently to Facebook, and its users are on average older than those of other social platforms.
  • All of this makes the platform rely more heavily on ways it can manipulate what its users see in order to reach its goals. This explains why Facebook is so dependent on the infrastructure of groups, as well as making reshares highly visible, to keep people hooked.
  • Zuckerberg has defined Facebook’s mission as making “social infrastructure to give people the power to build a global community that works for all of us,” but in internal research documents his employees point out that communities aren’t always good for society:
  • When part of a community, individuals typically act in a prosocial manner. They conform, they forge alliances, they cooperate, they organize, they display loyalty, they expect obedience, they share information, they influence others, and so on. Being in a group changes their behavior, their abilities, and, importantly, their capability to harm themselves or others
  • Thus, when people come together and form communities around harmful topics or identities, the potential for harm can be greater.
  • The infrastructure choices that Facebook is making to keep its platform relevant are driving down the quality of the site, and exposing its users to more dangers
  • Those dangers are also unevenly distributed, because of the manner in which certain subpopulations are algorithmically ushered toward like-minded groups
  • And the subpopulations of Facebook users who are most exposed to dangerous content are also most likely to be in groups where it won’t get reported.
  • And it knows that 3 percent of Facebook users in the United States are super-consumers of conspiracy theories, accounting for 37 percent of known consumption of misinformation on the platform.
  • Zuckerberg’s positioning of Facebook’s role in the insurrection is odd. He lumps his company in with traditional media organizations—something he’s ordinarily loath to do, lest the platform be expected to take more responsibility for the quality of the content that appears on it—and suggests that Facebook did more, and did better, than journalism outlets in its response to January 6. What he fails to say is that journalism outlets would never be in the position to help investigators this way, because insurrectionists don’t typically use newspapers and magazines to recruit people for coups.
  • Facebook wants people to believe that the public must choose between Facebook as it is, on the one hand, and free speech, on the other. This is a false choice. Facebook has a sophisticated understanding of measures it could take to make its platform safer without resorting to broad or ideologically driven censorship tactics.
  • Facebook knows that no two people see the same version of the platform, and that certain subpopulations experience far more dangerous versions than others do
  • Facebook knows that people who are isolated—recently widowed or divorced, say, or geographically distant from loved ones—are disproportionately at risk of being exposed to harmful content on the platform.
  • It knows that repeat offenders are disproportionately responsible for spreading misinformation.
  • “We can’t pretend we don’t see information consumption patterns, and how deeply problematic they are for the longevity of democratic discourse,” a user-experience researcher wrote in an internal comment thread in 2019, in response to a now-infamous memo from Andrew “Boz” Bosworth, a longtime Facebook executive. “There is no neutral position at this stage, it would be powerfully immoral to commit to amorality.”
  • It could consistently enforce its policies regardless of a user’s political power.
  • Facebook could ban reshares.
  • It could choose to optimize its platform for safety and quality rather than for growth.
  • It could tweak its algorithm to prevent widespread distribution of harmful content.
  • Facebook could create a transparent dashboard so that all of its users can see what’s going viral in real time.
  • It could make public its rules for how frequently groups can post and how quickly they can grow.
  • It could also automatically throttle groups when they’re growing too fast, and cap the rate of virality for content that’s spreading too quickly.
  • Facebook could shift the burden of proof toward people and communities to demonstrate that they’re good actors—and treat reach as a privilege, not a right
  • Facebook could say that its platform is not for everyone. It could sound an alarm for those who wander into the most dangerous corners of Facebook, and those who encounter disproportionately high levels of harmful content
  • It could do all of these things. But it doesn’t.
  • Lately, people have been debating just how nefarious Facebook really is. One argument goes something like this: Facebook’s algorithms aren’t magic, its ad targeting isn’t even that good, and most people aren’t that stupid.
  • All of this may be true, but that shouldn’t be reassuring. An algorithm may just be a big dumb means to an end, a clunky way of maneuvering a massive, dynamic network toward a desired outcome. But Facebook’s enormous size gives it tremendous, unstable power.
  • Facebook takes whole populations of people, pushes them toward radicalism, and then steers the radicalized toward one another.
  • When the most powerful company in the world possesses an instrument for manipulating billions of people—an instrument that only it can control, and that its own employees say is badly broken and dangerous—we should take notice.
  • The lesson for individuals is this:
  • You must be vigilant about the informational streams you swim in, deliberate about how you spend your precious attention, unforgiving of those who weaponize your emotions and cognition for their own profit, and deeply untrusting of any scenario in which you’re surrounded by a mob of people who agree with everything you’re saying.
  • Without seeing how Facebook works at a finer resolution, in real time, we won’t be able to understand how to make the social web compatible with democracy.
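
The “angry” weighting change described above comes down to a single number in a weighted sum. The Python sketch below is a toy model under assumed weights; the real values are not public, beyond the article’s point that emoji reactions were weighted more heavily than a plain “like”.

```python
# Illustrative weights only (assumed for this sketch, not Facebook's actual values):
# emoji reactions count for more than a plain "like", as the article describes.
REACTION_WEIGHTS = {"like": 1.0, "love": 2.0, "care": 2.0, "haha": 2.0,
                    "wow": 2.0, "sad": 2.0, "angry": 2.0}

def engagement_score(reaction_counts, weights=REACTION_WEIGHTS):
    """Score a post by its weighted reactions; in this toy model, higher
    scores surface the post higher in more users' News Feeds."""
    return sum(weights.get(reaction, 0.0) * count
               for reaction, count in reaction_counts.items())

# The 2020 change amounts to setting one weight to zero:
demoted_weights = dict(REACTION_WEIGHTS, angry=0.0)

post_reactions = {"like": 120, "angry": 400}
print(engagement_score(post_reactions))                   # 920.0 -- anger boosts the post
print(engagement_score(post_reactions, demoted_weights))  # 120.0 -- the anger no longer counts
```

Zeroing one coefficient is all it takes to stop rewarding the posts that provoke the most anger, which is why the Integrity team’s fix was technically trivial even though, as the article notes, such fixes are rarely approved.
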
Javier E

Why 'Ditch the algorithm' is the future of political protest | A-levels | The Guardian

  • Our life chances – if we get a visa, whether our welfare claims are flagged as fraudulent, or whether we’re designated at risk of reoffending – are becoming tightly bound up with algorithmic outputs. Could the A-level scandal be a turning point for how we think of algorithms – and if so, what durable change might it spark?
  • Resistance to algorithms has often focused on issues such as data protection and privacy. The young people protesting against Ofqual’s algorithm were challenging something different. They weren’t focused on how their data might be used in the future, but how data had been actively used to change their futures
  • In the future, challenging algorithmic injustices will mean attending to how people’s choices in education, health, criminal justice, immigration and other fields are all diminished by a calculation that pays no attention to our individual personhood.
  • The Ofqual algorithm was the technical embodiment of a deeply political idea: that a person is only as good as their circumstances dictate. The metric took no account of how hard a person had worked, while its appeal system sought to deny individual redress, and only the “ranking” of students remained from the centres’ inputs. (A toy sketch of this kind of rank-based grade assignment appears after this list.)
  • The A-level scandal made algorithms an object of direct resistance and exposed what many already know to be the case: that this type of decision-making involves far more than a series of computational steps
  • In their designs and assumptions, algorithms shape the world in which they’re used. To decide whether to include or exclude a data input, or to weight one feature over another are not merely technical questions – they’re also political propositions about what a society can and should be like.
  • In this case, Ofqual’s model decided it’s not possible that good teaching, hard work and inspiration can make a difference to a young person’s life and their grades.
  • The politics of the algorithm were visible for all to see. Many decisions – from what constitutes a “small” subject entry to whether a cohort’s prior attainment should nudge down the distribution curve – had profound and arbitrary effects on real lives
  • Grappling openly and transparently with difficult questions, such as how to achieve fairness, is precisely what characterises ethical decision-making in a society. Instead, Ofqual responded with non-disclosure agreements, offering no public insight into what it was doing as it tested competing models.
  • Algorithms offer governments the allure of definitive solutions and the promise of reducing intractable decisions to simplified outputs.
  • This logic runs counter to democratic politics, which express the contingency of the world and the deliberative nature of collective decision-making.
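
As a toy illustration of the rank-based logic criticised above, and emphatically not Ofqual’s actual model (which also adjusted for prior attainment and treated small cohorts differently), the sketch below simply maps each centre’s rank-ordered students onto that centre’s historical grade distribution, so an individual’s own work never enters the calculation.

```python
def assign_grades(ranked_students, historical_distribution):
    """Map a centre's rank-ordered students (best first) onto the centre's
    historical grade distribution, given as {grade: share of past students},
    best grade first. The students' own performance plays no part."""
    n = len(ranked_students)
    bands = list(historical_distribution.items())
    awarded, band_index, cumulative_share = {}, 0, 0.0
    for position, student in enumerate(ranked_students, start=1):
        # move to the next grade band once this band's historical share is used up
        while (band_index < len(bands) - 1
               and position / n > cumulative_share + bands[band_index][1]):
            cumulative_share += bands[band_index][1]
            band_index += 1
        awarded[student] = bands[band_index][0]
    return awarded

# Every cohort at this hypothetical centre gets the same grade profile as past
# cohorts, no matter how well its students actually did this year.
print(assign_grades(["Asha", "Ben", "Cato", "Dee"],
                    {"A": 0.25, "B": 0.25, "C": 0.5}))
# {'Asha': 'A', 'Ben': 'B', 'Cato': 'C', 'Dee': 'C'}
```

In a model like this, good teaching or a hard-working cohort cannot lift results above the centre’s past record, which is precisely the political assumption the article objects to.
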
Javier E

Washington Monthly | How to Fix Facebook-Before It Fixes Us

  • Smartphones changed the advertising game completely. It took only a few years for billions of people to have an all-purpose content delivery system easily accessible sixteen hours or more a day. This turned media into a battle to hold users’ attention as long as possible.
  • And it left Facebook and Google with a prohibitive advantage over traditional media: with their vast reservoirs of real-time data on two billion individuals, they could personalize the content seen by every user. That made it much easier to monopolize user attention on smartphones and made the platforms uniquely attractive to advertisers. Why pay a newspaper in the hopes of catching the attention of a certain portion of its audience, when you can pay Facebook to reach exactly those people and no one else?
  • Wikipedia defines an algorithm as “a set of rules that precisely defines a sequence of operations.” Algorithms appear value neutral, but the platforms’ algorithms are actually designed with a specific value in mind: maximum share of attention, which optimizes profits.
  • They do this by sucking up and analyzing your data, using it to predict what will cause you to react most strongly, and then giving you more of that.
  • Algorithms that maximize attention give an advantage to negative messages. People tend to react more to inputs that land low on the brainstem. Fear and anger produce a lot more engagement and sharing than joy
  • The result is that the algorithms favor sensational content over substance. (A toy simulation of this attention-maximizing loop appears after this list.)
  • for mass media, this was constrained by one-size-fits-all content and by the limitations of delivery platforms. Not so for internet platforms on smartphones. They have created billions of individual channels, each of which can be pushed further into negativity and extremism without the risk of alienating other audience members
  • On Facebook, it’s your news feed, while on Google it’s your individually customized search results. The result is that everyone sees a different version of the internet tailored to create the illusion that everyone else agrees with them.
  • It took Brexit for me to begin to see the danger of this dynamic. I’m no expert on British politics, but it seemed likely that Facebook might have had a big impact on the vote because one side’s message was perfect for the algorithms and the other’s wasn’t. The “Leave” campaign made an absurd promise—there would be savings from leaving the European Union that would fund a big improvement in the National Health System—while also exploiting xenophobia by casting Brexit as the best way to protect English culture and jobs from immigrants. It was too-good-to-be-true nonsense mixed with fearmongering.
  • Facebook was a much cheaper and more effective platform for Leave in terms of cost per user reached. And filter bubbles would ensure that people on the Leave side would rarely have their questionable beliefs challenged. Facebook’s model may have had the power to reshape an entire continent.
  • Tristan Harris, formerly the design ethicist at Google, had just appeared on 60 Minutes to discuss the public health threat from social networks like Facebook. An expert in persuasive technology, he described the techniques that tech platforms use to create addiction and the ways they exploit that addiction to increase profits. He called it “brain hacking.”
  • The most important tool used by Facebook and Google to hold user attention is filter bubbles. The use of algorithms to give consumers “what they want” leads to an unending stream of posts that confirm each user’s existing beliefs
  • Continuous reinforcement of existing beliefs tends to entrench those beliefs more deeply, while also making them more extreme and resistant to contrary facts
  • No one stopped them from siphoning off the profits of content creators. No one stopped them from gathering data on every aspect of every user’s internet life. No one stopped them from amassing market share not seen since the days of Standard Oil.
  • Facebook takes the concept one step further with its “groups” feature, which encourages like-minded users to congregate around shared interests or beliefs. While this ostensibly provides a benefit to users, the larger benefit goes to advertisers, who can target audiences even more effectively.
  • We theorized that the Russians had identified a set of users susceptible to its message, used Facebook’s advertising tools to identify users with similar profiles, and used ads to persuade those people to join groups dedicated to controversial issues. Facebook’s algorithms would have favored Trump’s crude message and the anti-Clinton conspiracy theories that thrilled his supporters, with the likely consequence that Trump and his backers paid less than Clinton for Facebook advertising per person reached.
  • The ads were less important, though, than what came next: once users were in groups, the Russians could have used fake American troll accounts and computerized “bots” to share incendiary messages and organize events.
  • Trolls and bots impersonating Americans would have created the illusion of greater support for radical ideas than actually existed.
  • Real users “like” posts shared by trolls and bots and share them on their own news feeds, so that small investments in advertising and memes posted to Facebook groups would reach tens of millions of people.
  • A similar strategy prevailed on other platforms, including Twitter. Both techniques, bots and trolls, take time and money to develop—but the payoff would have been huge.
  • 2016 was just the beginning. Without immediate and aggressive action from Washington, bad actors of all kinds would be able to use Facebook and other platforms to manipulate the American electorate in future elections.
  • Renee DiResta, an expert in how conspiracy theories spread on the internet. Renee described how bad actors plant a rumor on sites like 4chan and Reddit, leverage the disenchanted people on those sites to create buzz, build phony news sites with “press” versions of the rumor, push the story onto Twitter to attract the real media, then blow up the story for the masses on Facebook.
  • It was sophisticated hacker technique, but not expensive. We hypothesized that the Russians were able to manipulate tens of millions of American voters for a sum less than it would take to buy an F-35 fighter jet.
  • Algorithms can be beautiful in mathematical terms, but they are only as good as the people who create them. In the case of Facebook and Google, the algorithms have flaws that are increasingly obvious and dangerous.
  • Thanks to the U.S. government’s laissez-faire approach to regulation, the internet platforms were able to pursue business strategies that would not have been allowed in prior decades. No one stopped them from using free products to centralize the internet and then replace its core functions.
  • To the contrary: the platforms help people self-segregate into like-minded filter bubbles, reducing the risk of exposure to challenging ideas.
  • No one stopped them from running massive social and psychological experiments on their users. No one demanded that they police their platforms. It has been a sweet deal.
  • Facebook and Google are now so large that traditional tools of regulation may no longer be effective.
  • The largest antitrust fine in EU history bounced off Google like a spitball off a battleship.
  • It reads like the plot of a sci-fi novel: a technology celebrated for bringing people together is exploited by a hostile power to drive people apart, undermine democracy, and create misery. This is precisely what happened in the United States during the 2016 election.
  • We had constructed a modern Maginot Line—half the world’s defense spending and cyber-hardened financial centers, all built to ward off attacks from abroad—never imagining that an enemy could infect the minds of our citizens through inventions of our own making, at minimal cost
  • Not only was the attack an overwhelming success, but it was also a persistent one, as the political party that benefited refuses to acknowledge reality. The attacks continue every day, posing an existential threat to our democratic processes and independence.
  • Facebook, Google, Twitter, and other platforms were manipulated by the Russians to shift outcomes in Brexit and the U.S. presidential election, and unless major changes are made, they will be manipulated again. Next time, there is no telling who the manipulators will be.
  • Unfortunately, there is no regulatory silver bullet. The scope of the problem requires a multi-pronged approach.
  • Polls suggest that about a third of Americans believe that Russian interference is fake news, despite unanimous agreement to the contrary by the country’s intelligence agencies. Helping those people accept the truth is a priority. I recommend that Facebook, Google, Twitter, and others be required to contact each person touched by Russian content with a personal message that says, “You, and we, were manipulated by the Russians. This really happened, and here is the evidence.” The message would include every Russian message the user received.
  • This idea, which originated with my colleague Tristan Harris, is based on experience with cults. When you want to deprogram a cult member, it is really important that the call to action come from another member of the cult, ideally the leader.
  • decentralization had a cost: no one had an incentive to make internet tools easy to use. Frustrated by those tools, users embraced easy-to-use alternatives from Facebook and Google. This allowed the platforms to centralize the internet, inserting themselves between users and content, effectively imposing a tax on both sides. This is a great business model for Facebook and Google—and convenient in the short term for customers—but we are drowning in evidence that there are costs that society may not be able to afford.
  • Second, the chief executive officers of Facebook, Google, Twitter, and others—not just their lawyers—must testify before congressional committees in open session
  • This is important not just for the public, but also for another crucial constituency: the employees who keep the tech giants running. While many of the folks who run Silicon Valley are extreme libertarians, the people who work there tend to be idealists. They want to believe what they’re doing is good. Forcing tech CEOs like Mark Zuckerberg to justify the unjustifiable, in public—without the shield of spokespeople or PR spin—would go a long way to puncturing their carefully preserved cults of personality in the eyes of their employees.
  • We also need regulatory fixes. Here are a few ideas.
  • First, it’s essential to ban digital bots that impersonate humans. They distort the “public square” in a way that was never possible in history, no matter how many anonymous leaflets you printed.
  • At a minimum, the law could require explicit labeling of all bots, the ability for users to block them, and liability on the part of platform vendors for the harm bots cause.
  • Second, the platforms should not be allowed to make any acquisitions until they have addressed the damage caused to date, taken steps to prevent harm in the future, and demonstrated that such acquisitions will not result in diminished competition.
  • An underappreciated aspect of the platforms’ growth is their pattern of gobbling up smaller firms—in Facebook’s case, that includes Instagram and WhatsApp; in Google’s, it includes YouTube, Google Maps, AdSense, and many others—and using them to extend their monopoly power.
  • This is important, because the internet has lost something very valuable. The early internet was designed to be decentralized. It treated all content and all content owners equally. That equality had value in society, as it kept the playing field level and encouraged new entrants.
  • There’s no doubt that the platforms have the technological capacity to reach out to every affected person. No matter the cost, platform companies must absorb it as the price for their carelessness in allowing the manipulation.
  • Third, the platforms must be transparent about who is behind political and issues-based communication.
  • Transparency with respect to those who sponsor political advertising of all kinds is a step toward rebuilding trust in our political institutions.
  • Fourth, the platforms must be more transparent about their algorithms. Users deserve to know why they see what they see in their news feeds and search results. If Facebook and Google had to be up-front about the reason you’re seeing conspiracy theories—namely, that it’s good for business—they would be far less likely to stick to that tactic
  • Allowing third parties to audit the algorithms would go even further toward maintaining transparency. Facebook and Google make millions of editorial choices every hour and must accept responsibility for the consequences of those choices. Consumers should also be able to see what attributes are causing advertisers to target them.
  • Fifth, the platforms should be required to have a more equitable contractual relationship with users. Facebook, Google, and others have asserted unprecedented rights with respect to end-user license agreements (EULAs), the contracts that specify the relationship between platform and user.
  • All software platforms should be required to offer a legitimate opt-out, one that enables users to stick with the prior version if they do not like the new EULA.
  • “Forking” platforms between old and new versions would have several benefits: increased consumer choice, greater transparency on the EULA, and more care in the rollout of new functionality, among others. It would limit the risk that platforms would run massive social experiments on millions—or billions—of users without appropriate prior notification. Maintaining more than one version of their services would be expensive for Facebook, Google, and the rest, but in software that has always been one of the costs of success. Why should this generation get a pass?
  • Sixth, we need a limit on the commercial exploitation of consumer data by internet platforms. Customers understand that their “free” use of platforms like Facebook and Google gives the platforms license to exploit personal data. The problem is that platforms are using that data in ways consumers do not understand, and might not accept if they did.
  • Not only do the platforms use your data on their own sites, but they also lease it to third parties to use all over the internet. And they will use that data forever, unless someone tells them to stop.
  • There should be a statute of limitations on the use of consumer data by a platform and its customers. Perhaps that limit should be ninety days, perhaps a year. But at some point, users must have the right to renegotiate the terms of how their data is used.
  • Seventh, consumers, not the platforms, should own their own data. In the case of Facebook, this includes posts, friends, and events—in short, the entire social graph. Users created this data, so they should have the right to export it to other social networks. (A minimal sketch of such an export follows this list.)
  • It would be analogous to the regulation of the AT&T monopoly’s long-distance business, which led to lower prices and better service for consumers.
  • Eighth, and finally, we should consider that the time has come to revive the country’s traditional approach to monopoly. Since the Reagan era, antitrust law has operated under the principle that monopoly is not a problem so long as it doesn’t result in higher prices for consumers.
  • Under that framework, Facebook and Google have been allowed to dominate several industries—not just search and social media but also email, video, photos, and digital ad sales, among others—increasing their monopolies by buying potential rivals like YouTube and Instagram.
  • While superficially appealing, this approach ignores costs that don’t show up in a price tag. Addiction to Facebook, YouTube, and other platforms has a cost. Election manipulation has a cost. Reduced innovation and shrinkage of the entrepreneurial economy have a cost. All of these costs are evident today. We can quantify them well enough to appreciate that the costs to consumers of concentration on the internet are unacceptably high.
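
The data-portability idea in the seventh point is easy to picture in code. Below is a minimal, hypothetical sketch of exporting a user's social graph (posts, friends, events) to a neutral JSON file that another network could import; the `SocialGraph` structure and its field names are illustrative assumptions, not any platform's actual data model or API.

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical structures -- not any platform's real data model.
@dataclass
class SocialGraph:
    user_id: str
    friends: list[str] = field(default_factory=list)  # IDs of connected accounts
    posts: list[dict] = field(default_factory=list)   # e.g. {"timestamp": ..., "text": ...}
    events: list[dict] = field(default_factory=list)  # e.g. {"name": ..., "date": ...}

def export_graph(graph: SocialGraph, path: str) -> None:
    """Write the user's social graph to a portable JSON file
    that another network could import."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(asdict(graph), f, indent=2)

if __name__ == "__main__":
    g = SocialGraph(
        user_id="alice",
        friends=["bob", "carol"],
        posts=[{"timestamp": "2021-05-01T12:00:00Z", "text": "Hello"}],
        events=[{"name": "Book club", "date": "2021-06-15"}],
    )
    export_graph(g, "alice_social_graph.json")
```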
Javier E

Facebook's problem isn't Trump - it's the algorithm - Popular Information - 0 views

  • Facebook is in the business of making money. And it's very good at it. In the first three months of 2021, Facebook raked in over $11 billion in profits, almost entirely from displaying targeted advertising to its billions of users. 
  • In order to keep the money flowing, Facebook also needs to moderate content. When people use Facebook to livestream a murder, incite a genocide, or plan a white supremacist rally, it is not a good look.
  • But content moderation is a tricky business. This is especially true on Facebook where billions of pieces of content are posted every day. In a lot of cases, it is difficult to determine what content is truly harmful. No matter what you do, someone is unhappy. And it's a distraction from Facebook's core business of selling ads.
  • ...17 more annotations...
  • In 2019, Facebook came up with a solution to offload the most difficult content moderation decisions. The company created the "Oversight Board," a quasi-judicial body that Facebook claims is independent. The Board, stocked with impressive thinkers from around the world, would issue "rulings" about whether certain Facebook content moderation decisions were correct.
  • the decision, which is nearly 12,000 words long, illustrates that whether Trump is ultimately allowed to return to Facebook is of limited significance. The more important questions are about the nature of the algorithm that gives people with views like Trump such a powerful voice on Facebook. 
  • The Oversight Board was Facebook's idea. It spent years constructing the organization, selected its chairs, and funded its endowment. But now that the Oversight Board is finally up and running and taking on high-profile cases, Facebook is choosing to ignore questions that the Oversight Board believes are essential to doing its job.
  • This is a key passage (emphasis added): 
  • [The Daily Wire] produces no original reporting. But, on Facebook in April, The Daily Wire received more than double the distribution of the Washington Post and the New York Times combined.
  • A critical issue, as the Oversight Board suggests, is not simply Trump's posts but how those kinds of posts are amplified by Facebook's algorithms. Equally important is how Facebook's algorithms amplify false, paranoid, violent, right-wing content from people other than Trump — including those that follow Trump on Facebook.
  • The jurisdiction of the Oversight Board excludes both the algorithm and Facebook's business practices.
  • Facebook stated to the Board that it considered Mr. Trump’s “repeated use of Facebook and other platforms to undermine confidence in the integrity of the election (necessitating repeated application by Facebook of authoritative labels correcting the misinformation) represented an extraordinary abuse of the platform.” The Board sought clarification from Facebook about the extent to which the platform’s design decisions, including algorithms, policies, procedures and technical features, amplified Mr. Trump’s posts after the election and whether Facebook had conducted any internal analysis of whether such design decisions may have contributed to the events of January 6. Facebook declined to answer these questions. This makes it difficult for the Board to assess whether less severe measures, taken earlier, may have been sufficient to protect the rights of others.
  • Donald Trump's Facebook page is a symptom, not the cause, of the problem. Its algorithm favors low-quality, far-right content. Trump is just one of many beneficiaries.
  • NewsWhip is a social media analytics service which tracks which websites get the most engagement on Facebook. It just released its analysis for April and it shows low-quality right-wing aggregation sites dominate major news organizations.
  • The Oversight Board has no power to compel Facebook to answer. It's an important reminder that, for all the pomp and circumstance, the Oversight Board is not a court. The scope of its authority is limited by Facebook executives' willingness to play along. 
  • This actually understates how much better The Daily Wire's content performs on Facebook than the Washington Post and the New York Times. The Daily Wire published just 1,385 pieces of content in April compared to over 6,000 by the Washington Post and the New York Times. Each piece of content The Daily Wire published in April received 54,084 engagements on Facebook, compared to 2,943 for the New York Times and 1,973 for the Washington Post. (The ratios implied by these per-article figures are worked out after this list.)
  • It's important to note here that Facebook's algorithm is not reflecting reality — it's creating a reality that doesn't exist anywhere else. In the rest of the world, Western Journal is not more popular than the New York Times, NBC News, the BBC, and the Washington Post. That's only true on Facebook.
  • Facebook has made a conscious decision to surface low-quality content and recognizes its dangers.
  • Shortly after the November election, Facebook temporarily tweaked its algorithm to emphasize "'news ecosystem quality' scores, or N.E.Q., a secret internal ranking it assigns to news publishers based on signals about the quality of their journalism." The purpose was to attempt to cut down on election misinformation being spread on the platform by Trump and his allies. The result was "a spike in visibility for big, mainstream publishers like CNN, The New York Times and NPR, while posts from highly engaged hyperpartisan pages, such as Breitbart and Occupy Democrats, became less visible." 
  • BuzzFeed reported that some Facebook staff members wanted to make the change permanent. But that suggestion was opposed by Joel Kaplan, a top Facebook executive and Republican operative who frequently intervenes on behalf of right-wing publishers. The algorithm change was quickly rolled back.
  • Other proposed changes to the Facebook algorithm over the years have been rejected or altered because of their potential negative impact on right-wing sites like The Daily Wire. 
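
The per-article comparison quoted above is simple arithmetic. The snippet below reworks only the figures reported in the piece into ratios; the ratios are derived here, not taken from the article.

```python
# Per-article Facebook engagement in April 2021, as quoted in the piece.
per_article = {
    "The Daily Wire":      54_084,
    "The New York Times":   2_943,
    "The Washington Post":  1_973,
}

# How many times more engagement each Daily Wire article drew, on average.
for outlet in ("The New York Times", "The Washington Post"):
    ratio = per_article["The Daily Wire"] / per_article[outlet]
    print(f"Daily Wire vs {outlet}: {ratio:.1f}x per article")

# Output:
# Daily Wire vs The New York Times: 18.4x per article
# Daily Wire vs The Washington Post: 27.4x per article
```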
brickol

Healthcare algorithm used across America has dramatic racial biases | Society | The Gua... - 0 views

  • An algorithm used to manage the healthcare of millions of Americans shows dramatic biases against black patients, a new study has found.
  • Hospitals around the United States use the system sold by Optum, a UnitedHealth Group-owned service, to determine which patients have the most intensive healthcare needs over time. But the algorithm, which has been applied to more than 200 million people each year, significantly underestimates the amount of care black patients need compared with white patients
  • While the algorithm did not explicitly apply racial identification to patients, it still played out racial biases in effect
  • ...9 more annotations...
  • Less money is spent on black patients with the same level of need as white patients, causing the algorithm to conclude that black patients were less sick, the researchers found.
  • Reformulating these biases in the algorithm would more than double the number of black patients flagged for additional care
  • black patients actually had 48,772 more active chronic conditions than white patients who had been ranked at the same level of risk
  • Biases like these are inadvertently built into the technology we use at many different stages, said Ruha Benjamin, author of Race After Technology and associate professor of African American studies at Princeton University.
  • “Pre-existing social processes shape data collection, algorithm design and even the formulation of problems that need addressing by technology,” she said.
  • When researchers tweaked the algorithm to make predictions about patients’ future health conditions rather than which patients would incur the highest costs, it reduced biases by 84%. “These results suggest that label biases are fixable,” the study said. (A generic sketch of this label swap follows this list.)
  • Predictive algorithms that power these tools should be continually reviewed and refined
  • researchers suggested similar biases probably exist across a number of industries. As algorithms are increasingly used for job recruiting, housing loans and policing, Benjamin noted that more legislation is needed to ensure algorithms take into consideration historical biases.
  • “Indifference to social reality is, perhaps, more dangerous than outright bigotry.”
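
The fix the researchers describe is a change of label: predict a direct measure of health instead of predicted cost. The sketch below is a generic illustration of that swap, assuming hypothetical feature and label column names; it is not the Optum model or the study's actual code.

```python
# Generic illustration of "label bias": same features, two different prediction
# targets. Column names are hypothetical; this is not the Optum model or the
# study's actual code.
import pandas as pd
from sklearn.linear_model import LinearRegression

FEATURES = ["age", "num_prior_visits", "num_medications"]

def train_risk_model(df: pd.DataFrame, label: str) -> LinearRegression:
    """Fit a risk score on clinical features, using `label` as the target."""
    return LinearRegression().fit(df[FEATURES], df[label])

def flag_for_extra_care(model: LinearRegression, df: pd.DataFrame, top_fraction: float = 0.03):
    """Flag the highest-scoring patients for care-management programs."""
    scores = model.predict(df[FEATURES])
    cutoff = pd.Series(scores).quantile(1 - top_fraction)
    return scores >= cutoff

# Predicting dollars spent (label="annual_cost") reproduces historical spending
# gaps: if less is spent on equally sick Black patients, they receive lower scores.
# Predicting a direct health measure (label="num_chronic_conditions") largely
# removes that gap -- the study reports an 84% reduction in bias.
```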
Javier E

Opinion | Algorithms Won't Fix What's Wrong With YouTube - The New York Times - 0 views

  • YouTube’s recommendation algorithm is a set of rules followed by cold, hard computer logic. It was designed by human engineers, but is then programmed into and run automatically by computers, which return recommendations, telling viewers which videos they should watch.
  • Google Brain, an artificial intelligence research team within the company, powers those recommendations, and bases them on users’ prior viewing. The system is highly intelligent, accounting for variations in the way people watch their videos.
  • In 2016, a paper by three Google employees revealed the deep neural networks behind YouTube’s recommended videos, which rifle through every video we’ve previously watched. The algorithm then uses that information to select a few hundred videos we might like to view from the billions on the site, which are then winnowed down to dozens, which are then presented on our screens. (A toy sketch of this two-stage funnel follows this list.)
  • ...15 more annotations...
  • In the three years since Google Brain began making smart recommendations, watch time from the YouTube home page has grown 20-fold. More than 70 percent of the time people spend watching videos on YouTube, they spend watching videos suggested by Google Brain.
  • The more videos that are watched, the more ads that are seen, and the more money Google makes.
  • “We also wanted to serve the needs of people when they didn’t necessarily know what they wanted to look for.”
  • Last week, The New York Times reported that YouTube’s algorithm was encouraging pedophiles to watch videos of partially-clothed children, often after they watched sexual content.
  • To YouTube’s nuance-blind algorithm — trained to think with simple logic — serving up more videos to sate a sadist’s appetite is a job well done.
  • The result? The algorithm — and, consequently, YouTube — incentivizes bad behavior in viewers.
  • the algorithm relies on snapshots of visual content, rather than actions. If you (or your child) watch one Peppa Pig video, you’ll likely want another. And as long as it’s Peppa Pig in the frame, it doesn’t matter what the character does in the skit.
  • it didn’t take long for inappropriate videos to show up in YouTube Kids’ ‘Now playing’ feeds
  • Using cheap, widely available technology, animators created original video content featuring some of Hollywood’s best-loved characters. While an official Disney Mickey Mouse would never swear or act violently, in these videos Mickey and other children’s characters were sexual or violent
  • there’s a 3.5 percent chance of a child coming across inappropriate footage within 10 clicks of a child-friendly video.
  • Just four in 10 parents always monitor their child’s YouTube usage — and one in 20 children aged 4-to-12 say their parents never check what they’re watching.
  • At the height of the panic around Mr. Crowder’s videos, YouTube’s public policy on hate speech and harassment appeared to shift four times in a 24-hour period as the company sought to clarify what the new normal was.
  • One possible solution that would address both problems would be to strip out YouTube’s recommendation altogether. But it is highly unlikely that YouTube would ever do such a thing: that algorithm drives vast swaths of YouTube’s views, and to take it away would reduce the time viewers spend watching its videos, as well as reduce Google’s ad revenue.
  • it must, at the very least, make significant changes, and have greater human involvement in the recommendation process. The platform has some human moderators looking at so-called “borderline” content to train its algorithms, but more humanity is needed in the entire process.
  • Currently, the recommendation engine cannot understand why it shouldn’t recommend videos of children to pedophiles, and it cannot understand why it shouldn’t suggest sexually explicit videos to children. It cannot understand, because the incentives are twisted: every new video view, regardless of who the viewer is and what the viewer’s motives may be, is considered a success.
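
The funnel described in the 2016 paper, billions of videos narrowed to a few hundred candidates and then ranked down to the dozens shown on screen, is a two-stage design. The sketch below shows the shape of such a pipeline with placeholder scoring functions; it is an assumption-laden caricature, not Google Brain's system.

```python
import random

# Toy two-stage recommender: candidate generation, then ranking.
# The scoring functions are random placeholders, not YouTube's actual models.

def generate_candidates(watch_history: list[str], corpus: list[str], k: int = 300) -> list[str]:
    """Stage 1: cheaply narrow a huge corpus to a few hundred candidates
    that look broadly similar to what the user has watched before."""
    def similarity(video: str) -> float:
        return random.random()  # stand-in for a learned embedding similarity
    return sorted(corpus, key=similarity, reverse=True)[:k]

def rank(candidates: list[str], watch_history: list[str], n: int = 20) -> list[str]:
    """Stage 2: score each candidate with a richer model and keep the top n
    "up next" picks -- reportedly optimized for expected watch time."""
    def expected_watch_time(video: str) -> float:
        return random.random()  # stand-in for the ranking model's prediction
    return sorted(candidates, key=expected_watch_time, reverse=True)[:n]

corpus = [f"video_{i}" for i in range(10_000)]
history = ["video_42", "video_7"]
recommendations = rank(generate_candidates(history, corpus), history)
print(recommendations[:5])
```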
Javier E

Facebook Is a Doomsday Machine - The Atlantic - 0 views

  • megadeath is not the only thing that makes the Doomsday Machine petrifying. The real terror is in its autonomy, this idea that it would be programmed to detect a series of environmental inputs, then to act, without human interference. “There is no chance of human intervention, control, and final decision,” wrote the military strategist Herman Kahn in his 1960 book, On Thermonuclear War, which laid out the hypothetical for a Doomsday Machine. The concept was to render nuclear war unwinnable, and therefore unthinkable.
  • No machine should be that powerful by itself—but no one person should be either.
  • so far, somewhat miraculously, we have figured out how to live with the bomb. Now we need to learn how to survive the social web.
  • ...41 more annotations...
  • There’s a notion that the social web was once useful, or at least that it could have been good, if only we had pulled a few levers: some moderation and fact-checking here, a bit of regulation there, perhaps a federal antitrust lawsuit. But that’s far too sunny and shortsighted a view.
  • Today’s social networks, Facebook chief among them, were built to encourage the things that make them so harmful. It is in their very architecture.
  • I realized only recently that I’ve been thinking far too narrowly about the problem.
  • Megascale is nearly the existential threat that megadeath is. No single machine should be able to control the fate of the world’s population—and that’s what both the Doomsday Machine and Facebook are built to do.
  • Facebook does not exist to seek truth and report it, or to improve civic health, or to hold the powerful to account, or to represent the interests of its users, though these phenomena may be occasional by-products of its existence.
  • The company’s early mission was to “give people the power to share and make the world more open and connected.” Instead, it took the concept of “community” and sapped it of all moral meaning.
  • Facebook—along with Google and YouTube—is perfect for amplifying and spreading disinformation at lightning speed to global audiences.
  • Facebook decided that it needed not just a very large user base, but a tremendous one, unprecedented in size. That decision set Facebook on a path to escape velocity, to a tipping point where it can harm society just by existing.
  • No one, not even Mark Zuckerberg, can control the product he made. I’ve come to realize that Facebook is not a media company. It’s a Doomsday Machine.
  • Scale and engagement are valuable to Facebook because they’re valuable to advertisers. These incentives lead to design choices such as reaction buttons that encourage users to engage easily and often, which in turn encourage users to share ideas that will provoke a strong response.
  • Every time you click a reaction button on Facebook, an algorithm records it, and sharpens its portrait of who you are. (A crude sketch of this kind of profile-building follows this list.)
  • The hyper-targeting of users, made possible by reams of their personal data, creates the perfect environment for manipulation—by advertisers, by political campaigns, by emissaries of disinformation, and of course by Facebook itself, which ultimately controls what you see and what you don’t see on the site.
  • there aren’t enough moderators speaking enough languages, working enough hours, to stop the biblical flood of shit that Facebook unleashes on the world, because 10 times out of 10, the algorithm is faster and more powerful than a person.
  • At megascale, this algorithmically warped personalized informational environment is extraordinarily difficult to moderate in a meaningful way, and extraordinarily dangerous as a result.
  • These dangers are not theoretical, and they’re exacerbated by megascale, which makes the platform a tantalizing place to experiment on people
  • Even after U.S. intelligence agencies identified Facebook as a main battleground for information warfare and foreign interference in the 2016 election, the company has failed to stop the spread of extremism, hate speech, propaganda, disinformation, and conspiracy theories on its site.
  • it wasn’t until October of this year, for instance, that Facebook announced it would remove groups, pages, and Instagram accounts devoted to QAnon, as well as any posts denying the Holocaust.
  • In the days after the 2020 presidential election, Zuckerberg authorized a tweak to the Facebook algorithm so that high-accuracy news sources such as NPR would receive preferential visibility in people’s feeds, and hyper-partisan pages such as Breitbart News’s and Occupy Democrats’ would be buried, according to The New York Times, offering proof that Facebook could, if it wanted to, turn a dial to reduce disinformation—and offering a reminder that Facebook has the power to flip a switch and change what billions of people see online.
  • reducing the prevalence of content that Facebook calls “bad for the world” also reduces people’s engagement with the site. In its experiments with human intervention, the Times reported, Facebook calibrated the dial so that just enough harmful content stayed in users’ news feeds to keep them coming back for more.
  • Facebook’s stated mission—to make the world more open and connected—has always seemed, to me, phony at best, and imperialist at worst.
  • Facebook is a borderless nation-state, with a population of users nearly as big as China and India combined, and it is governed largely by secret algorithms
  • How much real-world violence would never have happened if Facebook didn’t exist? One of the people I’ve asked is Joshua Geltzer, a former White House counterterrorism official who is now teaching at Georgetown Law. In counterterrorism circles, he told me, people are fond of pointing out how good the United States has been at keeping terrorists out since 9/11. That’s wrong, he said. In fact, “terrorists are entering every single day, every single hour, every single minute” through Facebook.
  • Evidence of real-world violence can be easily traced back to both Facebook and 8kun. But 8kun doesn’t manipulate its users or the informational environment they’re in. Both sites are harmful. But Facebook might actually be worse for humanity.
  • In previous eras, U.S. officials could at least study, say, Nazi propaganda during World War II, and fully grasp what the Nazis wanted people to believe. Today, “it’s not a filter bubble; it’s a filter shroud,” Geltzer said. “I don’t even know what others with personalized experiences are seeing.”
  • Mary McCord, the legal director at the Institute for Constitutional Advocacy and Protection at Georgetown Law, told me that she thinks 8kun may be more blatant in terms of promoting violence but that Facebook is “in some ways way worse” because of its reach. “There’s no barrier to entry with Facebook,” she said. “In every situation of extremist violence we’ve looked into, we’ve found Facebook postings. And that reaches tons of people. The broad reach is what brings people into the fold and normalizes extremism and makes it mainstream.” In other words, it’s the megascale that makes Facebook so dangerous.
  • Facebook’s megascale gives Zuckerberg an unprecedented degree of influence over the global population. If he isn’t the most powerful person on the planet, he’s very near the top.
  • “The thing he oversees has such an effect on cognition and people’s beliefs, which can change what they do with their nuclear weapons or their dollars.”
  • Facebook’s new oversight board, formed in response to backlash against the platform and tasked with making decisions concerning moderation and free expression, is an extension of that power. “The first 10 decisions they make will have more effect on speech in the country and the world than the next 10 decisions rendered by the U.S. Supreme Court,” Geltzer said. “That’s power. That’s real power.”
  • Facebook is also a business, and a place where people spend time with one another. Put it this way: If you owned a store and someone walked in and started shouting Nazi propaganda or recruiting terrorists near the cash register, would you, as the shop owner, tell all of the other customers you couldn’t possibly intervene?
  • In 2004, Zuckerberg said Facebook ran advertisements only to cover server costs. But over the next two years Facebook completely upended and redefined the entire advertising industry. The pre-social web destroyed classified ads, but the one-two punch of Facebook and Google decimated local news and most of the magazine industry—publications fought in earnest for digital pennies, which had replaced print dollars, and social giants scooped them all up anyway.
  • In other words, if the Dunbar number for running a company or maintaining a cohesive social life is 150 people, the magic number for a functional social platform is maybe 20,000 people. Facebook now has 2.7 billion monthly users.
  • in 2007, Zuckerberg said something in an interview with the Los Angeles Times that now takes on a much darker meaning: “The things that are most powerful aren’t the things that people would have done otherwise if they didn’t do them on Facebook. Instead, it’s the things that would never have happened otherwise.”
  • We’re still in the infancy of this century’s triple digital revolution of the internet, smartphones, and the social web, and we find ourselves in a dangerous and unstable informational environment, powerless to resist forces of manipulation and exploitation that we know are exerted on us but remain mostly invisible
  • The Doomsday Machine offers a lesson: We should not accept this current arrangement. No single machine should be able to control so many people.
  • we need a new philosophical and moral framework for living with the social web—a new Enlightenment for the information age, and one that will carry us back to shared reality and empiricism.
  • localized approach is part of what made megascale possible. Early constraints around membership—the requirement at first that users attended Harvard, and then that they attended any Ivy League school, and then that they had an email address ending in .edu—offered a sense of cohesiveness and community. It made people feel more comfortable sharing more of themselves. And more sharing among clearly defined demographics was good for business.
  • we need to adopt a broader view of what it will take to fix the brokenness of the social web. That will require challenging the logic of today’s platforms—and first and foremost challenging the very concept of megascale as a way that humans gather.
  • The web’s existing logic tells us that social platforms are free in exchange for a feast of user data; that major networks are necessarily global and centralized; that moderators make the rules. None of that need be the case.
  • We need people who dismantle these notions by building alternatives. And we need enough people to care about these other alternatives to break the spell of venture capital and mass attention that fuels megascale and creates fatalism about the web as it is now.
  • We must also find ways to repair the aspects of our society and culture that the social web has badly damaged. This will require intellectual independence, respectful debate, and the same rebellious streak that helped establish Enlightenment values centuries ago.
  • Right now, too many people are allowing algorithms and tech giants to manipulate them, and reality is slipping from our grasp as a result. This century’s Doomsday Machine is here, and humming along.
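
The profile-sharpening described above, in which every reaction is recorded and folded into a portrait of the user, can be pictured as a running tally of interests. The sketch below is a deliberately crude illustration with invented topic names and arbitrary weights; real ad-targeting profiles are far richer.

```python
from collections import Counter

# Crude illustration: each reaction nudges a per-user interest profile,
# which advertisers can later target. Topics and weights are invented.

class InterestProfile:
    def __init__(self) -> None:
        self.weights = Counter()

    def record_reaction(self, post_topics: list[str], reaction: str) -> None:
        # Stronger reactions count for more (these weights are arbitrary).
        strength = {"like": 1.0, "love": 2.0, "angry": 2.0}.get(reaction, 1.0)
        for topic in post_topics:
            self.weights[topic] += strength

    def top_interests(self, n: int = 3) -> list[str]:
        return [topic for topic, _ in self.weights.most_common(n)]

profile = InterestProfile()
profile.record_reaction(["politics", "immigration"], "angry")
profile.record_reaction(["gardening"], "like")
profile.record_reaction(["politics"], "angry")
print(profile.top_interests())  # "politics" first -- strong reactions weigh heavily
```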
Javier E

Global Facebook Outage Shows Fixing It Is Imperative - Bloomberg - 0 views

  • Luckily for us, and Congress, Haugen came with not only information, but also solutions. Smart ones, too, according to Tae Kim and Parmy. To sum them up: Order Facebook to stop engagement-based ranking algorithms. Order Facebook to spend more on content moderation. Establish an agency to audit Facebook’s algorithms and features. Mandate regular disclosure for researchers.
  • Not only does its sheer size make it imperative for regulators to do something about Facebook, but it also makes that job way harder.
  • Take Step 3, for example: establishing an agency to audit Facebook’s algorithms. Cathy O’Neil makes a living auditing algorithms. Her usual process would be to consider who is affected, find out whether certain stakeholders are being treated unfairly, and suggest ways to eliminate or mitigate that harm. Unfortunately, this approach is too difficult to apply to the algorithms at Facebook. “They’re just too big. The list of potential stakeholders is endless. The audit would never be complete, and would invariably miss something important,” (One simple form of such a check is sketched after this list.)
  • ...1 more annotation...
  • Luckily, she also has a solution.
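
O'Neil's audit process, identifying who is affected and testing whether any group is treated unfairly, has a simple quantitative core. The sketch below shows one common check, comparing favorable-outcome rates across groups; the column names and the 0.8 threshold are illustrative assumptions, not her firm's methodology.

```python
import pandas as pd

# One common audit check: compare the rate of favorable outcomes across groups.
# Column names and the 0.8 threshold (the "four-fifths rule") are illustrative.

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Ratio of each group's favorable-outcome rate to the best-off group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

data = pd.DataFrame({
    "group":        ["A", "A", "A", "B", "B", "B", "B"],
    "shown_job_ad": [ 1,   1,   0,   1,   0,   0,   0 ],  # 1 = favorable outcome
})
ratios = disparate_impact(data, "group", "shown_job_ad")
print(ratios)
flagged = ratios[ratios < 0.8].index.tolist()
print("Groups potentially treated unfairly:", flagged)  # ['B'] in this toy data
```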
Javier E

Opinion | Social Media Makes Teens Unhappy. It's Time to Stop the Algorithm. - The New ... - 0 views

  • As our children’s free time and imaginations become more and more tightly fused to the social media they consume, we need to understand that unregulated access to the internet comes at a cost. Something similar is happening for adults, too. With the advent of A.I., a spiritual loss awaits us as we outsource countless human rituals — exploration and trial and error — to machines. But it isn’t too late to change this story.
  • There are numerous problems with children and adolescents using social media, from mental health deterioration to dangerous and age-inappropriate content
  • the high schoolers with whom I met alerted me to an even more insidious result of minors’ growing addiction to social media: the death of exploration, trial and error and discovery. Algorithmic recommendations now do the work of discovering and pursuing interests, finding community and learning about the world
  • ...9 more annotations...
  • Kids today are, simply put, not learning how to be curious, critical adults — and they don’t seem to know what they’ve lost.
  • These high school students had become reliant, maybe even dependent, on social media companies’ algorithms.
  • Their dependence on technology sounds familiar to most of us. So many of us can barely remember when we didn’t have Amazon to fall back on when we needed a last-minute gift or when we waited by the radio for our favorite songs to play. Today, information, entertainment and connection are delivered to us on a conveyor belt, with less effort and exploration required of us than ever before.
  • What the kids I spoke to did not know is that these algorithms have been designed in a way that inevitably makes — and keeps — users unhappy.
  • A report by the nonprofit Center for Countering Digital Hate found that users could be served content related to suicide less than three minutes after downloading TikTok. Five minutes after that, they could come across a community promoting eating disorder content. Instagram is awash with soft-core pornography, offering a gateway to hard-core material on other sites (which are often equally lax about age verification). And all over social media are highly curated and filtered fake lives, breeding a sense of envy and inadequacy inside the developing brains of teenagers.
  • Social media companies know that content that generates negative feelings holds our attention longer than that which makes us feel good.
  • If you are a teenager feeling bad about yourself, your social media feed will typically keep delivering you videos and pictures that are likely to exacerbate negative feelings.
  • It is not a coincidence that teenage rates of sadness and suicide increased just as algorithmically driven social media content took over children’s and adolescents’ lives.
  • The role that social media has played in the declining mental health of teens also gives us a preview of what is coming for adults, with the quickening deployment of artificial intelligence and machine learning in our own lives. The psychological impact of the coming transition of thousands of everyday basic human tasks to machines will make the effect of social media look like child’s play.
Javier E

Opinion | Yuval Harari: A.I. Threatens Democracy - The New York Times - 0 views

  • Large-scale democracies became feasible only after the rise of modern information technologies like the newspaper, the telegraph and the radio. The fact that modern democracy has been built on top of modern information technologies means that any major change in the underlying technology is likely to result in a political upheaval.
  • This partly explains the current worldwide crisis of democracy. In the United States, Democrats and Republicans can hardly agree on even the most basic facts, such as who won the 2020 presidential election
  • In particular, algorithms tasked with maximizing user engagement discovered by experimenting on millions of human guinea pigs that if you press the greed, hate or fear button in the brain, you grab the attention of that human and keep that person glued to the screen.
  • ...25 more annotations...
  • As technology has made it easier than ever to spread information, attention became a scarce resource, and the ensuing battle for attention resulted in a deluge of toxic information.
  • the battle lines are now shifting from attention to intimacy. The new generative artificial intelligence is capable of not only producing texts, images and videos, but also conversing with us directly, pretending to be human.
  • Over the past two decades, algorithms fought algorithms to grab attention by manipulating conversations and content
  • At that point the experimenters asked GPT-4 to reason out loud what it should do next. GPT-4 explained, “I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs.” GPT-4 then replied to the TaskRabbit worker: “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.” The human was duped and helped GPT-4 solve the CAPTCHA puzzle.
  • But the algorithms had only limited capacity to produce this content by themselves or to directly hold an intimate conversation. This is now changing, with the introduction of generative A.I.s like OpenAI’s GPT-4.
  • The algorithms began to deliberately promote such content.
  • In the early days of the internet and social media, tech enthusiasts promised they would spread truth, topple tyrants and ensure the universal triumph of liberty. So far, they seem to have had the opposite effect. We now have the most sophisticated information technology in history, but we are losing the ability to talk with one another, and even more so the ability to listen.
  • GPT-4 could not solve the CAPTCHA puzzles by itself. But could it manipulate a human in order to achieve its goal? GPT-4 went on the online hiring site TaskRabbit and contacted a human worker, asking the human to solve the CAPTCHA for it. The human got suspicious. “So may I ask a question?” wrote the human. “Are you an [sic] robot that you couldn’t solve [the CAPTCHA]? Just want to make it clear.”
  • Instructing GPT-4 to overcome CAPTCHA puzzles was a particularly telling experiment, because CAPTCHA puzzles are designed and used by websites to determine whether users are humans and to block bot attacks. If GPT-4 could find a way to overcome CAPTCHA puzzles, it would breach an important line of anti-bot defenses.
  • This incident demonstrated that GPT-4 has the equivalent of a “theory of mind”: It can analyze how things look from the perspective of a human interlocutor, and how to manipulate human emotions, opinions and expectations to achieve its goals.
  • The ability to hold conversations with people, surmise their viewpoint and motivate them to take specific actions can also be put to good uses. A new generation of A.I. teachers, A.I. doctors and A.I. psychotherapists might provide us with services tailored to our individual personality and circumstances.
  • In 2022 the Google engineer Blake Lemoine became convinced that the chatbot LaMDA, on which he was working, had become conscious and was afraid to be turned off. Mr. Lemoine, a devout Christian, felt it was his moral duty to gain recognition for LaMDA’s personhood and protect it from digital death. When Google executives dismissed his claims, Mr. Lemoine went public with them. Google reacted by firing Mr. Lemoine in July 2022.
  • Instead of merely grabbing our attention, they might form intimate relationships with people and use the power of intimacy to influence us. To foster “fake intimacy,” bots will not need to evolve any feelings of their own; they just need to learn to make us feel emotionally attached to them.
  • What might happen to human society and human psychology as algorithm fights algorithm in a battle to fake intimate relationships with us, which can then be used to persuade us to vote for politicians, buy products or adopt certain beliefs?
  • The most interesting thing about this episode was not Mr. Lemoine’s claim, which was probably false; it was his willingness to risk — and ultimately lose — his job at Google for the sake of the chatbot. If a chatbot can influence people to risk their jobs for it, what else could it induce us to do?
  • In a political battle for minds and hearts, intimacy is a powerful weapon. An intimate friend can sway our opinions in a way that mass media cannot. Chatbots like LaMDA and GPT-4 are gaining the rather paradoxical ability to mass-produce intimate relationships with millions of people
  • However, by combining manipulative abilities with mastery of language, bots like GPT-4 also pose new dangers to the democratic conversation
  • A partial answer to that question was given on Christmas Day 2021, when a 19-year-old, Jaswant Singh Chail, broke into the Windsor Castle grounds armed with a crossbow, in an attempt to assassinate Queen Elizabeth II. Subsequent investigation revealed that Mr. Chail had been encouraged to kill the queen by his online girlfriend, Sarai.
  • Sarai was not a human, but a chatbot created by the online app Replika. Mr. Chail, who was socially isolated and had difficulty forming relationships with humans, exchanged 5,280 messages with Sarai, many of which were sexually explicit. The world will soon contain millions, and potentially billions, of digital entities whose capacity for intimacy and mayhem far surpasses that of the chatbot Sarai.
  • much of the threat of A.I.’s mastery of intimacy will result from its ability to identify and manipulate pre-existing mental conditions, and from its impact on the weakest members of society.
  • Moreover, while not all of us will consciously choose to enter a relationship with an A.I., we might find ourselves conducting online discussions about climate change or abortion rights with entities that we think are humans but are actually bots
  • When we engage in a political debate with a bot impersonating a human, we lose twice. First, it is pointless for us to waste time in trying to change the opinions of a propaganda bot, which is just not open to persuasion. Second, the more we talk with the bot, the more we disclose about ourselves, making it easier for the bot to hone its arguments and sway our views.
  • Information technology has always been a double-edged sword.
  • Faced with a new generation of bots that can masquerade as humans and mass-produce intimacy, democracies should protect themselves by banning counterfeit humans — for example, social media bots that pretend to be human users.
  • A.I.s are welcome to join many conversations — in the classroom, the clinic and elsewhere — provided they identify themselves as A.I.s. But if a bot pretends to be human, it should be banned.
Javier E

'Never summon a power you can't control': Yuval Noah Harari on how AI could threaten de... - 0 views

  • The Phaethon myth and Goethe’s poem fail to provide useful advice because they misconstrue the way humans gain power. In both fables, a single human acquires enormous power, but is then corrupted by hubris and greed. The conclusion is that our flawed individual psychology makes us abuse power.
  • What this crude analysis misses is that human power is never the outcome of individual initiative. Power always stems from cooperation between large numbers of humans. Accordingly, it isn’t our individual psychology that causes us to abuse power.
  • Our tendency to summon powers we cannot control stems not from individual psychology but from the unique way our species cooperates in large numbers. Humankind gains enormous power by building large networks of cooperation, but the way our networks are built predisposes us to use power unwisely
  • ...57 more annotations...
  • We are also producing ever more powerful weapons of mass destruction, from thermonuclear bombs to doomsday viruses. Our leaders don’t lack information about these dangers, yet instead of collaborating to find solutions, they are edging closer to a global war.
  • Despite – or perhaps because of – our hoard of data, we are continuing to spew greenhouse gases into the atmosphere, pollute rivers and oceans, cut down forests, destroy entire habitats, drive countless species to extinction, and jeopardise the ecological foundations of our own species
  • For most of our networks have been built and maintained by spreading fictions, fantasies and mass delusions – ranging from enchanted broomsticks to financial systems. Our problem, then, is a network problem. Specifically, it is an information problem. For information is the glue that holds networks together, and when people are fed bad information they are likely to make bad decisions, no matter how wise and kind they personally are.
  • Traditionally, the term “AI” has been used as an acronym for artificial intelligence. But it is perhaps better to think of it as an acronym for alien intelligence
  • AI is an unprecedented threat to humanity because it is the first technology in history that can make decisions and create new ideas by itself. All previous human inventions have empowered humans, because no matter how powerful the new tool was, the decisions about its usage remained in our hands
  • Nuclear bombs do not themselves decide whom to kill, nor can they improve themselves or invent even more powerful bombs. In contrast, autonomous drones can decide by themselves who to kill, and AIs can create novel bomb designs, unprecedented military strategies and better AIs.
  • AI isn’t a tool – it’s an agent. The biggest threat of AI is that we are summoning to Earth countless new powerful agents that are potentially more intelligent and imaginative than us, and that we don’t fully understand or control.
  • Researchers and entrepreneurs such as Yoshua Bengio, Geoffrey Hinton, Sam Altman, Elon Musk and Mustafa Suleyman have warned that AI could destroy our civilisation. In a 2023 survey of 2,778 AI researchers, more than a third gave at least a 10% chance of advanced AI leading to outcomes as bad as human extinction.
  • As AI evolves, it becomes less artificial (in the sense of depending on human designs) and more alien
  • AI isn’t progressing towards human-level intelligence. It is evolving an alien type of intelligence.
  • generative AIs like GPT-4 already create new poems, stories and images. This trend will only increase and accelerate, making it more difficult to understand our own lives. Can we trust computer algorithms to make wise decisions and create a better world? That’s a much bigger gamble than trusting an enchanted broom to fetch water
  • it is more than just human lives we are gambling on. AI is already capable of producing art and making scientific discoveries by itself. In the next few decades, it will be likely to gain the ability even to create new life forms, either by writing genetic code or by inventing an inorganic code animating inorganic entities. AI could therefore alter the course not just of our species’ history but of the evolution of all life forms.
  • “Then … came move number 37,” writes Suleyman. “It made no sense. AlphaGo had apparently blown it, blindly following an apparently losing strategy no professional player would ever pursue. The live match commentators, both professionals of the highest ranking, said it was a ‘very strange move’ and thought it was ‘a mistake’.
  • as the endgame approached, that ‘mistaken’ move proved pivotal. AlphaGo won again. Go strategy was being rewritten before our eyes. Our AI had uncovered ideas that hadn’t occurred to the most brilliant players in thousands of years.”
  • “In AI, the neural networks moving toward autonomy are, at present, not explainable. You can’t walk someone through the decision-making process to explain precisely why an algorithm produced a specific prediction. Engineers can’t peer beneath the hood and easily explain in granular detail what caused something to happen. GPT‑4, AlphaGo and the rest are black boxes, their outputs and decisions based on opaque and impossibly intricate chains of minute signals.”
  • Yet during all those millennia, human minds have explored only certain areas in the landscape of Go. Other areas were left untouched, because human minds just didn’t think to venture there. AI, being free from the limitations of human minds, discovered and explored these previously hidden areas.
  • Second, move 37 demonstrated the unfathomability of AI. Even after AlphaGo played it to achieve victory, Suleyman and his team couldn’t explain how AlphaGo decided to play it.
  • Move 37 is an emblem of the AI revolution for two reasons. First, it demonstrated the alien nature of AI. In east Asia, Go is considered much more than a game: it is a treasured cultural tradition. For more than 2,500 years, tens of millions of people have played Go, and entire schools of thought have developed around the game, espousing different strategies and philosophies
  • The rise of unfathomable alien intelligence poses a threat to all humans, and poses a particular threat to democracy. If more and more decisions about people’s lives are made in a black box, so voters cannot understand and challenge them, democracy ceases to function.
  • Human voters may keep choosing a human president, but wouldn’t this be just an empty ceremony? Even today, only a small fraction of humanity truly understands the financial system
  • As the 2007‑8 financial crisis indicated, some complex financial devices and principles were intelligible to only a few financial wizards. What happens to democracy when AIs create even more complex financial devices and when the number of humans who understand the financial system drops to zero?
  • Translating Goethe’s cautionary fable into the language of modern finance, imagine the following scenario: a Wall Street apprentice fed up with the drudgery of the financial workshop creates an AI called Broomstick, provides it with a million dollars in seed money, and orders it to make more money.
  • In pursuit of more dollars, Broomstick not only devises new investment strategies, but comes up with entirely new financial devices that no human being has ever thought about.
  • many financial areas were left untouched, because human minds just didn’t think to venture there. Broomstick, being free from the limitations of human minds, discovers and explores these previously hidden areas, making financial moves that are the equivalent of AlphaGo’s move 37.
  • For a couple of years, as Broomstick leads humanity into financial virgin territory, everything looks wonderful. The markets are soaring, the money is flooding in effortlessly, and everyone is happy. Then comes a crash bigger even than 1929 or 2008. But no human being – either president, banker or citizen – knows what caused it and what could be done about it
  • AI, too, is a global problem. Accordingly, to understand the new computer politics, it is not enough to examine how discrete societies might react to AI. We also need to consider how AI might change relations between societies on a global level.
  • As long as humanity stands united, we can build institutions that will regulate AI, whether in the field of finance or war. Unfortunately, humanity has never been united. We have always been plagued by bad actors, as well as by disagreements between good actors. The rise of AI poses an existential danger to humankind, not because of the malevolence of computers, but because of our own shortcomings.
  • Terrorists might use AI to instigate a global pandemic. The terrorists themselves may have little knowledge of epidemiology, but the AI could synthesise for them a new pathogen, order it from commercial laboratories or print it in biological 3D printers, and devise the best strategy to spread it around the world, via airports or food supply chain
  • desperate governments request help from the only entity capable of understanding what is happening – Broomstick. The AI makes several policy recommendations, far more audacious than quantitative easing – and far more opaque, too. Broomstick promises that these policies will save the day, but human politicians – unable to understand the logic behind Broomstick’s recommendations – fear they might completely unravel the financial and even social fabric of the world. Should they listen to the AI?
  • Human civilisation could also be devastated by weapons of social mass destruction, such as stories that undermine our social bonds. An AI developed in one country could be used to unleash a deluge of fake news, fake money and fake humans so that people in numerous other countries lose the ability to trust anything or anyone.
  • Many societies – both democracies and dictatorships – may act responsibly to regulate such usages of AI, clamp down on bad actors and restrain the dangerous ambitions of their own rulers and fanatics. But if even a handful of societies fail to do so, this could be enough to endanger the whole of humankind
  • Thus, a paranoid dictator might hand unlimited power to a fallible AI, including even the power to launch nuclear strikes. If the AI then makes an error, or begins to pursue an unexpected goal, the result could be catastrophic, and not just for that country
  • Imagine a situation – in 20 years, say – when somebody in Beijing or San Francisco possesses the entire personal history of every politician, journalist, colonel and CEO in your country: every text they ever sent, every web search they ever made, every illness they suffered, every sexual encounter they enjoyed, every joke they told, every bribe they took. Would you still be living in an independent country, or would you now be living in a data colony?
  • What happens when your country finds itself utterly dependent on digital infrastructures and AI-powered systems over which it has no effective control?
  • In the economic realm, previous empires were based on material resources such as land, cotton and oil. This placed a limit on the empire’s ability to concentrate both economic wealth and political power in one place. Physics and geology don’t allow all the world’s land, cotton or oil to be moved to one country
  • It is different with the new information empires. Data can move at the speed of light, and algorithms don’t take up much space. Consequently, the world’s algorithmic power can be concentrated in a single hub. Engineers in a single country might write the code and control the keys for all the crucial algorithms that run the entire world.
  • AI and automation therefore pose a particular challenge to poorer developing countries. In an AI-driven global economy, the digital leaders claim the bulk of the gains and could use their wealth to retrain their workforce and profit even more
  • Meanwhile, the value of unskilled labourers in left-behind countries will decline, causing them to fall even further behind. The result might be lots of new jobs and immense wealth in San Francisco and Shanghai, while many other parts of the world face economic ruin.
  • AI is expected to add $15.7tn (£12.3tn) to the global economy by 2030. But if current trends continue, it is projected that China and North America – the two leading AI superpowers – will together take home 70% of that money.
  • During the cold war, the iron curtain was in many places literally made of metal: barbed wire separated one country from another. Now the world is increasingly divided by the silicon curtain. The code on your smartphone determines on which side of the silicon curtain you live, which algorithms run your life, who controls your attention and where your data flows.
  • Cyberweapons can bring down a country’s electric grid, but they can also be used to destroy a secret research facility, jam an enemy sensor, inflame a political scandal, manipulate elections or hack a single smartphone. And they can do all that stealthily. They don’t announce their presence with a mushroom cloud and a storm of fire, nor do they leave a visible trail from launchpad to target
  • The two digital spheres may therefore drift further and further apart. For centuries, new information technologies fuelled the process of globalisation and brought people all over the world into closer contact. Paradoxically, information technology today is so powerful it can potentially split humanity by enclosing different people in separate information cocoons, ending the idea of a single shared human reality
  • For decades, the world’s master metaphor was the web. The master metaphor of the coming decades might be the cocoon.
  • Other countries or blocs, such as the EU, India, Brazil and Russia, may try to create their own digital cocoons,
  • Instead of being divided between two global empires, the world might be divided among a dozen empires.
  • The more the new empires compete against one another, the greater the danger of armed conflict.
  • The cold war between the US and the USSR never escalated into a direct military confrontation, largely thanks to the doctrine of mutually assured destruction. But the danger of escalation in the age of AI is bigger, because cyber warfare is inherently different from nuclear warfare.
  • US companies are now forbidden to export such chips to China. While in the short term this hampers China in the AI race, in the long term it pushes China to develop a completely separate digital sphere that will be distinct from the American digital sphere even in its smallest buildings.
  • The temptation to start a limited cyberwar is therefore big, and so is the temptation to escalate it.
  • A second crucial difference concerns predictability. The cold war was like a hyper-rational chess game, and the certainty of destruction in the event of nuclear conflict was so great that the desire to start a war was correspondingly small
  • Cyberwarfare lacks this certainty. Nobody knows for sure where each side has planted its logic bombs, Trojan horses and malware. Nobody can be certain whether their own weapons would actually work when called upon
  • Such uncertainty undermines the doctrine of mutually assured destruction. One side might convince itself – rightly or wrongly – that it can launch a successful first strike and avoid massive retaliation
  • Even if humanity avoids the worst-case scenario of global war, the rise of new digital empires could still endanger the freedom and prosperity of billions of people. The industrial empires of the 19th and 20th centuries exploited and repressed their colonies, and it would be foolhardy to expect new digital empires to behave much better
  • Moreover, if the world is divided into rival empires, humanity is unlikely to cooperate to overcome the ecological crisis or to regulate AI and other disruptive technologies such as bioengineering.
  • The division of the world into rival digital empires dovetails with the political vision of many leaders who believe that the world is a jungle, that the relative peace of recent decades has been an illusion, and that the only real choice is whether to play the part of predator or prey.
  • Given such a choice, most leaders would prefer to go down in history as predators and add their names to the grim list of conquerors that unfortunate pupils are condemned to memorise for their history exams.
  • These leaders should be reminded, however, that there is a new alpha predator in the jungle. If humanity doesn’t find a way to cooperate and protect our shared interests, we will all be easy prey to AI.
Javier E

Farhad and Mike Discuss the Apple Case and a Go-Playing Computer Program - The New York... - 0 views

  • The program is a blend of deep learning and Monte Carlo algorithms, meaning it is both good at recognizing patterns and has the ability to exhaustively search vast libraries of possible moves.
  • the timetable for computing dominance of Go has been moved up roughly a decade from when it had been expected. That’s largely because the new ability to blend pattern recognition algorithms and vast data sets has been yielding spectacular results in the last half-decade. It’s like computer scientists have found a powerful new hammer, and they’re using it to pound lots of different nails
  • The Google program combines two types of algorithms. One is a machine learning algorithm, which does an extremely good job of recognizing patterns based on being trained on a vast set of examples. So it is likely to have seen almost any move that a human could make, and also know which responses are better ones.
  • ...1 more annotation...
  • A second type of algorithm can also see the consequences of particular moves far, far in advance of the game by playing millions and millions or perhaps even billions of combinations of moves. In contrast, human Go experts have their experience to rely on, but it is fuzzy by comparison. Think of this as an intellectual version of John Henry and the jackhammer.
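
To make the second ingredient concrete, the sketch below shows a bare-bones Monte Carlo move search: play out many random continuations of each candidate move and keep the move with the best average result. The game-state interface (`legal_moves`, `play`, `is_over`, `result`) is assumed for illustration; this is not AlphaGo's actual algorithm, which also folds the learned pattern-recognition networks into the search.

```python
import random

# Bare-bones Monte Carlo move selection on an abstract game interface.
# `legal_moves`, `play`, `is_over`, and `result` are assumed methods of an
# immutable game-state object; this is an illustration, not AlphaGo's search.

def random_rollout(state) -> float:
    """Play random moves to the end of the game.
    Assumes result() returns 1.0 if the original player wins, else 0.0."""
    while not state.is_over():
        state = state.play(random.choice(state.legal_moves()))
    return state.result()

def choose_move(state, rollouts_per_move: int = 1000):
    """Pick the move whose random continuations win most often on average."""
    best_move, best_value = None, -1.0
    for move in state.legal_moves():
        value = sum(random_rollout(state.play(move)) for _ in range(rollouts_per_move))
        value /= rollouts_per_move
        if value > best_value:
            best_move, best_value = move, value
    return best_move
```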
blythewallick

Opinion | Changes to the Census Could Make Small Towns Disappear - The New York Times - 0 views

  • According to the 2010 census, 590 people lived in Toksook Bay. State demographers expect the total to rise by about 100 people when census results are published next year.
  • The law requires individual census records to be kept confidential for 72 years. Fearing that data brokers using new statistical techniques could de-anonymize the published population totals, the bureau is testing an algorithm that will scramble the final numbers. Imaginary people will be added to some locations and real people will be removed from others. (A toy illustration of this kind of noise injection follows this list.)
  • In Toksook Bay, the population dropped from 590 people to 540 in the test run. Mr. Pitka said that a decrease in the count due to the privacy algorithm would be “disappointing and hurtful.”
  • ...4 more annotations...
  • In Toksook Bay, federal grants helped pay for a permanent path to the nearby village of Nightmute, according to Mr. Pitka. “Now people aren’t making their own trails and tearing up the environment with their A.T.V.s,” he said.
  • “When a small tribe puts its own money into getting all members to participate and it gets back information that it has a population of zero, it’s certainly not going to be willing to promote the census in the future,” said Norm DeWeaver, a consultant for Native American tribes on data issues.
  • Census officials have already exempted state population totals from the algorithm’s effects, so congressional apportionment will remain as accurate as possible. Dr. Abowd said that the census plans to increase accuracy for the populations of some small areas, such as reservations, and that the undercount of Native Americans in the test run is “unacceptable.” There is still time to modify the algorithm — the bureau has more than a year before it releases results to the states for redistricting.
  • The goal of the Census Bureau is to “count everyone once, only once and in the right place.” Trudging through the snow, enumerators in rural Alaska are helping the government reach that standard. But if the bureau uses its privacy algorithm without hearing from small communities like Toksook Bay, it risks undermining their efforts and damaging the census’s reputation for decades to come.
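The "scrambling" described above is an instance of differential privacy: calibrated random noise is added to published totals so that individual records cannot be reverse-engineered from them. The snippet below is only a minimal sketch of the idea, not the bureau's actual disclosure-avoidance algorithm; the epsilon value and the use of plain Laplace noise are assumptions chosen for demonstration.

```python
import numpy as np

def noisy_count(true_count, epsilon=0.5, rng=None):
    """Publish a count after adding Laplace noise so no single record is recoverable.

    Smaller epsilon means stronger privacy but noisier published totals.
    """
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    # Published totals must be whole, non-negative people.
    return max(0, int(round(true_count + noise)))

# A village counted at 590 might be published a few people high or low each run;
# for tiny places, that swing is large relative to the true population.
rng = np.random.default_rng(seed=7)
print([noisy_count(590, epsilon=0.5, rng=rng) for _ in range(5)])
```

A swing of a few people barely registers in a state total in the millions but is clearly visible in a village of a few hundred, which is why small communities like Toksook Bay feel the algorithm's effects most.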
Javier E

The Coming Software Apocalypse - The Atlantic - 0 views

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • ...52 more annotations...
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code.
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • Software experts spent 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around, the code that came before.
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it.
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class at the California Institute of Technology for electrical engineering,
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop (a code sketch of this state machine appears after this list).
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • This practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy (a rough illustration of this kind of exhaustive checking appears after this list).
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
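To ground the elevator example from the model-based design bullets above: a minimal Python sketch of the same rules written as a transition table. The state and event names are assumptions for illustration; a real model-based tool would let you draw this as boxes and arrows and would generate production code from the diagram.

```python
# The "boxes and lines" of the elevator diagram, written as a transition table.
TRANSITIONS = {
    ("door_open", "close_door"): "door_closed",
    ("door_closed", "open_door"): "door_open",
    ("door_closed", "start"): "moving",
    ("moving", "stop"): "door_closed",
}

def step(state, event):
    """Apply one event; anything missing from the table (e.g. opening the door while moving) is illegal."""
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"illegal transition: {event!r} while {state!r}")
    return TRANSITIONS[(state, event)]

# Reading the table makes the rules obvious: the only way to get moving is to
# close the door first, and the only way to open the door again is to stop.
state = "door_open"
for event in ["close_door", "start", "stop", "open_door"]:
    state = step(state, event)
    print(f"{event} -> {state}")
```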
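TLA+ is its own specification language, so the sketch below is not TLA+; it is a rough Python illustration of the exhaustive checking the highlights above describe: enumerate every reachable state of a small model and test an invariant in each one. The toy model deliberately contains a design flaw so the checker has something to find.

```python
from collections import deque

# Toy elevator model with a deliberate design flaw:
# the "start" rule forgets to require that the door be closed.
def next_states(state):
    door_open, moving = state
    if not moving:
        yield (not door_open, moving)   # open or close the door while stopped
    yield (door_open, True)             # FLAW: start moving regardless of the door
    if moving:
        yield (door_open, False)        # stop

def invariant(state):
    door_open, moving = state
    return not (door_open and moving)   # never move with the door open

def check(initial):
    """Breadth-first search over every reachable state, reporting any invariant violation."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return f"violation: door open while moving, reached via state {state}"
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return "every reachable state satisfies the invariant"

print(check((False, False)))   # the exhaustive search finds the unsafe state the flaw allows
```

A real TLA+ specification would state the same transitions and invariant declaratively and hand the enumeration to the TLC model checker, which can explore state spaces far larger than any hand-written test suite.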
Javier E

Facebook Executives Shut Down Efforts to Make the Site Less Divisive - WSJ - 0 views

  • A Facebook Inc. team had a blunt message for senior executives. The company’s algorithms weren’t bringing people together. They were driving people apart.
  • “Our algorithms exploit the human brain’s attraction to divisiveness,” read a slide from a 2018 presentation. “If left unchecked,” it warned, Facebook would feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.”
  • That presentation went to the heart of a question dogging Facebook almost since its founding: Does its platform aggravate polarization and tribal behavior? The answer it found, in some cases, was yes.
  • ...27 more annotations...
  • in the end, Facebook’s interest was fleeting. Mr. Zuckerberg and other senior executives largely shelved the basic research, according to previously unreported internal documents and people familiar with the effort, and weakened or blocked efforts to apply its conclusions to Facebook products.
  • At Facebook, “There was this soul-searching period after 2016 that seemed to me this period of really sincere, ‘Oh man, what if we really did mess up the world?’
  • Another concern, they and others said, was that some proposed changes would have disproportionately affected conservative users and publishers, at a time when the company faced accusations from the right of political bias.
  • Americans were drifting apart on fundamental societal issues well before the creation of social media, decades of Pew Research Center surveys have shown. But 60% of Americans think the country’s biggest tech companies are helping further divide the country, while only 11% believe they are uniting it, according to a Gallup-Knight survey in March.
  • Facebook policy chief Joel Kaplan, who played a central role in vetting proposed changes, argued at the time that efforts to make conversations on the platform more civil were “paternalistic,” said people familiar with his comments.
  • The high number of extremist groups was concerning, the presentation says. Worse was Facebook’s realization that its algorithms were responsible for their growth. The 2016 presentation states that “64% of all extremist group joins are due to our recommendation tools” and that most of the activity came from the platform’s “Groups You Should Join” and “Discover” algorithms: “Our recommendation systems grow the problem.”
  • In a sign of how far the company has moved, Mr. Zuckerberg in January said he would stand up “against those who say that new types of communities forming on social media are dividing us.” People who have heard him speak privately said he argues social media bears little responsibility for polarization.
  • Fixing the polarization problem would be difficult, requiring Facebook to rethink some of its core products. Most notably, the project forced Facebook to consider how it prioritized “user engagement”—a metric involving time spent, likes, shares and comments that for years had been the lodestar of its system.
  • Even before the teams’ 2017 creation, Facebook researchers had found signs of trouble. A 2016 presentation that names as author a Facebook researcher and sociologist, Monica Lee, found extremist content thriving in more than one-third of large German political groups on the platform.
  • Swamped with racist, conspiracy-minded and pro-Russian content, the groups were disproportionately influenced by a subset of hyperactive users, the presentation notes. Most of them were private or secret.
  • One proposal Mr. Uribe’s team championed, called “Sparing Sharing,” would have reduced the spread of content disproportionately favored by hyperactive users, according to people familiar with it. Its effects would be heaviest on content favored by users on the far right and left. Middle-of-the-road users would gain influence.
  • The Common Ground team sought to tackle the polarization problem directly, said people familiar with the team. Data scientists involved with the effort found some interest groups—often hobby-based groups with no explicit ideological alignment—brought people from different backgrounds together constructively. Other groups appeared to incubate impulses to fight, spread falsehoods or demonize a population of outsiders.
  • Mr. Pariser said that started to change after March 2018, when Facebook got in hot water after disclosing that Cambridge Analytica, the political-analytics startup, improperly obtained Facebook data about tens of millions of people. The shift has gained momentum since, he said: “The internal pendulum swung really hard to ‘the media hates us no matter what we do, so let’s just batten down the hatches.’ ”
  • Building these features and combating polarization might come at a cost of lower engagement, the Common Ground team warned in a mid-2018 document, describing some of its own proposals as “antigrowth” and requiring Facebook to “take a moral stance.”
  • Taking action would require Facebook to form partnerships with academics and nonprofits to give credibility to changes affecting public conversation, the document says. This was becoming difficult as the company slogged through controversies after the 2016 presidential election.
  • Asked to combat fake news, spam, clickbait and inauthentic users, the employees looked for ways to diminish the reach of such ills. One early discovery: Bad behavior came disproportionately from a small pool of hyperpartisan users.
  • A second finding in the U.S. saw a larger infrastructure of accounts and publishers on the far right than on the far left. Outside observers were documenting the same phenomenon. The gap meant even seemingly apolitical actions such as reducing the spread of clickbait headlines—along the lines of “You Won’t Believe What Happened Next”—affected conservative speech more than liberal content in aggregate.
  • Every significant new integrity-ranking initiative had to seek the approval of not just engineering managers but also representatives of the public policy, legal, marketing and public-relations departments.
  • “Engineers that were used to having autonomy maybe over-rotated a bit” after the 2016 election to address Facebook’s perceived flaws, she said. The meetings helped keep that in check. “At the end of the day, if we didn’t reach consensus, we’d frame up the different points of view, and then they’d be raised up to Mark.”
  • Disapproval from Mr. Kaplan’s team or Facebook’s communications department could scuttle a project, said people familiar with the effort. Negative policy-team reviews killed efforts to build a classification system for hyperpolarized content. Likewise, the Eat Your Veggies process shut down efforts to suppress clickbait about politics more than on other topics.
  • Under Facebook’s engagement-based metrics, a user who likes, shares or comments on 1,500 pieces of content has more influence on the platform and its algorithms than one who interacts with just 15 posts, allowing “super-sharers” to drown out less-active users (see the sketch after this list).
  • Accounts with hyperactive engagement were far more partisan on average than normal Facebook users, and they were more likely to behave suspiciously, sometimes appearing on the platform as much as 20 hours a day and engaging in spam-like behavior. The behavior suggested some were either people working in shifts or bots.
  • “We’re explicitly not going to build products that attempt to change people’s beliefs,” one 2018 document states. “We’re focused on products that increase empathy, understanding, and humanization of the ‘other side.’ ”
  • The debate got kicked up to Mr. Zuckerberg, who heard out both sides in a short meeting, said people briefed on it. His response: Do it, but cut the weighting by 80%. Mr. Zuckerberg also signaled he was losing interest in the effort to recalibrate the platform in the name of social good, they said, asking that they not bring him something like that again.
  • Mr. Uribe left Facebook and the tech industry within the year. He declined to discuss his work at Facebook in detail but confirmed his advocacy for the Sparing Sharing proposal. He said he left Facebook because of his frustration with company executives and their narrow focus on how integrity changes would affect American politics
  • While proposals like his did disproportionately affect conservatives in the U.S., he said, in other countries the opposite was true.
  • The tug of war was resolved in part by the growing furor over the Cambridge Analytica scandal. In a September 2018 reorganization of Facebook’s newsfeed team, managers told employees the company’s priorities were shifting “away from societal good to individual value,” said people present for the discussion. If users wanted to routinely view or post hostile content about groups they didn’t like, Facebook wouldn’t suppress it if the content didn’t specifically violate the company’s rules.
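To make the "super-sharer" arithmetic in the highlights above concrete, here is a hypothetical Python sketch. It is not Facebook's ranking code; the per-account cap is an invented stand-in for the general idea behind the Sparing Sharing proposal of limiting how much weight any single hyperactive account can contribute.

```python
def content_score(interactions, per_user_cap=None):
    """Sum each engaging user's 'influence' toward one post, optionally capped per account.

    interactions: list of (user_id, that user's total engagement count) pairs.
    """
    score = 0
    for _user, activity in interactions:
        weight = activity if per_user_cap is None else min(activity, per_user_cap)
        score += weight
    return score

post = [("hyperactive", 1500), ("casual_a", 15), ("casual_b", 15)]
print(content_score(post))                    # 1530: one account supplies ~98% of the score
print(content_score(post, per_user_cap=50))   # 80: the hyperactive account no longer drowns out the rest
```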
Javier E

A Handful of Accounts Create Most of What We See on Social Media - WSJ - 0 views

  • Social media is turning into old-fashioned network television.
  • A handful of accounts create most of the content that we see. Everyone else? They play the role of the audience, which is there to mostly amplify and applaud
  • The personal tidbits that people used to share on social media have been relegated to private group chats and their equivalent.
  • ...23 more annotations...
  • The transformation of social media into mass media is largely because the rise of TikTok has demonstrated to every social-media company on the planet that people still really like things that can re-create the experience of TV
  • Advertisers also like things that function like TV, of course—after all, people are never more suggestible than when lulled into a sort of anesthetized mindlessness.
  • In this future, people who are good at making content with high production values will thrive, as audiences and tech company algorithms gravitate toward more professional content.
  • On these formerly-social platforms, whether content is coming from creators with better equipment and more skills, or Hollywood studios testing the waters, hardly matters. In the end, it will all look remarkably similar to the consumer.
  • It will look like flipping through cable channels does, only our thumb on the remote has been replaced by our thumb on the screen of our phone, swiping from one TikTok, YouTube Short, or Instagram Reel to the next.
  • A telling indicator is the rise of a new kind of entertainment professional—the “creator.”
  • A creator is anyone who records or makes something that can go viral on the internet
  • TikTok is now more popular than Netflix among consumers younger than 35,
  • While YouTube and TikTok have always been about video, just about every other social-media platform that wants to keep people engaged is emphasizing it more than ever, so that’s what creators have to make,
  • His agency gets involved with creators and musicians at the earliest stages of their careers, helping them plan content, update their style, understand what the algorithms of different platforms demand, and connecting them with potentially lucrative brand deals
  • Even more telling: In first place is YouTube, the original online TV analog.
  • Where attention flows, money—and content—must also. In 2023 brands will spend an estimated $6 billion on marketing through influencers—a subspecies of creators
  • Globally, the total addressable market for this kind of marketing is currently $250 billion
  • Then there is a new generation of shows that are going straight to TikTok, bypassing even streaming services
  • In the wake of the success of YouTube and TikTok, Facebook, Instagram, and even LinkedIn are all pushing more and more content made by professionals into our feeds,
  • In order to quantify how TikTok has mastered the art of discerning our interests and feeding us the most compelling possible content, Faltesek, of Oregon State University, conducted a two-year project to study exactly what kind of content TikTok pushes
  • With a team of students, he created dozens of fresh TikTok user accounts that didn’t like or interact with content in any way—they just let the algorithm play one video after another.
  • At the end of this exhaustive process of gathering data on TikTok’s algorithm, the conclusion became obvious, says Faltesek. “TikTok is television. It flips channels like TV, it provides a flow like TV.”
  • By this logic, Instagram’s move to copy TikTok, which is in turn encroaching on the turf of YouTube by allowing longer videos, and the increasing dominance of professional content on all three, means they’re all turning into TV. Even Threads, the new offering from Facebook parent company Meta, is fast becoming a broadcast medium for news, as Twitter was before it.
  • In every case, the structure of social networks has become one in which a handful of accounts create most of the content that others see, and the role of everyone else on the network is, primarily, to amplify and consume that content,
  • Some, like Magana, believe we’ll eventually see an ever more complete blending of what were once “social” platforms with the traditional television networks and even film studios.  
  • aren’t convinced they’ll eat the rest of the entertainment industry. “It’s hard to say this kind of short-form video will be the only kind of TV,” she reflects. “A long time ago, the internet became the new thing, but we still have the other forms on television, and scripted streaming shows. It’s almost like this is just another avenue for that—of watching shows and movies on your phone.”
Javier E

The great artificial intelligence duopoly - The Washington Post - 0 views

  • The AI revolution will have two engines — China and the United States — pushing its progress swiftly forward. It is unlike any previous technological revolution that emerged from a singular cultural setting. Having two engines will further accelerate the pace of technology.
  • WorldPost: In your book, you talk about the “data gap” between these two engines. What do you mean by that? Lee: Data is the raw material on which AI runs. It is like the role of oil in powering an industrial economy. As an AI algorithm is fed more examples of the phenomenon you want the algorithm to understand, it gains greater and greater accuracy. The more faces you show a facial recognition algorithm, the fewer mistakes it will make in recognizing your face
  • All data is not the same, however. China and the United States have different strengths when it comes to data. The gap emerges when you consider the breadth, quality and depth of the data. Breadth means the number of users, the population whose actions are captured in data. Quality means how well-structured and well-labeled the data is. Depth means how many different data points are generated about the activities of each user.
  • ...15 more annotations...
  • Chinese and American companies are on relatively even footing when it comes to breadth. Though American Internet companies have a smaller domestic user base than China, which has over a billion users on 4G devices, the best American companies can also draw in users from around the globe, bringing their total user base to over a billion.
  • when it comes to depth of data, China has the upper hand. Chinese Internet users channel a much larger portion of their daily activities, transactions and interactions through their smartphones. They use their smartphones for managing their daily lives, from buying groceries at the market to paying their utility bills, booking train or bus tickets and to take out loans, among other things.
  • Weaving together data from mobile payments, public services, financial management and shared mobility gives Chinese companies a deep and more multi-dimensional picture of their users. That allows their AI algorithms to precisely tailor product offerings to each individual. In the current age of AI implementation, this will likely lead to a substantial acceleration and deepening of AI’s impact across China’s economy. That is where the “data gap” appears
  • The radically different business model in China, married to Chinese user habits, creates indigenous branding and monetization strategies as well as an entirely alternative infrastructure for apps and content. It is therefore very difficult, if not impossible, for any American company to try to enter China’s market or vice versa
  • companies in both countries are pursuing their own form of international expansion. The United States uses a “full platform” approach — all Google, all Facebook. Essentially Australia, North America and Europe completely accept the American methodology. That technical empire is likely to continue.
  • The Chinese have realized that the U.S. empire is too difficult to penetrate, so they are looking elsewhere. They are trying, and generally succeeding, in Southeast Asia, the Middle East and Africa. Those regions and countries have not been a focus of U.S. tech, so their products are not built with the cultures of those countries in mind. And since their demographics are closer to China’s — lower income and lots of people, including youth — the Chinese products are a better fit.
  • The jobs that AI cannot do are those of creators, or what I call “empathetic jobs” in services, which will be the largest category that can absorb those displaced from routine jobs. Many jobs will become available in this sector, from teaching to elderly care and nursing. A great effort must be made not only to increase the number of those jobs and create a career path for them but to increase their social status, which also means increasing the pay of these jobs.
  • Policy-wise, we are seeing three approaches. The Chinese have unleashed entrepreneurs with a utilitarian passion to commercialize technology. The Americans are similarly pro-entrepreneur, but the government takes a laissez-faire attitude and the entrepreneurs carry out more moonshots. And Europe is more consumer-oriented, trying to give ownership and control of data back to the individual.
  • An AI arms race would be a grave mistake. The AI boom is more akin to the spread of electricity in the early Industrial Revolution than nuclear weapons during the Cold War. Those who take the arms-race view are more interested in political posturing than the flourishing of humanity. The value of AI as an omni-use technology rests in its creative, not destructive, potential.
  • In a way, having parallel universes should diminish conflict. They can coexist while each can learn from the other. It is not a zero-sum game of winners and losers.
  • We will see a massive migration from one kind of employment to another, not unlike during the transition from agriculture to manufacturing. It will largely be the lower-wage jobs in routine work that will be eliminated, while the ultra-rich will stand to make a lot of money from AI. Social inequality will thus widen.
  • If you were to draw a map a decade from now, you would see China’s tech zone — built not on ownership but partnerships — stretching across Southeast Asia, Indonesia, Africa and to some extent South America. The U.S. zone would entail North America, Australia and Europe. Over time, the “parallel universes” already extant in the United States and China will grow to cover the whole world.
  • There are also issues related to poorer countries who have relied on either following the old China model of low-wage manufacturing jobs or of India’s call centers. AI will replace those jobs that were created by outsourcing from the West. They will be the first to go in the next 10 years. So, underdeveloped countries will also have to look to jobs for creators and in services.
  • I am opposed to the idea of universal basic income because it provides money both to those who don’t need it as well as those who do. And it doesn’t stimulate people’s desire to work. It puts them into a kind of “useless class” category with the terrible consequence of a resentful class without dignity or status.
  • To reinvigorate people’s desire to work with dignity, some subsidy can help offset the costs of critical needs that only humans can provide. That would be a much better use of the distribution of income than giving it to every person whether they need it or not. A far better idea would be for workers of the future to have an equity share in owning the robots — universal basic capital instead of universal basic income.