TOK Friends: Group items tagged Facebook

kushnerha

Facebook's Bias Is Built-In, and Bears Watching - The New York Times - 2 views

  • Facebook is the world’s most influential source of news. That’s true according to every available measure of size — the billion-plus people who devour its News Feed every day, the cargo ships of profit it keeps raking in, and the tsunami of online traffic it sends to other news sites.
  • But Facebook has also acquired a more subtle power to shape the wider news business. Across the industry, reporters, editors and media executives now look to Facebook the same way nesting baby chicks look to their engorged mother — as the source of all knowledge and nourishment, the model for how to behave in this scary new-media world. Case in point: The New York Times, among others, recently began an initiative to broadcast live video. Why do you suppose that might be? Yup, the F word. The deal includes payments from Facebook to news outlets, including The Times.
  • Yet few Americans think of Facebook as a powerful media organization, one that can alter events in the real world. When blowhards rant about the mainstream media, they do not usually mean Facebook, the mainstreamiest of all social networks. That’s because Facebook operates under a veneer of empiricism. Many people believe that what you see on Facebook represents some kind of data-mined objective truth unmolested by the subjective attitudes of fair-and-balanced human beings.
  • None of that is true. This week, Facebook rushed to deny a report in Gizmodo that said the team in charge of its “trending” news list routinely suppressed conservative points of view. Last month, Gizmodo also reported that Facebook employees asked Mark Zuckerberg, the social network’s chief executive, if the company had a responsibility to “help prevent President Trump in 2017.” Facebook denied it would ever try to manipulate elections.
  • Even if you believe that Facebook isn’t monkeying with the trending list or actively trying to swing the vote, the reports serve as timely reminders of the ever-increasing potential dangers of Facebook’s hold on the news.
  • The question isn’t whether Facebook has outsize power to shape the world — of course it does, and of course you should worry about that power. If it wanted to, Facebook could try to sway elections, favor certain policies, or just make you feel a certain way about the world, as it once proved it could do in an experiment devised to measure how emotions spread online.
  • There is no evidence Facebook is doing anything so alarming now. The danger is nevertheless real. The biggest worry is that Facebook doesn’t seem to recognize its own power, and doesn’t think of itself as a news organization with a well-developed sense of institutional ethics and responsibility, or even a potential for bias. Neither does its audience, which might believe that Facebook is immune to bias because it is run by computers.
  • That myth should die. It’s true that beyond the Trending box, most of the stories Facebook presents to you are selected by its algorithms, but those algorithms are as infused with bias as any other human editorial decision.
  • “With Facebook, humans are never not involved. Humans are in every step of the process — in terms of what we’re clicking on, who’s shifting the algorithms behind the scenes, what kind of user testing is being done, and the initial training data provided by humans.” Everything you see on Facebook is therefore the product of these people’s expertise and considered judgment, as well as their conscious and unconscious biases, quite apart from possible malfeasance or potential corruption. It’s often hard to know which, because Facebook’s editorial sensibilities are secret. So are its personalities: Most of the engineers, designers and others who decide what people see on Facebook will remain forever unknown to its audience.
  • Facebook also has an unmistakable corporate ethos and point of view. The company is staffed mostly by wealthy coastal Americans who tend to support Democrats, and it is wholly controlled by a young billionaire who has expressed policy preferences that many people find objectionable.
  • You could argue that none of this is unusual. Many large media outlets are powerful, somewhat opaque, operated for profit, and controlled by wealthy people who aren’t shy about their policy agendas — Bloomberg News, The Washington Post, Fox News and The New York Times, to name a few. But there are some reasons to be even more wary of Facebook’s bias. One is institutional. Many mainstream outlets have a rigorous set of rules and norms about what’s acceptable and what’s not in the news business.
  • Those algorithms could have profound implications for society. For instance, one persistent worry about algorithmically selected news is that it might reinforce people’s previously held points of view. If News Feed shows news that we’re each likely to Like, it could trap us into echo chambers and contribute to rising political polarization. In a study last year, Facebook’s scientists asserted the echo chamber effect was muted.
  • are Facebook’s engineering decisions subject to ethical review? Nobody knows.
  • The other reason to be wary of Facebook’s bias has to do with sheer size. Ms. Caplan notes that when studying bias in traditional media, scholars try to make comparisons across different news outlets. To determine if The Times is ignoring a certain story unfairly, look at competitors like The Washington Post and The Wall Street Journal. If those outlets are covering a story and The Times isn’t, there could be something amiss about the Times’s news judgment. Such comparative studies are nearly impossible for Facebook. Facebook is personalized, in that what you see on your News Feed is different from what I see on mine, so the only entity in a position to look for systemic bias across all of Facebook is Facebook itself. Even if you could determine the spread of stories across all of Facebook’s readers, what would you compare it to?
Javier E

Is Facebook Bad for You? It Is for About 360 Million Users, Company Surveys Suggest - WSJ - 0 views

  • Facebook researchers have found that 1 in 8 of its users report engaging in compulsive use of social media that impacts their sleep, work, parenting or relationships, according to documents reviewed by The Wall Street Journal.
  • These patterns of what the company calls problematic use mirror what is popularly known as internet addiction. They were perceived by users to be worse on Facebook than any other major social-media platform
  • A Facebook team focused on user well-being suggested a range of fixes, and the company implemented some, building in optional features to encourage breaks from social media and to dial back the notifications that can serve as a lure to bring people back to the platform.
  • Facebook shut down the team in late 2019.
  • “We have a role to play, which is why we’ve built tools and controls to help people manage when and how they use our services,” she said in the statement. “Furthermore, we have a dedicated team working across our platforms to better understand these issues and ensure people are using our apps in ways that are meaningful to them.”
  • They wrote that they don’t consider the behavior to be a clinical addiction because it doesn’t affect the brain in the same way as gambling or substance abuse. In one document, they noted that “activities like shopping, sex and Facebook use, when repetitive and excessive, may cause problems for some people.”
  • In March 2020, several months after the well-being team was dissolved, researchers who had been on the team shared a slide deck internally with some of the findings and encouraged other teams to pick up the work.
  • The researchers estimated these issues affect about 12.5% of the flagship app’s more than 2.9 billion users, or more than 360 million people. About 10% of users in the U.S., one of Facebook’s most lucrative markets, exhibit this behavior
  • In the Philippines and in India, which is the company’s largest market, the employees put the figure higher, at around 25%.
  • “Why should we care?” the researchers wrote in the slide deck. “People perceive the impact. In a comparative study with competitors, people perceived lower well-being and higher problematic use on Facebook compared to any other service.”
  • Facebook’s findings are consistent with what many external researchers have observed for years,
  • said Brian Primack, a professor of public health and medicine and dean of the College of Education and Health Professions at the University of Arkansas
  • His research group followed about a thousand people over six months in a nationally representative survey and found that the amount of social media a person used was, of all the variables they measured, the No. 1 predictor of who became depressed.
  • In late 2017, a Facebook executive and a researcher wrote a public blog post that outlined some of the issues with social-media addiction. According to the post, the company had found that while passive consumption of social media could make you feel worse, the opposite was true of more active social-media use.
  • Inside Facebook, the researchers registered concern about the direction of Facebook’s focus on certain metrics, including the number of times a person logs into the app, which the company calls a session. “One of the worries with using sessions as a north star is we want to be extra careful not to game them by creating bad experiences for vulnerable populations,” a researcher wrote, referring to elements designed to draw people back to Facebook frequently, such as push notifications.
  • Facebook then made a switch to more heavily weigh “meaningful social interactions” in its news feed as a way to combat passive consumption. One side effect of that change, as outlined in a previous Journal article in The Facebook Files, was that the company’s algorithms rewarded content that was angry or sensational, because those posts increased engagement from users.
  • Facebook said any algorithm can promote objectionable or harmful content and that the company is doing its best to mitigate the problem.
  • “Every second that I wasn’t occupied by something I had to do I was fooling around on my phone scrolling through Facebook,” Ms. Gandy said. “Facebook took over my brain.”
  • “Actively interacting with people—especially sharing messages, posts and comments with close friends and reminiscing about past interactions—is linked to improvements in well-being,” the company said.
  • The well-being team, according to people familiar with the matter, was reshuffled at least twice since late 2017 before it was disbanded, and could get only about half of the resources the team requested to do its work.
  • In 2018, Facebook’s researchers surveyed 20,000 U.S. users and paired their answers with data about their behavior on Facebook. The researchers found about 3% of these users said they experienced “serious problems” in their sleep, work or relationships related to their time on Facebook that they found difficult to change. Some of the researchers’ work was published in a 2019 paper.
  • According to that study, the researchers also said that a liberal interpretation of the results would be that 14% of respondents spent “a lot more time on Facebook than they want to,” although they didn’t label this group problematic users.
  • In 2019, the researchers had come to a new figure: What they called problematic use affects 12.5% of people on Facebook, they said. This survey used a broader definition for the issue, including users who reported negative results on key aspects of their life as well as feelings of guilt or a loss of control, according to the documents.
  • The researchers also asked Facebook users what aspects of Facebook triggered them most. The users said the app’s many notifications sucked them in. “Red dots are toxic on the home screen,” a male young adult in the U.S. told the researchers, referring to the symbol that alerts a user to new content.
  • One entrepreneur came up with his own solution to some of these issues. In 2016, software developer Louis Barclay manually unfollowed all the people, pages and groups he saw on Facebook in an attempt to be more deliberate about how he used technology. The process, which isn’t the same as unfriending, took him days, but he was happy with the result: an empty newsfeed that no longer sucked him in for hours. He could still visit the profile pages of everyone he wanted to connect with on Facebook, but their content would no longer appear in the never-ending scroll of posts.
  • Thinking other people might benefit from a similar experience on Facebook, he built a tool that would enable anyone to automate the process. He created it as a piece of add-on software called a browser extension that anyone could download. He called it Unfollow Everything and made it available on Chrome’s web store free of charge.
  • In July, Facebook sent Mr. Barclay a cease-and-desist letter, which the inventor earlier wrote about for Slate, saying his tool was a breach of its terms of service for automating user interactions. It also permanently disabled Mr. Barclay’s personal Facebook and Instagram accounts.
  • Ms. Lever, the company spokeswoman, said Mr. Barclay’s extension could pose risks if abused, and said Facebook offers its own unfollow tool that allows users to manually unfollow accounts.
Javier E

Facebook's Troubling One-Way Mirror - The New York Times - 1 views

  • If you bothered to read the fine print when you created your Facebook account, you would have noticed just how much of yourself you were giving over to Mark Zuckerberg and his $340 billion social network.
  • In exchange for an admittedly magical level of connectivity, you were giving them your life as content — the right to run ads around video from your daughter’s basketball game, pictures from your off-the-chain birthday party, or an emotional note about your return to health after serious illness. You also gave them the right to use your information to help advertisers market to you
  • at the heart of the relationship is a level of trust and a waiving of privacy that Facebook requires from its users as it pursues its mission to “make the world more open and connected.”
  • how open is Facebook willing to be in return?
  • not very.
  • that should concern anyone of any political persuasion as Facebook continues to gain influence over the national — and international — conversation
  • Increasingly, those users are spending time on Facebook not only to share personal nuggets with friends, but, for more than 40 percent of American adults, according to Pew Research Center, to stay on top of news
  • It now has an inordinate power to control a good part of the national discussion should it choose to do so, a role it shares with Silicon Valley
  • There was the initial statement that Facebook could find “no evidence” supporting the allegations; Facebook said it did not “insert stories artificially” into the Trending list, and that it had “rigorous guidelines” to ensure neutrality. But when journalists like my colleague Farhad Manjoo asked for more details about editorial guidelines, the company declined to discuss them.
  • Only after The Guardian newspaper obtained an old copy of the Trending Topics guidelines did Facebook provide more information, and an up-to-date copy of them. (They showed that humans work with algorithms to shape the lists and introduce headlines on their own under some circumstances, contradicting Facebook’s initial statement, Recode noted.) It was openness by way of a bullet to the foot.
  • a more important issue emerged during the meeting that had been lying beneath the surface, and has been for a while now: the power of the algorithms that determine what goes into individual Facebook pages.
  • “What they have is a disproportionate amount of power, and that’s the real story,” Mr. Carlson told me. “It’s just concentrated in a way you’ve never seen before in media.”
  • What most people don’t realize is that not everything they like or share necessarily gets a prominent place in their friends’ newsfeeds: The Facebook algorithm sends it to those it determines will find it most engaging.
  • For outlets like The Daily Caller, The Huffington Post, The Washington Post or The New York Times — for whom Facebook’s audience is vital to growth — any algorithmic change can affect how many people see their journalism.
  • This gives Facebook enormous influence over how newsrooms, almost universally eager for Facebook exposure, make decisions and money. Alan Rusbridger, a former editor of The Guardian, called this a “profound and alarming” development in a column in The New Statesman last week.
  • Facebook declines to talk in great detail about its algorithms, noting that it does not want to make it easy to game its system. That system, don’t forget, is devised to keep people on Facebook by giving them what they want
Javier E

I Downloaded the Information That Facebook Has on Me. Yikes. - The New York Times - 0 views

  • When I downloaded a copy of my Facebook data last week, I didn’t expect to see much. My profile is sparse, I rarely post anything on the site, and I seldom click on ads
  • With a few clicks, I learned that about 500 advertisers — many that I had never heard of, like Bad Dad, a motorcycle parts store, and Space Jesus, an electronica band — had my contact information
  • Facebook also had my entire phone book, including the number to ring my apartment buzzer. The social network had even kept a permanent record of the roughly 100 people I had deleted from my friends list over the last 14 years, including my exes.
  • During his testimony, Mr. Zuckerberg repeatedly said Facebook has a tool for downloading your data that “allows people to see and take out all the information they’ve put into Facebook.”
  • Most basic information, like my birthday, could not be deleted. More important, the pieces of data that I found objectionable, like the record of people I had unfriended, could not be removed from Facebook, either.
  • “They don’t delete anything, and that’s a general policy,” said Gabriel Weinberg, the founder of DuckDuckGo, which offers internet privacy tools. He added that data was kept around to eventually help brands serve targeted ads.
  • When you download a copy of your Facebook data, you will see a folder containing multiple subfolders and files. The most important one is the “index” file, which is essentially a raw data set of your Facebook account, where you can click through your profile, friends list, timeline and messages, among other features.
  • Upon closer inspection, it turned out that Facebook had stored my entire phone book because I had uploaded it when setting up Facebook’s messaging app, Messenger.
  • Facebook also kept a history of each time I opened Facebook over the last two years, including which device and web browser I used. On some days, it even logged my locations, like when I was at a hospital two years ago or when I visited Tokyo last year.
  • what bothered me was the data that I had explicitly deleted but that lingered in plain sight. On my friends list, Facebook had a record of “Removed Friends,” a dossier of the 112 people I had removed along with the date I clicked the “Unfriend” button. Why should Facebook remember the people I’ve cut off from my life?
  • Facebook said unfamiliar advertisers might appear on the list because they might have obtained my contact information from elsewhere, compiled it into a list of people they wanted to target and uploaded that list into Facebook
  • Brands can obtain your information in many different ways. Those include:
  • ■ Buying information from a data provider like Acxiom, which has amassed one of the world’s largest commercial databases on consumers. Brands can buy different types of customer data sets from a provider, like contact information for people who belong to a certain demographic, and take that information to Facebook to serve targeted ads
  • ■ Using tracking technologies like web cookies and invisible pixels that load in your web browser to collect information about your browsing activities. There are many different trackers on the web, and Facebook offers 10 different trackers to help brands harvest your information, according to Ghostery, which offers privacy tools that block ads and trackers. (A generic sketch of how such a pixel works appears after this list.)
  • ■ Getting your information in simpler ways, too. Someone you shared information with could share it with another entity. Your credit card loyalty program, for example
  • I also downloaded copies of my Google data with a tool called Google Takeout. The data sets were vastly larger than my Facebook data.
  • For my personal email account alone, Google’s archive of my data measured eight gigabytes, enough to hold about 2,000 songs. By comparison, my Facebook data was about 650 megabytes, the equivalent of about 160 songs.
  • In a folder labeled Ads, Google kept a history of many news articles I had read, like a Newsweek story about Apple employees walking into glass walls and a New York Times story about the editor of our Modern Love column. I didn’t click on ads for either of these stories, but the search giant logged them because the sites had loaded ads served by Google.
  • In another folder, labeled Android, Google had a record of apps I had opened on an Android phone since 2015, along with the date and time. This felt like an extraordinary level of detail.
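The “invisible pixel” mechanism mentioned above is simple enough to sketch. Below is a generic illustration in Python (using Flask), not Facebook’s actual tracker: a page embeds a 1x1 image whose URL carries identifiers, and fetching that image tells the tracker’s server who loaded which page. All names and parameters here are hypothetical.

```python
# Generic tracking-pixel sketch (illustrative only, not Facebook's code).
# The embedding page adds identifiers to the image URL, e.g.
#   <img src="https://tracker.example.com/pixel.gif?site=news&user=abc123">
from flask import Flask, Response, request

app = Flask(__name__)

# A minimal 1x1 transparent GIF: the classic tracking-pixel payload.
PIXEL_GIF = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00"
    b"\xff\xff\xff!\xf9\x04\x01\x00\x00\x00\x00"
    b",\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02D\x01\x00;"
)

@app.route("/pixel.gif")
def pixel():
    # Log who loaded which page; a real tracker would store this
    # for ad targeting, the harvesting the article describes.
    app.logger.info(
        "site=%s user=%s referer=%s",
        request.args.get("site"),
        request.args.get("user"),
        request.headers.get("Referer"),
    )
    return Response(PIXEL_GIF, mimetype="image/gif")

if __name__ == "__main__":
    app.run()
```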
Javier E

Don't Be Surprised About Facebook and Teen Girls. That's What Facebook Is. | Talking Points Memo - 0 views

  • First, set aside all morality. Let’s say we have a 16-year-old girl who’s been doing searches about average weights, whether boys care if a girl is overweight and maybe some diets. She’s also spent some time on a site called AmIFat.com. Now I set you this task. You’re on the other side of the Facebook screen and I want you to get her to click on as many things as possible and spend as much time clicking or reading as possible. Are you going to show her movie reviews? Funny cat videos? Homework tips? Of course not.
  • If you’re really trying to grab her attention you’re going to show her content about really thin girls, how their thinness has gotten them the attention of boys who turn out to really love them, and more diets
  • We both know what you’d do if you were operating within the goals and structure of the experiment.
  • This is what artificial intelligence and machine learning are. Facebook is a series of algorithms and goals aimed at maximizing engagement with Facebook. That’s why it’s worth hundreds of billions of dollars. It has a vast army of computer scientists and programmers whose job it is to make that machine more efficient.
  • the Facebook engine is designed to scope you out, take a psychographic profile of who you are and then use its data compiled from literally billions of humans to serve you content designed to maximize your engagement with Facebook.
  • Put in those terms, you barely have a chance.
  • Of course, Facebook can come in and say, this is damaging so we’re going to add some code that says don’t show this dieting/fat-shaming content to girls 18 and under. But the algorithms will find other vulnerabilities
  • So what to do? The decision of all the companies, if not all individuals, was just to lie. What else are you going to do? Say we’re closing down our multi-billion dollar company because our product shouldn’t exist?
  • why exactly are you creating a separate group of subroutines that yanks Facebook back when it does what it’s supposed to do particularly well? This, indeed, was how the internal dialog at Facebook developed, as described in the article I read. Basically, other executives said: Our business is engagement, why are we suggesting people log off for a while when they get particularly engaged?
  • what it makes me think about more is the conversations at Tobacco companies 40 or 50 years ago. At a certain point you realize: our product is bad. If used as intended it causes lung cancer, heart disease and various other ailments in a high proportion of the people who use the product. And our business model is based on the fact that the product is chemically addictive. Our product is getting people addicted to tobacco so that they no longer really have a choice over whether to buy it. And then a high proportion of them will die because we’ve succeeded.
  • The algorithms can be taught to find and address an infinite number of behaviors. But really you’re asking the researchers and programmers to create an alternative set of instructions where Instagram (or Facebook, same difference) jumps in and does exactly the opposite of its core mission, which is to drive engagement
  • You can add filters and claim you’re not marketing to kids. But really you’re only ramping back the vast social harm marginally at best. That’s the product. It is what it is.
  • there is definitely an analogy inasmuch as what you’re talking about here aren’t some glitches in the Facebook system. These aren’t some weird unintended consequences that can be ironed out of the product. It’s also in most cases not bad actors within Facebook. It’s what the product is. The product is getting attention and engagement against which advertising is sold
  • How good is the machine learning? Well, trial and error with between 3 and 4 billion humans makes you pretty damn good. That’s the product. It is inherently destructive, though of course the bad outcomes aren’t distributed evenly throughout the human population.
  • The business model is to refine this engagement engine, getting more attention and engagement and selling ads against the engagement. Facebook gets that revenue and the digital roadkill created by the product gets absorbed by the society at large
  • Facebook is like a spectacularly profitable nuclear energy company which is so profitable because it doesn’t build any of the big safety domes and dumps all the radioactive waste into the local river.
  • in the various articles describing internal conversations at Facebook, the shrewder executives and researchers seem to get this. For the company if not every individual they seem to be following the tobacco companies’ lead.
  • Ed. Note: TPM Reader AS wrote in to say I was conflating Facebook and Instagram and sometimes referring to one or the other in a confusing way. This is a fair point.
  • I spoke of them as the same intentionally. In part I’m talking about Facebook’s corporate ownership. Both sites are owned and run by the same parent corporation and as we saw during yesterday’s outage they are deeply hardwired into each other.
  • the main reason I spoke of them in one breath is that they are fundamentally the same. AS points out that the issues with Instagram are distinct because Facebook has a much older demographic and Instagram is a predominantly visual medium. (Indeed, that’s why Facebook corporate is under such pressure to use Instagram to drive teen and young adult engagement.) But they are fundamentally the same: AI and machine learning to drive engagement. Same same. Just different permutations of the same dynamic.
Javier E

How 2020 Forced Facebook and Twitter to Step In - The Atlantic - 0 views

  • mainstream platforms learned their lesson, accepting that they should intervene aggressively in more and more cases when users post content that might cause social harm.
  • During the wildfires in the American West in September, Facebook and Twitter took down false claims about their cause, even though the platforms had not done the same when large parts of Australia were engulfed in flames at the start of the year
  • Twitter, Facebook, and YouTube cracked down on QAnon, a sprawling, incoherent, and constantly evolving conspiracy theory, even though its borders are hard to delineate.
  • Content moderation comes to every content platform eventually, and platforms are starting to realize this faster than ever.
  • Nothing symbolizes this shift as neatly as Facebook’s decision in October (and Twitter’s shortly after) to start banning Holocaust denial. Almost exactly a year earlier, Zuckerberg had proudly tied himself to the First Amendment in a widely publicized “stand for free expression” at Georgetown University.
  • The evolution continues. Facebook announced earlier this month that it will join platforms such as YouTube and TikTok in removing, not merely labeling or down-ranking, false claims about COVID-19 vaccines.
  • the pandemic also showed that complete neutrality is impossible. Even though it’s not clear that removing content outright is the best way to correct misperceptions, Facebook and other platforms plainly want to signal that, at least in the current crisis, they don’t want to be seen as feeding people information that might kill them.
  • When internet platforms announce new policies, assessing whether they can and will enforce them consistently has always been difficult. In essence, the companies are grading their own work. But too often what can be gleaned from the outside suggests that they’re failing.
  • It tweaked its algorithm to boost authoritative sources in the news feed and turned off recommendations to join groups based around political or social issues. Facebook is reversing some of these steps now, but it cannot make people forget this toolbox exists in the future
  • As platforms grow more comfortable with their power, they are recognizing that they have options beyond taking posts down or leaving them up. In addition to warning labels, Facebook implemented other “break glass” measures to stem misinformation as the election approached.
  • Platforms don’t deserve praise for belatedly noticing dumpster fires that they helped create and affixing unobtrusive labels to them
  • Warning labels for misinformation might make some commentators feel a little better, but whether labels actually do much to contain the spread of false information is still unknown.
  • News reporting suggests that insiders at Facebook knew they could and should do more about misinformation, but higher-ups vetoed their ideas. YouTube barely acted to stem the flood of misinformation about election results on its platform.
  • Even before the pandemic, YouTube had begun adjusting its recommendation algorithm to reduce the spread of borderline and harmful content, and has introduced pop-up nudges to encourage users
  • And if 2020 finally made clear to platforms the need for greater content moderation, it also exposed the inevitable limits of content moderation.
  • Down-ranking, labeling, or deleting content on an internet platform does not address the social or political circumstances that caused it to be posted in the first place
  • even the most powerful platform will never be able to fully compensate for the failures of other governing institutions or be able to stop the leader of the free world from constructing an alternative reality when a whole media ecosystem is ready and willing to enable him. As Renée DiResta wrote in The Atlantic last month, “reducing the supply of misinformation doesn’t eliminate the demand.”
  • Even so, this year’s events showed that nothing is innate, inevitable, or immutable about platforms as they currently exist. The possibilities for what they might become—and what role they will play in society—are limited more by imagination than any fixed technological constraint, and the companies appear more willing to experiment than ever.
Javier E

Dark social traffic in the mobile app era -- Fusion - 1 views

  • over the last two years, the Internet landscape has been changing. People use their phones differently from their computers, and that has made Facebook more dominant.
  • people spend about as much time in apps as they do on the desktop and mobile webs combined.
  • The takeaway is this: if you’re a media company, you are almost certainly underestimating your Facebook traffic. The only question is how much Facebook traffic you’re not counting.
  • it should be even more clear now: Facebook owns web media distribution.
  • The mobile web has exploded. This is due to the falling cost and rising quality of smartphones. Now, both Apple and Google have huge numbers of great apps, and people love them.
  • a good chunk of what we might have called dark social visits are actually Facebook mobile app visitors in disguise.
  • beginning last October, Facebook made changes in its algorithm that started pushing massive amounts of traffic to media publishers. In some cases, as at The Atlantic, where I last worked, our Facebook traffic went up triple-digit percentages. Facebook simultaneously also pushed users to like pages from media companies, which drove up the fan-counts at all kinds of media sites. If you see a page with a million followers, there is a 99 percent chance that it got a push from Facebook.
  • Chief among the non-gaming apps is Facebook. They’ve done a remarkable job building a mobile app that keeps people using it.
  • when people are going through their news feeds on the Facebook app and they click on a link, it’s as if someone cut and pasted that link into the browser, meaning that the Facebook app and the target website don’t do the normal handshaking that they do on the web. In the desktop scenario, the incoming visitor has a tout that runs ahead to the website and says, “Hey, I’m coming from Facebook.com.” In the mobile app scenario that communication, known as the referrer, does not happen. (A minimal code sketch of this handshake appears after this list.)
  • Facebook—which every media publisher already knows owns them—actually has a much tighter grip on web traffic than anyone had thought. Which would make their big-footing among publishers that much more interesting. Because they certainly know how much traffic they’re sending to all your favorite websites, even if those websites themselves do not.
  • Whenever you go to a website, you take along a little profile called a “user agent.” It says what my operating system is and what kind of browser I use, along with some other information.
  • A story’s shareability is now largely determined by its shareability on Facebook, with all its attendant quirks and feedback loops. We’re all optimizing for Facebook now,
  • the social networks—by which I mostly mean Facebook—have begun to eat away at the roots of the old ways of sharing on non-commercial platforms.
  • what people like to do with their phones, en masse, is open up the Facebook app and thumb through their news feeds.
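The missing-referrer mechanics described above are easy to see from the publisher’s side. Below is a minimal sketch, assuming a Flask server; the Referer and User-Agent headers are standard HTTP, while the route and labels are illustrative. A visit with no referrer lands in the “dark social” bucket even when it actually came from Facebook’s mobile app.

```python
# Minimal sketch of how a publisher classifies incoming traffic.
# Desktop browsers send a Referer header ("I'm coming from facebook.com");
# Facebook's mobile app historically omitted it, so those visits looked
# like direct ("dark social") traffic in analytics.
from flask import Flask, request

app = Flask(__name__)

@app.route("/story")
def story():
    referrer = request.headers.get("Referer")       # absent for in-app visits
    user_agent = request.headers.get("User-Agent")  # OS + browser profile
    if referrer and "facebook.com" in referrer:
        source = "facebook-web"    # the normal handshake described above
    elif not referrer:
        source = "dark-social"     # direct, email, or an app that strips it
    else:
        source = "other-referral"
    app.logger.info("source=%s ua=%s", source, user_agent)
    return "story body"

if __name__ == "__main__":
    app.run()
```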
jlessner

Why Facebook's News Experiment Matters to Readers - NYTimes.com - 0 views

  • Facebook’s new plan to host news publications’ stories directly is not only about page views, advertising revenue or the number of seconds it takes for an article to load. It is about who owns the relationship with readers.
  • It’s why Google, a search engine, started a social network and why Facebook, a social network, started a search engine. It’s why Amazon, a shopping site, made a phone and why Apple, a phone maker, got into shopping.
  • Facebook’s experiment, called instant articles, is small to start — just a few articles from nine media companies, including The New York Times. But it signals a major shift in the relationship between publications and their readers. If you want to read the news, Facebook is saying, come to Facebook, not to NBC News or The Atlantic or The Times — and when you come, don’t leave. (For now, these articles can be viewed on an iPhone running the Facebook app.)
  • The front page of a newspaper and the cover of a magazine lost their dominance long ago.
  • But news reports, like albums before them, have not been created that way. One of the services that editors bring to readers has been to use their news judgment, considering a huge range of factors, when they decide how articles fit together and where they show up. The news judgment of The New York Times is distinct from that of The New York Post, and for generations readers appreciated that distinction.
  • “In digital, every story becomes unbundled from each other, so if you’re not thinking of each story as living on its own, it’s tying yourself back to an analog era,” Mr. Kim said.
  • Facebook executives have insisted that they intend to exert no editorial control because they leave the makeup of the news feed to the algorithm. But an algorithm is not autonomous. It is written by humans and tweaked all the time.
  • That raises some journalistic questions. The news feed algorithm works, in part, by showing people more of what they have liked in the past. Some studies have suggested that means they might not see as wide a variety of news or points of view, though others, including one by Facebook researchers, have found they still do.
  • Tech companies, Facebook included, are notoriously fickle with their algorithms. Publications became so dependent on Facebook in the first place because of a change in its algorithm that sent more traffic their way. Later, another change demoted articles from sites that Facebook deemed to run click-bait headlines. Then last month, Facebook decided to prioritize some posts from friends over those from publications.
Javier E

When Your Facebook Friend Is Racist - Megan Garber - The Atlantic - 0 views

  • Psychologists Shannon Rauch and Kimberley Schanz published their work in the journal Computers in Human Behavior. They sampled 623 Internet users (all white, 70 percent students), asking them to indicate the frequency of their Facebook usage. The group then read one of three versions of a Facebook Notes page they were told was written by a 26-year-old named Jack Brown. "Jack" was white and male. The first version of Jack's message contained what the researchers call a "superiority message": It "contrasted the behaviors of black and white individuals, only to find consistent superiority of the whites."
  • The researchers then asked participants, for each version of the post, to rate factors like "how much they agreed with the message," "how accurate they found it," "how much they liked the writer," and, significantly, how likely they were to share the post with others
  • Their findings? "Frequent users are particularly disposed to be influenced by negative racial messages." The group of more-frequent Facebook users didn't differ from others in their reaction to the egalitarian message. But those users "were more positive toward the messages with racist content -- particularly the superiority message." 
  • Facebook, for all the unprecedented connection it fosters among previously atomized people, fosters a very particular kind of connection: one that is mediated, at all times, by Facebook. And one that therefore makes very particular kinds of assumptions about how and why people connect in the first place. Facebook "connection" is defined -- semantically, at least -- by friendship. ("Facebook friends," "friending people," etc.) While it doesn't assume that every connection is an actual friend, in the narrow and maybe even old-fashioned sense of the word, Facebook's infrastructure does assume esteem among people who friend each other.
  • The study itself, in fact, is confirming the hypothesis that Rauch and Schanz started with: “We predict,” they noted, “that due to potential chronic traits and/or their adaptation to a Facebook culture of shallow processing and agreement, frequent Facebook users are highly susceptible to persuasive messages compared to less frequent users.”
  • This is, to say the least, troubling.
  • Facebook, as a result, is structured as an aggressively upbeat place.
  • social complicity. You can argue on Facebook, but it is not really encouraged. And the interactions Facebook fosters as it expands -- the status updates, the information sharing, the news consumption -- stem from that default-positive place. "Like," but not "Dislike." "Recommend," but not "Reject."
  • That's significant, because Facebook wants to expand from social connection into informational connection. The News Feed as the "personalized newspaper"; the just-introduced Home as a mobile locus of that newspaper.
  • Heavy users of Facebook tend to use the site because of a desire for social inclusion. In that context, the study suggests, those users are primed to agree with fellow users rather than to criticize the information those users share. And not just in terms of their public interactions, but in terms of their private beliefs. This potent combination -- "a need to connect and an ethos of shallow processing" -- provides a warm, moist breeding ground for the spread of opinions, publicly and not-so-publicly. Racist ones among them.
  • What will happen if information gets fully social -- according to Facebook's definition of "fully social"? What will take place when the Jack Browns of the world aren't just our friends, but our news sources?
Javier E

Why Silicon Valley can't fix itself | News | The Guardian - 1 views

  • After decades of rarely apologising for anything, Silicon Valley suddenly seems to be apologising for everything. They are sorry about the trolls. They are sorry about the bots. They are sorry about the fake news and the Russians, and the cartoons that are terrifying your kids on YouTube. But they are especially sorry about our brains.
  • Sean Parker, the former president of Facebook – who was played by Justin Timberlake in The Social Network – has publicly lamented the “unintended consequences” of the platform he helped create: “God only knows what it’s doing to our children’s brains.”
  • Parker, Rosenstein and the other insiders now talking about the harms of smartphones and social media belong to an informal yet influential current of tech critics emerging within Silicon Valley. You could call them the “tech humanists”. Amid rising public concern about the power of the industry, they argue that the primary problem with its products is that they threaten our health and our humanity.
  • It is clear that these products are designed to be maximally addictive, in order to harvest as much of our attention as they can. Tech humanists say this business model is both unhealthy and inhumane – that it damages our psychological well-being and conditions us to behave in ways that diminish our humanity
  • The main solution that they propose is better design. By redesigning technology to be less addictive and less manipulative, they believe we can make it healthier – we can realign technology with our humanity and build products that don’t “hijack” our minds.
  • its most prominent spokesman is executive director Tristan Harris, a former “design ethicist” at Google who has been hailed by the Atlantic magazine as “the closest thing Silicon Valley has to a conscience”. Harris has spent years trying to persuade the industry of the dangers of tech addiction.
  • In February, Pierre Omidyar, the billionaire founder of eBay, launched a related initiative: the Tech and Society Solutions Lab, which aims to “maximise the tech industry’s contributions to a healthy society”.
  • the tech humanists are making a bid to become tech’s loyal opposition. They are using their insider credentials to promote a particular diagnosis of where tech went wrong and of how to get it back on track
  • The real reason tech humanism matters is because some of the most powerful people in the industry are starting to speak its idiom. Snap CEO Evan Spiegel has warned about social media’s role in encouraging “mindless scrambles for friends or unworthy distractions”,
  • In short, the effort to humanise computing produced the very situation that the tech humanists now consider dehumanising: a wilderness of screens where digital devices chase every last instant of our attention.
  • After years of ignoring their critics, industry leaders are finally acknowledging that problems exist. Tech humanists deserve credit for drawing attention to one of those problems – the manipulative design decisions made by Silicon Valley.
  • these decisions are only symptoms of a larger issue: the fact that the digital infrastructures that increasingly shape our personal, social and civic lives are owned and controlled by a few billionaires
  • Because it ignores the question of power, the tech-humanist diagnosis is incomplete – and could even help the industry evade meaningful reform
  • Taken up by leaders such as Zuckerberg, tech humanism is likely to result in only superficial changes
  • they will not address the origin of that anger. If anything, they will make Silicon Valley even more powerful.
  • To the litany of problems caused by “technology that extracts attention and erodes society”, the text asserts that “humane design is the solution”. Drawing on the rhetoric of the “design thinking” philosophy that has long suffused Silicon Valley, the website explains that humane design “starts by understanding our most vulnerable human instincts so we can design compassionately”
  • this language is not foreign to Silicon Valley. On the contrary, “humanising” technology has long been its central ambition and the source of its power. It was precisely by developing a “humanised” form of computing that entrepreneurs such as Steve Jobs brought computing into millions of users’ everyday lives
  • Facebook had a new priority: maximising “time well spent” on the platform, rather than total time spent. By “time well spent”, Zuckerberg means time spent interacting with “friends” rather than businesses, brands or media sources. He said the News Feed algorithm was already prioritising these “more meaningful” activities.
  • Tech humanists say they want to align humanity and technology. But this project is based on a deep misunderstanding of the relationship between humanity and technology: namely, the fantasy that these two entities could ever exist in separation.
  • They believe we can use better design to make technology serve human nature rather than exploit and corrupt it. But this idea is drawn from the same tradition that created the world that tech humanists believe is distracting and damaging us.
  • The story of our species began when we began to make tools
  • All of which is to say: humanity and technology are not only entangled, they constantly change together.
  • This is not just a metaphor. Recent research suggests that the human hand evolved to manipulate the stone tools that our ancestors used
  • The ways our bodies and brains change in conjunction with the tools we make have long inspired anxieties that “we” are losing some essential qualities
  • Yet as we lose certain capacities, we gain new ones.
  • The nature of human nature is that it changes. It can not, therefore, serve as a stable basis for evaluating the impact of technology
  • Yet the assumption that it doesn’t change serves a useful purpose. Treating human nature as something static, pure and essential elevates the speaker into a position of power. Claiming to tell us who we are, they tell us how we should be.
  • Messaging, for instance, is considered the strongest signal. It’s reasonable to assume that you’re closer to somebody you exchange messages with than somebody whose post you once liked.
  • Harris and his fellow tech humanists also frequently invoke the language of public health. The Center for Humane Technology’s Roger McNamee has gone so far as to call public health “the root of the whole thing”, and Harris has compared using Snapchat to smoking cigarettes
  • The public-health framing casts the tech humanists in a paternalistic role. Resolving a public health crisis requires public health expertise. It also precludes the possibility of democratic debate. You don’t put the question of how to treat a disease up for a vote – you call a doctor.
  • They also remain confined to the personal level, aiming to redesign how the individual user interacts with technology rather than tackling the industry’s structural failures. Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit.
  • This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.
  • Far from challenging Silicon Valley, tech humanism offers Silicon Valley a useful way to pacify public concerns without surrendering any of its enormous wealth and power.
  • these principles could make Facebook even more profitable and powerful, by opening up new business opportunities. That seems to be exactly what Facebook has planned.
  • reported that total time spent on the platform had dropped by around 5%, or about 50m hours per day. But, Zuckerberg said, this was by design: in particular, it was in response to tweaks to the News Feed that prioritised “meaningful” interactions with “friends” rather than consuming “public content” like video and news. This would ensure that “Facebook isn’t just fun, but also good for people’s well-being”
  • Zuckerberg said he expected those changes would continue to decrease total time spent – but “the time you do spend on Facebook will be more valuable”. This may describe what users find valuable – but it also refers to what Facebook finds valuable
  • not all data is created equal. One of the most valuable sources of data to Facebook is used to inform a metric called “coefficient”. This measures the strength of a connection between two users – Zuckerberg once called it “an index for each relationship”
  • Facebook records every interaction you have with another user — from liking a friend’s post or viewing their profile, to sending them a message. These activities provide Facebook with a sense of how close you are to another person, and different activities are weighted differently. (An illustrative sketch of such a weighted score appears after this list.)
  • Holding humanity and technology separate clears the way for a small group of humans to determine the proper alignment between them
  • Why is coefficient so valuable? Because Facebook uses it to create a Facebook they think you will like: it guides algorithmic decisions about what content you see and the order in which you see it. It also helps improve ad targeting, by showing you ads for things liked by friends with whom you often interact
  • emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform.
  • “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics
  • industrialists had to find ways to make the time of the worker more valuable – to extract more money from each moment rather than adding more moments. They did this by making industrial production more efficient: developing new technologies and techniques that squeezed more value out of the worker and stretched that value further than ever before.
  • there is another way of thinking about how to live with technology – one that is both truer to the history of our species and useful for building a more democratic future. This tradition does not address “humanity” in the abstract, but as distinct human beings, whose capacities are shaped by the tools they use.
  • It sees us as hybrids of animal and machine – as “cyborgs”, to quote the biologist and philosopher of science Donna Haraway.
  • The cyborg way of thinking, by contrast, tells us that our species is essentially technological. We change as we change our tools, and our tools change us. But even though our continuous co-evolution with our machines is inevitable, the way it unfolds is not. Rather, it is determined by who owns and runs those machines. It is a question of power
  • The various scandals that have stoked the tech backlash all share a single source. Surveillance, fake news and the miserable working conditions in Amazon’s warehouses are profitable. If they were not, they would not exist. They are symptoms of a profound democratic deficit inflicted by a system that prioritises the wealth of the few over the needs and desires of the many.
  • If being technological is a feature of being human, then the power to shape how we live with technology should be a fundamental human right
  • The decisions that most affect our technological lives are far too important to be left to Mark Zuckerberg, rich investors or a handful of “humane designers”. They should be made by everyone, together.
  • Rather than trying to humanise technology, then, we should be trying to democratise it. We should be demanding that society as a whole gets to decide how we live with technology
  • What does this mean in practice? First, it requires limiting and eroding Silicon Valley’s power.
  • Antitrust laws and tax policy offer useful ways to claw back the fortunes Big Tech has built on common resources
  • democratic governments should be making rules about how those firms are allowed to behave – rules that restrict how they can collect and use our personal data, for instance, like the General Data Protection Regulation
  • This means developing publicly and co-operatively owned alternatives that empower workers, users and citizens to determine how they are run.
  • we might demand that tech firms pay for the privilege of extracting our data, so that we can collectively benefit from a resource we collectively create.
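The weighted “coefficient” score described above can be made concrete. The sketch below is illustrative only: Facebook’s real model and weights are not public, so the interaction types and numbers here are assumptions, with messaging weighted highest as the passage suggests.

```python
# Illustrative per-relationship "coefficient" score (weights are invented;
# per the passage, messaging is the strongest signal of closeness).
INTERACTION_WEIGHTS = {
    "message": 5.0,
    "comment": 2.0,
    "like": 1.0,
    "profile_view": 0.5,
}

def coefficient(interactions: dict[str, int]) -> float:
    """Sum weighted interaction counts into one closeness score."""
    return sum(INTERACTION_WEIGHTS.get(kind, 0.0) * count
               for kind, count in interactions.items())

# A feed could then rank friends by this score when ordering posts,
# and ad targeting could borrow the same signal:
friends = {
    "alice": {"message": 12, "like": 30},
    "bob": {"profile_view": 4, "like": 2},
}
ranked = sorted(friends, key=lambda name: coefficient(friends[name]),
                reverse=True)
print(ranked)  # ['alice', 'bob'] -- alice's messages dominate
```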
Javier E

Facebook Has 50 Minutes of Your Time Each Day. It Wants More. - The New York Times - 0 views

  • Fifty minutes. That’s the average amount of time, the company said, that users spend each day on its Facebook, Instagram and Messenger platforms
  • there are only 24 hours in a day, and the average person sleeps for 8.8 of them. That means about one-eighteenth of the average user’s waking time is spent on Facebook.
  • That’s more than any other leisure activity surveyed by the Bureau of Labor Statistics, with the exception of watching television programs and movies (an average per day of 2.8 hours)
  • It’s more time than people spend reading (19 minutes); participating in sports or exercise (17 minutes); or social events (four minutes). It’s almost as much time as people spend eating and drinking (1.07 hours).
  • the average time people spend on Facebook has gone up — from around 40 minutes in 2014 — even as the number of monthly active users has surged. And that’s just the average. Some users must be spending many hours a day on the site,
  • time has become the holy grail of digital media.
  • Time is the best measure of engagement, and engagement correlates with advertising effectiveness. Time also increases the supply of impressions that Facebook can sell, which brings in more revenue (a 52 percent increase last quarter to $5.4 billion).
  • And time enables Facebook to learn more about its users — their habits and interests — and thus better target its ads. The result is a powerful network effect that competitors will be hard pressed to match.
  • the only one that comes close is Alphabet’s YouTube, where users spent an average of 17 minutes a day on the site. That’s less than half the 35 minutes a day users spent on Facebook
  • ComScore reported that television viewing (both live and recorded) dropped 2 percent last year, and it said younger viewers in particular are abandoning traditional live television. People ages 18-34 spent just 47 percent of their viewing time on television screens, and 40 percent on mobile devices.
  • People spending the most time on Facebook also tend to fall into the prized 18-to-34 demographic sought by advertisers.
  • “You hear a narrative that young people are fleeing Facebook. The data show that’s just not true. Younger users have a wider appetite for social media, and they spend a lot of time on multiple networks. But they spend more time on Facebook by a wide margin.”
  • What aren’t Facebook users doing during the 50 minutes they spend there? Is it possibly interfering with work (and productivity), or, in the case of young people, studying and reading?
  • While the Bureau of Labor Statistics surveys nearly every conceivable time-occupying activity (even fencing and spelunking), it doesn’t specifically tally the time spent on social media, both because the activity may have multiple purposes — both work and leisure — and because people often do it at the same time they are ostensibly engaged in other activities
  • The closest category would be “computer use for leisure,” which has grown from eight minutes in 2006, when the bureau began collecting the data, to 14 minutes in 2014, the most recent survey. Or perhaps it would be “socializing and communicating with others,” which slipped from 40 minutes to 38 minutes.
  • But time spent on most leisure activities hasn’t changed much in those eight years of the bureau’s surveys. Time spent reading dropped from an average of 22 minutes to 19 minutes. Watching television and movies increased from 2.57 hours to 2.8. Average time spent working declined from 3.4 hours to 3.25. (Those hours seem low because much of the population, which includes both young people and the elderly, does not work.)
  • The bureau’s numbers, since they cover the entire population, may be too broad to capture important shifts among important demographic groups
  • Users spent an average of nine minutes on all of Yahoo’s sites, two minutes on LinkedIn and just one minute on Twitter
  • Among those 55 and older, 70 percent of their viewing time was on television, according to comScore. So among young people, much social media time may be coming at the expense of traditional television.
  • comScore’s data suggests that people are spending on average just six to seven minutes a day using social media on their work computers. “I don’t think Facebook is displacing other activity,” he said. “People use it during downtime during the course of their day, in the elevator, or while commuting, or waiting.
  • Facebook, naturally, is busy cooking up ways to get us to spend even more time on the platform
  • A crucial initiative is improving its News Feed, tailoring it more precisely to the needs and interests of its users, based on how long people spend reading particular posts. For people who demonstrate a preference for video, more video will appear near the top of their news feed. The more time people spend on Facebook, the more data they will generate about themselves, and the better the company will get at the task.
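
A minimal sketch of this kind of dwell-time ranking; the field names, log data and weighting below are invented for illustration, not Facebook’s actual system:

    # Toy sketch: rank candidate posts by how long this user has historically
    # dwelled on each content type. Illustrative only, not Facebook's code.
    from collections import defaultdict

    def build_dwell_profile(view_log):
        """Average seconds spent per content type, e.g. {'video': 45.0}."""
        totals, counts = defaultdict(float), defaultdict(int)
        for content_type, seconds in view_log:
            totals[content_type] += seconds
            counts[content_type] += 1
        return {t: totals[t] / counts[t] for t in totals}

    def rank_feed(posts, profile):
        """Put the types this user dwells on longest at the top of the feed."""
        return sorted(posts, key=lambda p: profile.get(p["type"], 0.0), reverse=True)

    log = [("video", 60), ("video", 30), ("text", 5), ("photo", 12)]
    posts = [{"id": 1, "type": "text"}, {"id": 2, "type": "video"}, {"id": 3, "type": "photo"}]
    print([p["id"] for p in rank_feed(posts, build_dwell_profile(log))])  # [2, 3, 1]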
Javier E

Facebook Has All the Power - Julie Posetti - The Atlantic - 0 views

  • scholars covet thy neighbor's data. They're attracted to the very large and often fascinating data sets that private companies have developed.
  • It's the companies that own and manage this data. The only standards we know they have to follow are in the terms-of-service that users accept to create an account, and the law as it stands in different countries.
  • the "sexiness" of the Facebook data that led Cornell University and the Proceedings of the National Academy of Sciences (PNAS) into an ethically dubious arrangement, where, for example, Facebook's unreadable 9,000-word terms-of-service are said to be good enough to meet the standard for "informed consent."
  • When the study drew attention and controversy, there was a moment when they both could have said: "We didn't look carefully enough at this the first time. Now we can see that it doesn't meet our standards." Instead they allowed Facebook and the PR people to take the lead in responding to the controversy.
  • What should this reality signal to Facebook users? Is it time to pull back? You have (almost) no rights. You have (almost) no control. You have no idea what they’re doing to you or with you. You don’t even know who’s getting the stuff you are posting, and you’re not allowed to know. Trade secret!
  • Are there any particular warnings here for journalists and editors in terms of their exposure on Facebook? Yeah. Facebook has all the power. You have almost none. Just keep that in mind in all your dealings with it, as an individual with family and friends, as a journalist with a story to file, and as a news organization that is "on" Facebook.
  • I am not in a commercial situation where I have to maximize my traffic, so I can opt out. Right now my choice is to keep my account, but use it cynically. 
  • does this level of experimentation indicate the prospect of a further undermining of audience-driven news priorities and traditional news values? The right way to think about it is a loss of power—for news producers and their priorities. As I said, Facebook thinks it knows better than I do what "my" 180,000 subscribers should get from me.
  • Facebook has "where else are they going to go?" logic now. And they have good reason for this confidence. (It's called network effects.) But "where else are they going to go?" is a long way from trust and loyalty. It is less a durable business model than a statement of power. 
  • I distinguished between the "thin" legitimacy that Facebook operates under and the "thick" legitimacy that the university requires to be the institution it was always supposed to be. (Both are distinct from il-legitimacy.) News organizations should learn to make this distinction more often. Normal PR exists to muddle it. Which is why you don't hand a research crisis over to university PR people.
  • some commentators have questioned the practice of A/B headline testing in the aftermath of this scandal—is there a clear connection? The connection to me is that both are forms of behaviourism. Behaviourism is a view of human beings in which, as Hannah Arendt said, they are reduced to the level of a conditioned and "behaving" animal—an animal that responds to these stimuli but not those. This is why a popular shorthand for Facebook's study was that users were being treated as lab rats.
  • Journalism is supposed to be about informing people so they can understand the world and take action when necessary. Action and behaviour are not the same thing at all. One is a conscious choice, the other a human tendency. There's a tension, then, between commercial behaviourism, which may be deeply functional in some ways for the news industry, and informing people as citizens capable of understanding their world well enough to improve it, which is the deepest purpose of journalism. A/B testing merely highlights this tension.
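
To ground the A/B testing discussed in the last two items, a minimal sketch of an A/B headline test; the traffic volume and click probabilities are simulated, and real systems add significance testing and traffic ramping:

    # Toy A/B headline test: each user is deterministically assigned a variant,
    # then click-through rates are compared. All numbers below are simulated.
    import hashlib
    import random

    def assign_variant(user_id, variants):
        """Stable assignment: the same user always sees the same headline."""
        digest = hashlib.md5(str(user_id).encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    variants = ["Headline A", "Headline B"]
    impressions = {v: 0 for v in variants}
    clicks = {v: 0 for v in variants}

    random.seed(0)
    for user_id in range(10_000):                        # simulated audience
        v = assign_variant(user_id, variants)
        impressions[v] += 1
        true_rate = 0.05 if v == "Headline A" else 0.07  # invented behaviour
        if random.random() < true_rate:
            clicks[v] += 1

    for v in variants:
        print(v, f"CTR = {clicks[v] / impressions[v]:.3f}")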
Javier E

The Tech Industry's Psychological War on Kids - Member Feature Stories - Medium - 0 views

  • she cried, “They took my f***ing phone!” Attempting to engage Kelly in conversation, I asked her what she liked about her phone and social media. “They make me happy,” she replied.
  • Even though she and her husband were loving and involved parents, Kelly’s mom couldn’t help feeling that they’d failed their daughter and must have done something terribly wrong that led to her problems.
  • My practice as a child and adolescent psychologist is filled with families like Kelly’s. These parents say their kids’ extreme overuse of phones, video games, and social media is the most difficult parenting issue they face — and, in many cases, is tearing the family apart.
  • What none of these parents understand is that their children’s and teens’ destructive obsession with technology is the predictable consequence of a virtually unrecognized merger between the tech industry and psychology.
  • Dr. B.J. Fogg is a psychologist and the father of persuasive technology, a discipline in which digital machines and apps — including smartphones, social media, and video games — are configured to alter human thoughts and behaviors. As the website of his Stanford lab boldly proclaims: “Machines designed to change humans.”
  • These parents have no idea that lurking behind their kids’ screens and phones are a multitude of psychologists, neuroscientists, and social science experts who use their knowledge of psychological vulnerabilities to devise products that capture kids’ attention for the sake of industry profit.
  • psychology — a discipline that we associate with healing — is now being used as a weapon against children.
  • This alliance pairs the consumer tech industry’s immense wealth with the most sophisticated psychological research, making it possible to develop social media, video games, and phones with drug-like power to seduce young users.
  • Likewise, social media companies use persuasive design to prey on the age-appropriate desire of preteen and teen kids, especially girls, to be socially successful. This drive is built into our DNA, since real-world relational skills have fostered human evolution.
  • Called “the millionaire maker,” Fogg has groomed former students who have used his methods to develop technologies that now consume kids’ lives. As he recently touted on his personal website, “My students often do groundbreaking projects, and they continue having impact in the real world after they leave Stanford… For example, Instagram has influenced the behavior of over 800 million people. The co-founder was a student of mine.”
  • Persuasive technology (also called persuasive design) works by deliberately creating digital environments that users feel fulfill their basic human drives — to be social or obtain goals — better than real-world alternatives.
  • Kids spend countless hours in social media and video game environments in pursuit of likes, “friends,” game points, and levels — because it’s stimulating, they believe that this makes them happy and successful, and they find it easier than doing the difficult but developmentally important activities of childhood.
  • While persuasion techniques work well on adults, they are particularly effective at influencing the still-maturing child and teen brain.
  • “Video games, better than anything else in our culture, deliver rewards to people, especially teenage boys,” says Fogg. “Teenage boys are wired to seek competency. To master our world and get better at stuff. Video games, in dishing out rewards, can convey to people that their competency is growing, you can get better at something second by second.”
  • it’s persuasive design that’s helped convince this generation of boys they are gaining “competency” by spending countless hours on game sites, when the sad reality is they are locked away in their rooms gaming, ignoring school, and not developing the real-world competencies that colleges and employers demand.
  • Persuasive technologies work because of their apparent triggering of the release of dopamine, a powerful neurotransmitter involved in reward, attention, and addiction.
  • As she says, “If you don’t get 100 ‘likes,’ you make other people share it so you get 100…. Or else you just get upset. Everyone wants to get the most ‘likes.’ It’s like a popularity contest.”
  • there are costs to Casey’s phone obsession, noting that the “girl’s phone, be it Facebook, Instagram or iMessage, is constantly pulling her away from her homework, sleep, or conversations with her family.
  • Casey says she wishes she could put her phone down. But she can’t. “I’ll wake up in the morning and go on Facebook just… because,” she says. “It’s not like I want to or I don’t. I just go on it. I’m, like, forced to. I don’t know why. I need to. Facebook takes up my whole life.”
  • B.J. Fogg may not be a household name, but Fortune Magazine calls him a “New Guru You Should Know,” and his research is driving a worldwide legion of user experience (UX) designers who utilize and expand upon his models of persuasive design.
  • “No one has perhaps been as influential on the current generation of user experience (UX) designers as Stanford researcher B.J. Fogg.”
  • the core of UX research is about using psychology to take advantage of our human vulnerabilities.
  • As Fogg is quoted in Kosner’s Forbes article, “Facebook, Twitter, Google, you name it, these companies have been using computers to influence our behavior.” However, the driving force behind behavior change isn’t computers. “The missing link isn’t the technology, it’s psychology,” says Fogg.
  • UX researchers not only follow Fogg’s design model, but also his apparent tendency to overlook the broader implications of persuasive design. They focus on the task at hand, building digital machines and apps that more effectively capture users’ attention, compel users to return again and again, and grow businesses’ bottom line.
  • the “Fogg Behavior Model” is a well-tested method to change behavior and, in its simplified form, involves three primary factors: motivation, ability, and triggers (a minimal sketch of this model appears after this list)
  • “We can now create machines that can change what people think and what people do, and the machines can do that autonomously.”
  • Regarding ability, Fogg suggests that digital products should be made so that users don’t have to “think hard.” Hence, social networks are designed for ease of use
  • Finally, Fogg says that potential users need to be triggered to use a site. This is accomplished by a myriad of digital tricks, including the sending of incessant notifications
  • moral questions about the impact of turning persuasive techniques on children and teens are not being asked. For example, should the fear of social rejection be used to compel kids to compulsively use social media? Is it okay to lure kids away from school tasks that demand a strong mental effort so they can spend their lives on social networks or playing video games that don’t make them think much at all?
  • Describing how his formula is effective at getting people to use a social network, the psychologist says in an academic paper that a key motivator is users’ desire for “social acceptance,” although he says an even more powerful motivator is the desire “to avoid being socially rejected.”
  • the startup Dopamine Labs boasts about its use of persuasive techniques to increase profits: “Connect your app to our Persuasive AI [Artificial Intelligence] and lift your engagement and revenue up to 30% by giving your users our perfect bursts of dopamine,” and “A burst of Dopamine doesn’t just feel good: it’s proven to re-wire user behavior and habits.”
  • Ramsay Brown, the founder of Dopamine Labs, says in a KQED Science article, “We have now developed a rigorous technology of the human mind, and that is both exciting and terrifying. We have the ability to twiddle some knobs in a machine learning dashboard we build, and around the world hundreds of thousands of people are going to quietly change their behavior in ways that, unbeknownst to them, feel second-nature but are really by design.”
  • Programmers call this “brain hacking,” as it compels users to spend more time on sites even though they mistakenly believe it’s strictly due to their own conscious choices.
  • Banks of computers employ AI to “learn” which of a countless number of persuasive design elements will keep users hooked
  • A persuasion profile of a particular user’s unique vulnerabilities is developed in real time and exploited to keep users on the site and make them return again and again for longer periods of time. This drives up profits for consumer internet companies whose revenue is based on how much their products are used.
  • “The leaders of Internet companies face an interesting, if also morally questionable, imperative: either they hijack neuroscience to gain market share and make large profits, or they let competitors do that and run away with the market.”
  • Social media and video game companies believe they are compelled to use persuasive technology in the arms race for attention, profits, and survival.
  • Children’s well-being is not part of the decision calculus.
  • one breakthrough occurred in 2017 when Facebook documents were leaked to The Australian. The internal report crafted by Facebook executives showed the social network boasting to advertisers that by monitoring posts, interactions, and photos in real time, the network is able to track when teens feel “insecure,” “worthless,” “stressed,” “useless” and a “failure.”
  • The report also bragged about Facebook’s ability to micro-target ads down to “moments when young people need a confidence boost.”
  • These design techniques provide tech corporations a window into kids’ hearts and minds to measure their particular vulnerabilities, which can then be used to control their behavior as consumers. This isn’t some strange future… this is now.
  • The official tech industry line is that persuasive technologies are used to make products more engaging and enjoyable. But revelations from industry insiders suggest darker motives.
  • Revealing the hard science behind persuasive technology, the game researcher John Hopson says, “This is not to say that players are the same as rats, but that there are general rules of learning which apply equally to both.”
  • After penning the paper, Hopson was hired by Microsoft, where he helped lead the development of the Xbox Live, Microsoft’s online gaming system
  • “If game designers are going to pull a person away from every other voluntary social activity or hobby or pastime, they’re going to have to engage that person at a very deep level in every possible way they can.”
  • This is the dominant effect of persuasive design today: building video games and social media products so compelling that they pull users away from the real world to spend their lives in for-profit domains.
  • Persuasive technologies are reshaping childhood, luring kids away from family and schoolwork to spend more and more of their lives sitting before screens and phones.
  • “Since we’ve figured to some extent how these pieces of the brain that handle addiction are working, people have figured out how to juice them further and how to bake that information into apps.”
  • Today, persuasive design is likely distracting adults from driving safely, productive work, and engaging with their own children — all matters which need urgent attention
  • Still, because the child and adolescent brain is more easily controlled than the adult mind, the use of persuasive design is having a much more hurtful impact on kids.
  • But to engage in a pursuit at the expense of important real-world activities is a core element of addiction.
  • younger U.S. children now spend 5 ½ hours each day with entertainment technologies, including video games, social media, and online videos.
  • Even more, the average teen now spends an incredible 8 hours each day playing with screens and phones
  • U.S. kids only spend 16 minutes each day using the computer at home for school.
  • Quietly, using screens and phones for entertainment has become the dominant activity of childhood.
  • Younger kids spend more time engaging with entertainment screens than they do in school
  • teens spend even more time playing with screens and phones than they do sleeping
  • kids are so taken with their phones and other devices that they have turned their backs to the world around them.
  • many children are missing out on real-life engagement with family and school — the two cornerstones of childhood that lead them to grow up happy and successful
  • persuasive technologies are pulling kids into often toxic digital environments
  • An all-too-frequent experience for many is being cyberbullied, which increases their risk of skipping school and of considering suicide.
  • And there is growing recognition of the negative impact of FOMO, or the fear of missing out, as kids spend their social media lives watching a parade of peers who look to be having a great time without them, feeding their feelings of loneliness and inadequacy.
  • The combined effects of the displacement of vital childhood activities and exposure to unhealthy online environments is wrecking a generation.
  • as the typical age when kids get their first smartphone has fallen to 10, it’s no surprise to see serious psychiatric problems — once the domain of teens — now enveloping young kids
  • Self-inflicted injuries, such as cutting, that are serious enough to require treatment in an emergency room, have increased dramatically in 10- to 14-year-old girls, up 19% per year since 2009.
  • While girls are pulled onto smartphones and social media, boys are more likely to be seduced into the world of video gaming, often at the expense of a focus on school
  • it’s no surprise to see this generation of boys struggling to make it to college: a full 57% of college admissions are granted to young women compared with only 43% to young men.
  • Economists working with the National Bureau of Economic Research recently demonstrated how many young U.S. men are choosing to play video games rather than join the workforce.
  • The destructive forces of psychology deployed by the tech industry are making a greater impact on kids than the positive uses of psychology by mental health providers and child advocates. Put plainly, the science of psychology is hurting kids more than helping them.
  • Hope for this wired generation has seemed dim until recently, when a surprising group has come forward to criticize the tech industry’s use of psychological manipulation: tech executives
  • Tristan Harris, formerly a design ethicist at Google, has led the way by unmasking the industry’s use of persuasive design. Interviewed in The Economist’s 1843 magazine, he says, “The job of these companies is to hook people, and they do that by hijacking our psychological vulnerabilities.”
  • Marc Benioff, CEO of the cloud computing company Salesforce, is one of the voices calling for the regulation of social media companies because of their potential to addict children. He says that just as the cigarette industry has been regulated, so too should social media companies. “I think that, for sure, technology has addictive qualities that we have to address, and that product designers are working to make those products more addictive, and we need to rein that back as much as possible,”
  • “If there’s an unfair advantage or things that are out there that are not understood by parents, then the government’s got to come forward and illuminate that.”
  • Since millions of parents, for example the parents of my patient Kelly, have absolutely no idea that devices are used to hijack their children’s minds and lives, regulation of such practices is the right thing to do.
  • Another improbable group to speak out on behalf of children is tech investors.
  • How has the consumer tech industry responded to these calls for change? By going even lower.
  • Facebook recently launched Messenger Kids, a social media app that will reach kids as young as five years old. Suggesting that harmful persuasive design is now homing in on very young children is the declaration of Messenger Kids art director Shiu Pei Luu: “We want to help foster communication [on Facebook] and make that the most exciting thing you want to be doing.”
  • the American Psychological Association (APA) — which is tasked with protecting children and families from harmful psychological practices — has been essentially silent on the matter
  • APA Ethical Standards require the profession to make efforts to correct the “misuse” of the work of psychologists, which would include the application of B.J. Fogg’s persuasive technologies to influence children against their best interests
  • Manipulating children for profit without their own or parents’ consent, and driving kids to spend more time on devices that contribute to emotional and academic problems is the embodiment of unethical psychological practice.
  • “Never before in history have basically 50 mostly men, mostly 20–35, mostly white engineer designer types within 50 miles of where we are right now [Silicon Valley], had control of what a billion people think and do.”
  • Some may argue that it’s the parents’ responsibility to protect their children from tech industry deception. However, parents have no idea of the powerful forces aligned against them, nor do they know how technologies are developed with drug-like effects to capture kids’ minds
  • Others will claim that nothing should be done because the intention behind persuasive design is to build better products, not manipulate kids
  • similar circumstances exist in the cigarette industry, as tobacco companies have as their intention profiting from the sale of their product, not hurting children. Nonetheless, because cigarettes and persuasive design predictably harm children, actions should be taken to protect kids from their effects.
  • in a 1998 academic paper, Fogg describes what should happen if things go wrong, saying, if persuasive technologies are “deemed harmful or questionable in some regard, a researcher should then either take social action or advocate that others do so.”
  • I suggest turning to President John F. Kennedy’s prescient guidance: He said that technology “has no conscience of its own. Whether it will become a force for good or ill depends on man.”
  • The APA should begin by demanding that the tech industry’s behavioral manipulation techniques be brought out of the shadows and exposed to the light of public awareness
  • Changes should be made in the APA’s Ethics Code to specifically prevent psychologists from manipulating children using digital machines, especially if such influence is known to pose risks to their well-being.
  • Moreover, the APA should follow its Ethical Standards by making strong efforts to correct the misuse of psychological persuasion by the tech industry and by user experience designers outside the field of psychology.
  • It should join with tech executives who are demanding that persuasive design in kids’ tech products be regulated
  • The APA also should make its powerful voice heard amongst the growing chorus calling out tech companies that intentionally exploit children’s vulnerabilities.
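
The sketch promised in the “Fogg Behavior Model” item above: a minimal rendering of the motivation-ability-trigger logic. The 0-to-1 scales and the threshold are invented; Fogg’s published model is qualitative, not a fixed formula:

    # Hypothetical sketch of the Fogg Behavior Model: a behavior fires only
    # when a trigger arrives while motivation x ability clears a threshold.
    # The scales and threshold are illustrative, not Fogg's actual numbers.

    def behavior_occurs(motivation, ability, trigger_present, threshold=0.25):
        """motivation, ability: floats in [0, 1]; trigger_present: bool."""
        if not trigger_present:
            return False              # no prompt, no action
        return motivation * ability >= threshold

    # A notification (trigger) reaches a bored teen (high motivation) on an
    # app built so users "don't have to think hard" (high ability):
    print(behavior_occurs(0.8, 0.9, trigger_present=True))    # True
    # Same trigger and motivation, but an effortful task (low ability):
    print(behavior_occurs(0.8, 0.2, trigger_present=True))    # False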
Javier E

Opinion | You Are the Object of Facebook's Secret Extraction Operation - The New York T... - 0 views

  • Facebook is not just any corporation. It reached trillion-dollar status in a single decade by applying the logic of what I call surveillance capitalism — an economic system built on the secret extraction and manipulation of human data
  • Facebook and other leading surveillance capitalist corporations now control information flows and communication infrastructures across the world.
  • These infrastructures are critical to the possibility of a democratic society, yet our democracies have allowed these companies to own, operate and mediate our information spaces unconstrained by public law.
  • The result has been a hidden revolution in how information is produced, circulated and acted upon
  • The world’s liberal democracies now confront a tragedy of the “un-commons.” Information spaces that people assume to be public are strictly ruled by private commercial interests for maximum profit.
  • The internet as a self-regulating market has been revealed as a failed experiment. Surveillance capitalism leaves a trail of social wreckage in its wake: the wholesale destruction of privacy, the intensification of social inequality, the poisoning of social discourse with defactualized information, the demolition of social norms and the weakening of democratic institutions.
  • These social harms are not random. They are tightly coupled effects of evolving economic operations. Each harm paves the way for the next and is dependent on what went before.
  • There is no way to escape the machine systems that surveil us.
  • All roads to economic and social participation now lead through surveillance capitalism’s profit-maximizing institutional terrain, a condition that has intensified during nearly two years of global plague.
  • Will Facebook’s digital violence finally trigger our commitment to take back the “un-commons”?
  • Will we confront the fundamental but long ignored questions of an information civilization: How should we organize and govern the information and communication spaces of the digital century in ways that sustain and advance democratic values and principles?
  • Mark Zuckerberg’s start-up did not invent surveillance capitalism. Google did that. In 2000, when only 25 percent of the world’s information was stored digitally, Google was a tiny start-up with a great search product but little revenue.
  • By 2001, in the teeth of the dot-com bust, Google’s leaders found their breakthrough in a series of inventions that would transform advertising. Their team learned how to combine massive data flows of personal information with advanced computational analyses to predict where an ad should be placed for maximum “click through.” (A toy sketch of this kind of click-through scoring appears at the end of this list.)
  • Google’s scientists learned how to extract predictive metadata from this “data exhaust” and use it to analyze likely patterns of future behavior.
  • Prediction was the first imperative that determined the second imperative: extraction.
  • Lucrative predictions required flows of human data at unimaginable scale. Users did not suspect that their data was secretly hunted and captured from every corner of the internet and, later, from apps, smartphones, devices, cameras and sensors
  • User ignorance was understood as crucial to success. Each new product was a means to more “engagement,” a euphemism used to conceal illicit extraction operations.
  • When asked “What is Google?” the co-founder Larry Page laid it out in 2001,
  • “Storage is cheap. Cameras are cheap. People will generate enormous amounts of data,” Mr. Page said. “Everything you’ve ever heard or seen or experienced will become searchable. Your whole life will be searchable.”
  • Instead of selling search to users, Google survived by turning its search engine into a sophisticated surveillance medium for seizing human data
  • Company executives worked to keep these economic operations secret, hidden from users, lawmakers, and competitors. Mr. Page opposed anything that might “stir the privacy pot and endanger our ability to gather data,” Mr. Edwards wrote.
  • As recently as 2017, Eric Schmidt, the executive chairman of Google’s parent company, Alphabet, acknowledged the role of Google’s algorithmic ranking operations in spreading corrupt information. “There is a line that we can’t really get across,” he said. “It is very difficult for us to understand truth.” A company with a mission to organize and make accessible all the world’s information using the most sophisticated machine systems cannot discern corrupt information.
  • This is the economic context in which disinformation wins
  • In March 2008, Mr. Zuckerberg hired Google’s head of global online advertising, Sheryl Sandberg, as his second in command. Ms. Sandberg had joined Google in 2001 and was a key player in the surveillance capitalism revolution. She led the build-out of Google’s advertising engine, AdWords, and its AdSense program, which together accounted for most of the company’s $16.6 billion in revenue in 2007.
  • A Google multimillionaire by the time she met Mr. Zuckerberg, Ms. Sandberg had a canny appreciation of Facebook’s immense opportunities for extraction of rich predictive data. “We have better information than anyone else. We know gender, age, location, and it’s real data as opposed to the stuff other people infer,” Ms. Sandberg explained
  • The company had “better data” and “real data” because it had a front-row seat to what Mr. Page had called “your whole life.”
  • Facebook paved the way for surveillance economics with new privacy policies in late 2009. The Electronic Frontier Foundation warned that new “Everyone” settings eliminated options to restrict the visibility of personal data, instead treating it as publicly available information.
  • Mr. Zuckerberg “just went for it” because there were no laws to stop him from joining Google in the wholesale destruction of privacy. If lawmakers wanted to sanction him as a ruthless profit-maximizer willing to use his social network against society, then 2009 to 2010 would have been a good opportunity.
  • Facebook was the first follower, but not the last. Google, Facebook, Amazon, Microsoft and Apple are private surveillance empires, each with distinct business models.
  • In 2021 these five U.S. tech giants represent five of the six largest publicly traded companies by market capitalization in the world.
  • As we move into the third decade of the 21st century, surveillance capitalism is the dominant economic institution of our time. In the absence of countervailing law, this system successfully mediates nearly every aspect of human engagement with digital information
  • Today all apps and software, no matter how benign they appear, are designed to maximize data collection.
  • Historically, great concentrations of corporate power were associated with economic harms. But when human data are the raw material and predictions of human behavior are the product, then the harms are social rather than economic
  • The difficulty is that these novel harms are typically understood as separate, even unrelated, problems, which makes them impossible to solve. Instead, each new stage of harm creates the conditions for the next stage.
  • Fifty years ago the conservative economist Milton Friedman exhorted American executives, “There is one and only one social responsibility of business — to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game.” Even this radical doctrine did not reckon with the possibility of no rules.
  • With privacy out of the way, ill-gotten human data are concentrated within private corporations, where they are claimed as corporate assets to be deployed at will.
  • The sheer size of this knowledge gap is conveyed in a leaked 2018 Facebook document, which described its artificial intelligence hub as ingesting trillions of behavioral data points every day and producing six million behavioral predictions each second.
  • Next, these human data are weaponized as targeting algorithms, engineered to maximize extraction and aimed back at their unsuspecting human sources to increase engagement
  • Targeting mechanisms change real life, sometimes with grave consequences. For example, the Facebook Files depict Mr. Zuckerberg using his algorithms to reinforce or disrupt the behavior of billions of people. Anger is rewarded or ignored. News stories become more trustworthy or unhinged. Publishers prosper or wither. Political discourse turns uglier or more moderate. People live or die.
  • Occasionally the fog clears to reveal the ultimate harm: the growing power of tech giants willing to use their control over critical information infrastructure to compete with democratically elected lawmakers for societal dominance.
  • when it comes to the triumph of surveillance capitalism’s revolution, it is the lawmakers of every liberal democracy, especially in the United States, who bear the greatest burden of responsibility. They allowed private capital to rule our information spaces during two decades of spectacular growth, with no laws to stop it.
  • All of it begins with extraction. An economic order founded on the secret massive-scale extraction of human data assumes the destruction of privacy as a nonnegotiable condition of its business operations.
  • We can’t fix all our problems at once, but we won’t fix any of them, ever, unless we reclaim the sanctity of information integrity and trustworthy communications
  • The abdication of our information and communication spaces to surveillance capitalism has become the meta-crisis of every republic, because it obstructs solutions to all other crises.
  • Neither Google, nor Facebook, nor any other corporate actor in this new economic order set out to destroy society, any more than the fossil fuel industry set out to destroy the earth.
  • as with global warming, the tech giants and their fellow travelers have been willing to treat their destructive effects on people and society as collateral damage — the unfortunate but unavoidable byproduct of perfectly legal economic operations that have produced some of the wealthiest and most powerful corporations in the history of capitalism.
  • Where does that leave us?
  • Democracy is the only countervailing institutional order with the legitimate authority and power to change our course. If the ideal of human self-governance is to survive the digital century, then all solutions point to one solution: a democratic counterrevolution.
  • instead of the usual laundry lists of remedies, lawmakers need to proceed with a clear grasp of the adversary: a single hierarchy of economic causes and their social harms.
  • We can’t rid ourselves of later-stage social harms unless we outlaw their foundational economic causes
  • This means we move beyond the current focus on downstream issues such as content moderation and policing illegal content. Such “remedies” only treat the symptoms without challenging the illegitimacy of the human data extraction that funds private control over society’s information spaces
  • Similarly, structural solutions like “breaking up” the tech giants may be valuable in some cases, but they will not affect the underlying economic operations of surveillance capitalism.
  • Instead, discussions about regulating big tech should focus on the bedrock of surveillance economics: the secret extraction of human data from realms of life once called “private.”
  • No secret extraction means no illegitimate concentrations of knowledge about people. No concentrations of knowledge means no targeting algorithms. No targeting means that corporations can no longer control and curate information flows and social speech or shape human behavior to favor their interests
  • the sober truth is that we need lawmakers ready to engage in a once-a-century exploration of far more basic questions:
  • How should we structure and govern information, connection and communication in a democratic digital century?
  • What new charters of rights, legislative frameworks and institutions are required to ensure that data collection and use serve the genuine needs of individuals and society?
  • What measures will protect citizens from unaccountable power over information, whether it is wielded by private companies or governments?
  • The corporation that is Facebook may change its name or its leaders, but it will not voluntarily change its economics.
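
The toy sketch promised in the 2001 Google item above: a logistic scoring of a single ad placement. The features, weights and bias are invented; real systems learn them from massive logs of behavior:

    # Toy click-through-rate (CTR) prediction: score the chance a given user
    # clicks a given ad. All weights here are invented for illustration.
    import math

    WEIGHTS = {"visited_topic_before": 1.2, "evening_session": 0.4, "age_18_34": 0.3}
    BIAS = -3.0   # clicks are rare, so baseline log-odds are low

    def predicted_ctr(features):
        """Logistic model: P(click) = sigmoid(bias + sum of active weights)."""
        z = BIAS + sum(WEIGHTS[name] for name, on in features.items() if on)
        return 1.0 / (1.0 + math.exp(-z))

    user = {"visited_topic_before": True, "evening_session": True, "age_18_34": False}
    print(round(predicted_ctr(user), 3))   # ~0.198; the highest score wins the slot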
Javier E

Mark Zuckerberg, Let Me Pay for Facebook - NYTimes.com - 0 views

  • 93 percent of the public believes that “being in control of who can get information about them is important,” and yet the amount of information we generate online has exploded and we seldom know where it all goes.
  • the pop-up and the ad-financed business model. The former is annoying but it’s the latter that is helping destroy the fabric of a rich, pluralistic Internet.
  • Facebook makes about 20 cents per user per month in profit. This is a pitiful sum, especially since the average user spends an impressive 20 hours on Facebook every month, according to the company. This paltry profit margin drives the business model: Internet ads are basically worthless unless they are hyper-targeted based on tracking and extensive profiling of users. This is a bad bargain, especially since two-thirds of American adults don’t want ads that target them based on that tracking and analysis of personal behavior.
  • This way of doing business rewards huge Internet platforms, since ads that are worth so little can support only companies with hundreds of millions of users.
  • Ad-based businesses distort our online interactions. People flock to Internet platforms because they help us connect with one another or the world’s bounty of information — a crucial, valuable function. Yet ad-based financing means that the companies have an interest in manipulating our attention on behalf of advertisers, instead of letting us connect as we wish.
  • Many users think their feed shows everything that their friends post. It doesn’t. Facebook runs its billion-plus users’ newsfeed by a proprietary, ever-changing algorithm that decides what we see. If Facebook didn’t have to control the feed to keep us on the site longer and to inject ads into our stream, it could instead offer us control over this algorithm.
  • we’re not starting from scratch. Micropayment systems that would allow users to spend a few cents here and there, not be so easily tracked by all the Big Brothers, and even allow personalization were developed in the early days of the Internet. Big banks and large Internet platforms didn’t show much interest in this micropayment path, which would limit their surveillance abilities. We can revive it.
  • What to do? It’s simple: Internet sites should allow their users to be the customers. I would, as I bet many others would, happily pay more than 20 cents per month for a Facebook or a Google that did not track me, upgraded its encryption and treated me as a customer whose preferences and privacy matter.
  • Many people say that no significant number of users will ever pay directly for Internet services. But that is because we are misled by the mantra that these services are free. With growing awareness of the privacy cost of ads, this may well change. Millions of people pay for Netflix despite the fact that pirated copies of many movies are available free. We eventually pay for ads, anyway, as that cost is baked into products we purchase
  • A seamless, secure micropayment system that spreads a few pennies at a time as we browse a social network, up to a preset monthly limit, would alter the whole landscape for the better (a toy sketch of such a meter appears after this list).
  • Many nonprofits and civic groups that were initially thrilled about their success in using Facebook to reach people are now despondent as their entries are less and less likely to reach people who “liked” their posts unless they pay Facebook to help boost their updates.
  • If even a quarter of Facebook’s 1.5 billion users were willing to pay $1 per month in return for not being tracked or targeted based on their data, that would yield more than $4 billion per year — surely a number worth considering.
  • Mr. Zuckerberg has reportedly spent more than $30 million to buy the homes around his in Palo Alto, Calif., and more than $100 million for a secluded parcel of land in Hawaii. He knows privacy is worth paying for. So he should let us pay a few dollars to protect ours.
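
The toy sketch promised above: a micropayment meter with a preset monthly cap, plus the article’s own revenue arithmetic. The per-item price and the cap are invented:

    # Hypothetical capped micropayment meter: fractions of a cent per item,
    # never exceeding a user-set monthly limit.

    class MicropaymentMeter:
        def __init__(self, monthly_cap_cents):
            self.cap = monthly_cap_cents
            self.spent = 0.0

        def charge(self, price_cents):
            """Charge for one item; refuse once the monthly cap is reached."""
            if self.spent + price_cents > self.cap:
                return False
            self.spent += price_cents
            return True

    meter = MicropaymentMeter(monthly_cap_cents=100)               # $1.00/month
    paid_items = sum(1 for _ in range(1000) if meter.charge(0.5))  # half a cent each
    print(paid_items, meter.spent)    # 200 items, 100.0 cents

    # The article's arithmetic: a quarter of 1.5 billion users at $1 per month
    print(0.25 * 1.5e9 * 12)          # 4.5e9 -> "more than $4 billion per year"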
Javier E

Facebook Conceded It Might Make You Feel Bad. Here's How to Interpret That. - The New Y... - 0 views

  • Facebook published a quietly groundbreaking admission on Friday. Social media, the company said in a blog post, can often make you feel good — but sometimes it can also make you feel bad.
  • That Facebook used a corporate blog post to point to independent research showing its product can sometimes lead to lower measures of physical and mental well-being should be regarded as a big deal. The post stands as a direct affront to the company’s reason for being
  • Its business model and its more airy social mission depend on the idea that social media is a new and permanently dominant force in the human condition.
  • Facebook’s leap into the ranks of the world’s most valuable companies less than 14 years after its founding can be attributed to this simple truth: Humans have shown no limit, so far, in their appetite for more Facebook.
  • For several years, people have asked whether social media, on an individual level and globally, might be altering society and psychology in negative ways.
  • Then came 2017. The concerns over social-media-born misinformation and propaganda during last year’s presidential race were one flavor of this worry. Another is what Facebook might be doing to our psychology and social relationships — whether it has addicted us to “short-term, dopamine-driven feedback loops” that “are destroying how society works,” to quote Chamath Palihapitiya, one of several former Facebook executives who have expressed some version of this concern over the last few months.
  • Though it is quite abstruse, the post, by David Ginsberg and Moira Burke, two company researchers, takes readers through a tour of the nuances of whether Facebook can be bad for you.
  • The cynical take is that Facebook is conceding the most obvious downsides of its product in order to convince us it really does care.
  • It showed that using Facebook more deeply and meaningfully, for instance by posting comments and engaging in back-and-forth chats on the service, improved people’s scores on well-being.
  • You can see the issue here: Facebook is saying that if you feel bad about Facebook, it’s because you’re holding it wrong, to quote Steve Jobs. And the cure for your malaise may be to just use Facebook more.
  • If you think Facebook is ruining the world, you should be a little glad that even Facebook agrees that we need a better Facebook — and that it is pledging to build one.
Javier E

The Fall of Facebook - The Atlantic - 0 views

  • When a research company looked at how people use their phones, it found that they spend more time on Facebook than they do browsing the entire rest of the Web.
  • Digital-media companies have grown reliant on Facebook’s powerful distribution capabilities.
  • this weakens the basic idea of a publication. The media bundles known as magazines and newspapers were built around letting advertisers reach an audience. But now virtually all of the audiences are in the same place, and media entities and advertisers alike know how to target them: they go to Facebook, select some options from a drop-down menu—18-to-24-year-old men in Maryland who are college-football fans—and their ads materialize in the feeds of that demographic (a minimal sketch of this kind of selection appears after this list)
  • when Google was the dominant distribution force on the Web, that fact was reflected in the kinds of content media companies produced—fact-filled, keyword-stuffed posts that Google’s software seemed to prefer.
  • while, once upon a time, everyone with a TV and an antenna could see “what was on,” Facebook news feeds are personalized, so no one outside the company actually knows what anyone else is seeing. This opacity would have been impossible to imagine in previous eras.
  • it is the most powerful information gatekeeper the world has ever known. It is only slightly hyperbolic to say that Facebook is like all the broadcast-television networks put together.
  • Facebook is different, though. It measures what is “engaging”—what you (and people you resemble, according to its databases) like, comment on, and share. Then it shows you more things related to that.
  • Facebook has built a self-perpetuating optimization machine. It’s as if every time you turned on the TV, your cable box ranked every episode of every show just for you. Or when you went to a bar, only the people you’d been hanging out with regularly showed up
  • It’s all enough to make you wonder whether Facebook, unlike AOL or MySpace, really might be forever
  • “In three years of research and talking to hundreds of people and everyday users, I don’t think I heard anyone say once, ‘I love Facebook.’”
  • The software’s primary attributes—its omniscience, its solicitousness—all too easily provoke claustrophobia.
  • users are spreading themselves around, maintaining Facebook as their social spine, but investing in and loving a wide variety of other social apps. None of them seems likely to supplant Facebook on its own, but taken together, they form a pretty decent network of networks, a dispersed alternative to Facebook life.
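
The sketch promised in the drop-down-menu item above; the audience records and selection criteria are invented:

    # Toy sketch of menu-driven ad targeting: pick demographics from drop-down
    # style criteria and collect the matching audience. All data is invented.

    AUDIENCE = [
        {"id": 1, "age": 21, "sex": "M", "state": "MD", "interests": {"college football"}},
        {"id": 2, "age": 40, "sex": "F", "state": "MD", "interests": {"cooking"}},
        {"id": 3, "age": 23, "sex": "M", "state": "MD", "interests": {"college football"}},
    ]

    def target(audience, age_range, sex, state, interest):
        lo, hi = age_range
        return [u["id"] for u in audience
                if lo <= u["age"] <= hi and u["sex"] == sex
                and u["state"] == state and interest in u["interests"]]

    # "18-to-24-year-old men in Maryland who are college-football fans":
    print(target(AUDIENCE, (18, 24), "M", "MD", "college football"))   # [1, 3]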
Javier E

'Our minds can be hijacked': the tech insiders who fear a smartphone dystopia | Technol... - 0 views

  • Rosenstein belongs to a small but growing band of Silicon Valley heretics who complain about the rise of the so-called “attention economy”: an internet shaped around the demands of an advertising economy.
  • “It is very common,” Rosenstein says, “for humans to develop things with the best of intentions and for them to have unintended, negative consequences.”
  • most concerned about the psychological effects on people who, research shows, touch, swipe or tap their phone 2,617 times a day.
  • There is growing concern that as well as addicting users, technology is contributing toward so-called “continuous partial attention”, severely limiting people’s ability to focus, and possibly lowering IQ. One recent study showed that the mere presence of smartphones damages cognitive capacity – even when the device is turned off. “Everyone is distracted,” Rosenstein says. “All of the time.”
  • Drawing a straight line between addiction to social media and political earthquakes like Brexit and the rise of Donald Trump, they contend that digital forces have completely upended the political system and, left unchecked, could even render democracy as we know it obsolete.
  • Without irony, Eyal finished his talk with some personal tips for resisting the lure of technology. He told his audience he uses a Chrome extension, called DF YouTube, “which scrubs out a lot of those external triggers” he writes about in his book, and recommended an app called Pocket Points that “rewards you for staying off your phone when you need to focus”.
  • “One reason I think it is particularly important for us to talk about this now is that we may be the last generation that can remember life before,” Rosenstein says. It may or may not be relevant that Rosenstein, Pearlman and most of the tech insiders questioning today’s attention economy are in their 30s, members of the last generation that can remember a world in which telephones were plugged into walls.
  • One morning in April this year, designers, programmers and tech entrepreneurs from across the world gathered at a conference centre on the shore of the San Francisco Bay. They had each paid up to $1,700 to learn how to manipulate people into habitual use of their products, on a course curated by conference organiser Nir Eyal.
  • Eyal, 39, the author of Hooked: How to Build Habit-Forming Products, has spent several years consulting for the tech industry, teaching techniques he developed by closely studying how the Silicon Valley giants operate.
  • “The technologies we use have turned into compulsions, if not full-fledged addictions,” Eyal writes. “It’s the impulse to check a message notification. It’s the pull to visit YouTube, Facebook, or Twitter for just a few minutes, only to find yourself still tapping and scrolling an hour later.” None of this is an accident, he writes. It is all “just as their designers intended”
  • He explains the subtle psychological tricks that can be used to make people develop habits, such as varying the rewards people receive to create “a craving”, or exploiting negative emotions that can act as “triggers”. “Feelings of boredom, loneliness, frustration, confusion and indecisiveness often instigate a slight pain or irritation and prompt an almost instantaneous and often mindless action to quell the negative sensation,” Eyal writes.
  • The most seductive design, Harris explains, exploits the same psychological susceptibility that makes gambling so compulsive: variable rewards. When we tap those apps with red icons, we don’t know whether we’ll discover an interesting email, an avalanche of “likes”, or nothing at all. It is the possibility of disappointment that makes it so compulsive.
  • Finally, Eyal confided the lengths to which he goes to protect his own family. He has installed in his house an outlet timer connected to a router that cuts off access to the internet at a set time every day. “The idea is to remember that we are not powerless,” he said. “We are in control.”
  • But are we? If the people who built these technologies are taking such radical steps to wean themselves free, can the rest of us reasonably be expected to exercise our free will?
  • Not according to Tristan Harris, a 33-year-old former Google employee turned vocal critic of the tech industry. “All of us are jacked into this system,” he says. “All of our minds can be hijacked. Our choices are not as free as we think they are.”
  • Harris, who has been branded “the closest thing Silicon Valley has to a conscience”, insists that billions of people have little choice over whether they use these now ubiquitous technologies, and are largely unaware of the invisible ways in which a small number of people in Silicon Valley are shaping their lives.
  • “I don’t know a more urgent problem than this,” Harris says. “It’s changing our democracy, and it’s changing our ability to have the conversations and relationships that we want with each other.” Harris went public – giving talks, writing papers, meeting lawmakers and campaigning for reform – after three years of struggling to effect change inside Google’s Mountain View headquarters.
  • He explored how LinkedIn exploits a need for social reciprocity to widen its network; how YouTube and Netflix autoplay videos and next episodes, depriving users of a choice about whether or not they want to keep watching; how Snapchat created its addictive Snapstreaks feature, encouraging near-constant communication between its mostly teenage users.
  • The techniques these companies use are not always generic: they can be algorithmically tailored to each person. An internal Facebook report leaked this year, for example, revealed that the company can identify when teens feel “insecure”, “worthless” and “need a confidence boost”. Such granular information, Harris adds, is “a perfect model of what buttons you can push in a particular person”.
  • Tech companies can exploit such vulnerabilities to keep people hooked; manipulating, for example, when people receive “likes” for their posts, ensuring they arrive when an individual is likely to feel vulnerable, or in need of approval, or maybe just bored. And the very same techniques can be sold to the highest bidder. “There’s no ethics,” he says. A company paying Facebook to use its levers of persuasion could be a car business targeting tailored advertisements to different types of users who want a new vehicle. Or it could be a Moscow-based troll farm seeking to turn voters in a swing county in Wisconsin.
  • It was Rosenstein’s colleague, Leah Pearlman, then a product manager at Facebook and on the team that created the Facebook “like”, who announced the feature in a 2009 blogpost. Now 35 and an illustrator, Pearlman confirmed via email that she, too, has grown disaffected with Facebook “likes” and other addictive feedback loops. She has installed a web browser plug-in to eradicate her Facebook news feed, and hired a social media manager to monitor her Facebook page so that she doesn’t have to.
  • Harris believes that tech companies never deliberately set out to make their products addictive. They were responding to the incentives of an advertising economy, experimenting with techniques that might capture people’s attention, even stumbling across highly effective design by accident.
  • It’s this that explains how the pull-to-refresh mechanism, whereby users swipe down, pause and wait to see what content appears, rapidly became one of the most addictive and ubiquitous design features in modern technology. “Each time you’re swiping down, it’s like a slot machine,” Harris says. “You don’t know what’s coming next. Sometimes it’s a beautiful photo. Sometimes it’s just an ad.” (A minimal simulation of this reward schedule appears at the end of this list.)
  • The reality TV star’s campaign, he said, had heralded a watershed in which “the new, digitally supercharged dynamics of the attention economy have finally crossed a threshold and become manifest in the political realm”.
  • “Smartphones are useful tools,” he says. “But they’re addictive. Pull-to-refresh is addictive. Twitter is addictive. These are not good things. When I was working on them, it was not something I was mature enough to think about. I’m not saying I’m mature now, but I’m a little bit more mature, and I regret the downsides.”
  • All of it, he says, is reward-based behaviour that activates the brain’s dopamine pathways. He sometimes finds himself clicking on the red icons beside his apps “to make them go away”, but is conflicted about the ethics of exploiting people’s psychological vulnerabilities. “It is not inherently evil to bring people back to your product,” he says. “It’s capitalism.”
  • He identifies the advent of the smartphone as a turning point, raising the stakes in an arms race for people’s attention. “Facebook and Google assert with merit that they are giving users what they want,” McNamee says. “The same can be said about tobacco companies and drug dealers.”
  • McNamee chooses his words carefully. “The people who run Facebook and Google are good people, whose well-intentioned strategies have led to horrific unintended consequences,” he says. “The problem is that there is nothing the companies can do to address the harm unless they abandon their current advertising models.”
  • But how can Google and Facebook be forced to abandon the business models that have transformed them into two of the most profitable companies on the planet?
  • McNamee believes the companies he invested in should be subjected to greater regulation, including new anti-monopoly rules. In Washington, there is growing appetite, on both sides of the political divide, to rein in Silicon Valley. But McNamee worries the behemoths he helped build may already be too big to curtail.
  • Rosenstein, the Facebook “like” co-creator, believes there may be a case for state regulation of “psychologically manipulative advertising”, saying the moral impetus is comparable to taking action against fossil fuel or tobacco companies. “If we only care about profit maximisation,” he says, “we will go rapidly into dystopia.”
  • James Williams does not believe talk of dystopia is far-fetched. An ex-Google strategist who built the metrics system for the company’s global search advertising business, he has had a front-row view of an industry he describes as the “largest, most standardised and most centralised form of attentional control in human history”.
  • It is a journey that has led him to question whether democracy can survive the new technological age.
  • He says his epiphany came a few years ago, when he noticed he was surrounded by technology that was inhibiting him from concentrating on the things he wanted to focus on. “It was that kind of individual, existential realisation: what’s going on?” he says. “Isn’t technology supposed to be doing the complete opposite of this?”
  • That discomfort was compounded during a moment at work, when he glanced at one of Google’s dashboards, a multicoloured display showing how much of people’s attention the company had commandeered for advertisers. “I realised: this is literally a million people that we’ve sort of nudged or persuaded to do this thing that they weren’t going to otherwise do,” he recalls.
  • Williams and Harris left Google around the same time, and co-founded an advocacy group, Time Well Spent, that seeks to build public momentum for a change in the way big tech companies think about design. Williams finds it hard to comprehend why this issue is not “on the front page of every newspaper every day”.
  • “Eighty-seven percent of people wake up and go to sleep with their smartphones,” he says. The entire world now has a new prism through which to understand politics, and Williams worries the consequences are profound.
  • “The attention economy incentivises the design of technologies that grab our attention,” he says. “In so doing, it privileges our impulses over our intentions.”
  • That means privileging what is sensational over what is nuanced, appealing to emotion, anger and outrage. The news media is increasingly working in service to tech companies, Williams adds, and must play by the rules of the attention economy to “sensationalise, bait and entertain in order to survive”.
  • It is not just shady or bad actors who are exploiting the internet to change public opinion. The attention economy itself is set up to promote a phenomenon like Trump, who is masterly at grabbing and retaining the attention of supporters and critics alike, often by exploiting or creating outrage.
  • All of which has left Brichter, who has put his design work on the backburner while he focuses on building a house in New Jersey, questioning his legacy. “I’ve spent many hours and weeks and months and years thinking about whether anything I’ve done has made a net positive impact on society or humanity at all,” he says. He has blocked certain websites, turned off push notifications, restricted his use of the Telegram app to message only with his wife and two close friends, and tried to wean himself off Twitter. “I still waste time on it,” he confesses, “just reading stupid news I already know about.” He charges his phone in the kitchen, plugging it in at 7pm and not touching it until the next morning.
  • He stresses these dynamics are by no means isolated to the political right: they also play a role, he believes, in the unexpected popularity of leftwing politicians such as Bernie Sanders and Jeremy Corbyn, and the frequent outbreaks of internet outrage over issues that ignite fury among progressives.
  • All of which, Williams says, is not only distorting the way we view politics but, over time, may be changing the way we think, making us less rational and more impulsive. “We’ve habituated ourselves into a perpetual cognitive style of outrage, by internalising the dynamics of the medium,” he says.
  • It was the English writer Aldous Huxley who provided the more prescient observation, warning that Orwellian-style coercion was less of a threat to democracy than the subtler power of psychological manipulation, and “man’s almost infinite appetite for distractions”.
  • If the attention economy erodes our ability to remember, to reason, to make decisions for ourselves – faculties that are essential to self-governance – what hope is there for democracy itself?
  • “The dynamics of the attention economy are structurally set up to undermine the human will,” he says. “If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.”
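The slot-machine comparison above describes what behavioural psychologists call a variable-ratio reward schedule: rewards arrive unpredictably, which makes the checking behaviour unusually persistent. As a minimal sketch only, assuming invented names, contents and probabilities rather than anything from a real product, the Python below simulates a pull-to-refresh handler that sometimes delivers new content and sometimes delivers nothing:

```python
import random

# Illustrative sketch: REWARD_PROBABILITY and the feed contents are invented
# assumptions, not values from any real product. A variable-ratio schedule
# rewards some pulls and not others, the intermittent pattern behavioural
# research links to the most persistent habit formation.

REWARD_PROBABILITY = 0.3  # hypothetical chance a refresh yields new content

def pull_to_refresh(feed_queue):
    """Simulate one pull-to-refresh: sometimes new content, sometimes nothing."""
    if feed_queue and random.random() < REWARD_PROBABILITY:
        return feed_queue.pop(0)  # the "win": a photo, a like, a new post
    return None                   # the "miss": nothing new, so pull again

feed = ["friend's photo", "news link", "ad", "birthday reminder"]
for pull in range(10):
    result = pull_to_refresh(feed)
    print(f"pull {pull + 1}: {result or 'no new content'}")
```

Ten pulls produce an unpredictable mix of hits and misses, the same intermittent pattern a slot machine relies on.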
Javier E

Facebook's Other Critics: Its Viral Stars - The New York Times - 0 views

  • In 2015, the social network began testing a revenue-sharing program with a limited group of creators, and last November, it rolled out Facebook Creator, a special app designed for professional users. Recently, the social network announced that it was testing some additional tools for creators, including a way for users to purchase monthly subscriptions to their favorite creators’ pages. But some of these features are still not widely available, and many influencers say that Facebook’s charm campaign amounts to too little, too late.
  • “It feels like they’ve pulled the biggest bait-and-switch of all time,” said Dan Shaba, a co-founder of The Pun Guys, a Facebook page with 1.2 million followers. “They’ve been promising monetization from the moment we got in.” Mr. Hamilton, he of the hot-pepper thong video, said, “I did 1.8 billion views last year. I made no money from Facebook. Not even a dollar.”
  • While waiting for Facebook to invite them into a revenue-sharing program, some influencers struck deals with viral publishers such as Diply and LittleThings, which paid the creators to share links on their pages. Those publishers paid top influencers around $500 per link, often with multiple links posted per day, according to a person who reached such deals. At that rate, even three links a day from a single publisher meant roughly $1,500 a day, or on the order of $45,000 a month.
  • In January, Facebook threw a wrench into that media economy by changing its branded content policy to prohibit creators from accepting money for such link-sharing deals, and re-engineering its News Feed algorithms. Traffic to many viral publishers plummeted overnight. LittleThings, a female-focused digital publisher that had amassed more than 12 million Facebook followers, announced that it was shutting down and blamed Facebook’s News Feed changes for cratering its organic traffic.
Javier E

The Facebook Fallacy: Privacy Is Up to You - The New York Times - 0 views

  • As Facebook’s co-founder and chief executive parried questions from members of Congress about how the social network would protect its users’ privacy, he returned time and again to what probably sounded like an unimpeachable proposition.
  • By providing its users with greater and more transparent controls over the personal data they share and how it is used for targeted advertising, he insisted, Facebook could empower them to make their own call and decide how much privacy they were willing to put on the block.
  • Providing a greater sense of control over their personal data won’t make Facebook users more cautious. It will instead encourage them to share more.
  • “Disingenuous is the adjective I had in my mind,”
  • “Fifteen years ago it would have been legitimate to propose this argument,” he added. “But it is no longer legitimate to ignore the behavioral problems and propose simply more transparency and controls.”
  • Professor Acquisti and two colleagues, Laura Brandimarte and the behavioral economist George Loewenstein, published research on this behavior nearly six years ago. “Providing users of modern information-sharing technologies with more granular privacy controls may lead them to share more sensitive information with larger, and possibly riskier, audiences,” they concluded.
  • The critical question is whether, given the tools, we can be trusted to manage the experience. The increasing body of research into how we behave online suggests not.
  • “Privacy control settings give people more rope to hang themselves,” Professor Loewenstein told me. “Facebook has figured this out, so they give you incredibly granular controls.”
  • This paradox is hardly the only psychological quirk for the social network to exploit. Consider default settings. A large body of research in behavioral economics has found that people tend to stick to the default setting of whatever is offered to them, even when they could change it easily.
  • “Facebook is acutely aware of this,” Professor Loewenstein told me. In 2005, its default settings shared most profile fields with, at most, friends of friends. Nothing was shared by default with the full internet.
  • By 2010, however, likes, name, gender, picture and a lot of other things were shared with everybody online. “Facebook changed the defaults because it appreciated their power,” Professor Loewenstein added. (A toy simulation of this default effect follows this list of annotations.)
  • The phenomenon even has a name: the “control paradox.”
  • People who profess concern about privacy will nonetheless provide the email addresses of their friends in exchange for some pizza.
  • They also found that giving consumers reassuring, though irrelevant, information about their ability to protect their privacy makes them less likely to avoid surveillance.
  • Another experiment revealed that people are more willing to come clean about their engagement in illicit or questionable behavior when they believe others have done so, too.
  • Those in the industry often argue that people don’t really care about their privacy — that they may seem concerned when they answer surveys, but still routinely accept cookies and consent to have their data harvested in exchange for cool online experiences.
  • Professor Acquisti thinks this is a fallacy. The cognitive hurdles to manage our privacy online are simply too steep.
  • While we are good at handling our privacy in the offline world, lowering our voices or closing the curtains as the occasion may warrant, there are no cues online to alert us to a potential privacy invasion.
  • Even if we were to know precisely what information companies like Facebook have about us and how it will be used, which we don’t, it would be hard for us to assess the potential harms.
  • Members of Congress have mostly let market forces prevail online, unfettered by government meddling. Privacy protection in the internet economy has relied on the belief that consumers will make rational choices.
  • Europe’s stringent new privacy protection law, which Facebook has promised to apply in the United States, may do better than the American system of disclosure and consent.
  • But the European system also relies mostly on faith that consumers will make rational choices.
  • The more that psychologists and behavioral economists study psychological biases and quirks, the clearer it seems that rational choices alone won’t work. “I don’t think any kind of disclosure or opt in or opt out is going to protect us from our worst instincts.”
  • What to do? Professor Acquisti suggests flipping the burden of proof. The case for privacy regulation rests on consumers’ proving that data collection is harmful. Why not ask the big online platforms like Facebook to prove they can’t work without it? If reducing data collection imposes a cost, we could figure out who bears it — whether consumers, advertisers or Facebook’s bottom line.
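The power of defaults that Professor Loewenstein describes is easy to see in a toy simulation. The sketch below is a hypothetical illustration, not drawn from the cited research: the user count, the 10% adjustment rate and the 50/50 split among adjusters are invented assumptions, chosen only to show how the platform’s choice of default, rather than deliberate user choice, drives the aggregate outcome:

```python
import random

# Invented numbers for illustration: behavioural studies consistently find
# that only a small minority of users ever change a default, so the
# platform's choice of default largely determines the aggregate outcome.

USERS = 100_000
FRACTION_WHO_ADJUST = 0.1  # hypothetical: 10% of users ever open settings

def public_profiles(default_is_public):
    """Count profiles that end up public under a given default setting."""
    public = 0
    for _ in range(USERS):
        if random.random() < FRACTION_WHO_ADJUST:
            # The minority who adjust choose for themselves; assume a 50/50 split.
            public += random.random() < 0.5
        else:
            # Everyone else inherits whatever the platform chose for them.
            public += default_is_public
    return public

# Flipping the default flips the outcome for the ~90% who never choose.
print(f"default private: {public_profiles(False):,} public profiles")
print(f"default public:  {public_profiles(True):,} public profiles")
```

Under these assumptions, roughly 5,000 profiles end up public when private is the default, versus roughly 95,000 when public is, even though user preferences never changed.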