Home/ TOK Friends/ Group items tagged post

Javier E

Facebook Conceded It Might Make You Feel Bad. Here's How to Interpret That. - The New Y...

  • Facebook published a quietly groundbreaking admission on Friday. Social media, the company said in a blog post, can often make you feel good — but sometimes it can also make you feel bad.
  • That Facebook used a corporate blog post to point to independent research showing its product can sometimes lead to lower measures of physical and mental well-being should be regarded as a big deal. The post stands as a direct affront to the company’s reason for being
  • Its business model and its more airy social mission depend on the idea that social media is a new and permanently dominant force in the human condition.
  • Then came 2017. The concerns over social-media-born misinformation and propaganda during last year’s presidential race were one flavor of this worry. Another is what Facebook might be doing to our psychology and social relationships — whether it has addicted us to “short-term, dopamine-driven feedback loops” that “are destroying how society works,” to quote Chamath Palihapitiya, one of several former Facebook executives who have expressed some version of this concern over the last few months.
  • For several years, people have asked whether social media, on an individual level and globally, might be altering society and psychology in negative ways.
  • Facebook’s leap into the ranks of the world’s most valuable companies less than 14 years after its founding can be attributed to this simple truth: Humans have shown no limit, so far, in their appetite for more Facebook.
  • Though it is quite abstruse, the post, by David Ginsberg and Moira Burke, two company researchers, takes readers through a tour of the nuances on whether Facebook can be bad for you.
  • The cynical take is that Facebook is conceding the most obvious downsides of its product in order to convince us it really does care.
  • It showed that using Facebook more deeply and meaningfully, for instance by posting comments and engaging in back-and-forth chats on the service, improved people’s scores on well-being.
  • You can see the issue here: Facebook is saying that if you feel bad about Facebook, it’s because you’re holding it wrong, to quote Steve Jobs. And the cure for your malaise may be to just use Facebook more.
  • If you think Facebook is ruining the world, you should be a little glad that even Facebook agrees that we need a better Facebook — and that it is pledging to build one.
tongoscar

U.S. and Iran Are Trolling Each Other - in China - The New York Times

  • As tensions between the United States and Iran persist after the American killing of a top Iranian general this month, the two countries are waging a heated battle in an unlikely forum: the Chinese internet. The embassies of the United States and Iran in Beijing have published a series of barbed posts in recent days on Weibo, a popular Chinese social media site, attacking each other in Chinese and in plain view of the country’s hundreds of millions of internet users.
  • The battle has captivated people in China, where diplomatic rows rarely break into public view and the government often censors posts about politics.
  • Iran, for its part, has for years sought to hinder the flow of information from the West more broadly, blocking Facebook, Twitter and other social networks.
  • The Chinese authorities operate one of the world’s most aggressive censorship systems, routinely scrubbing reports, comments and posts on the internet that are deemed politically sensitive or subversive. Posts by foreign diplomats are known to have been censored, especially on topics such as North Korea or human rights.
  • China and Iran have sought closer relations in recent years, especially as American sanctions have increased economic pressure on Tehran.
  • In its Weibo posts, the Iranian Embassy made a point of appealing to Chinese internet users, thanking them for their support and even suggesting that they visit Iran for the upcoming Lunar New Year holiday (“safety is not an issue,” the embassy wrote).
  • “China has provided Iran with very important economic and political lifelines in recent years when U.S. sanctions have choked that country,”
sanderk

The 'Availability Bias' Is Driving Investor Decisions - Business Insider

  • What availability bias tells us is that investors’ lingering perceptions of a dire market environment may be causing them to view investment opportunities through an overly negative lens, making it less appealing to consider taking on investment risk, no matter how small the returns on perceived “safe” investments.
  • “Imagine if I was a financial advisor and you came to talk to me about your risk attitude, and I started the discussion by asking you to describe how you felt in the last three years on the days when your portfolio lost 5% of its value. Then I asked you what your risk attitude was. Most people would say they don’t want to ever experience days like that again. On the other hand, what if instead I talked about people I knew who were retired and living in the Bahamas, fishing and golfing. Now your risk attitude would probably be different.”
  • As humans, our thinking is strongly influenced by what is personally most relevant, recent or dramatic.
  • lingering perceptions based on dramatic, painful events are impacting decision-making even when those events are over.
  • Ariely said a home country investment bias might be generated by two perceptual factors. “The first is an overly optimistic belief about one’s own economy; an expectation of performance in their country that is higher than what would be statistically realistic. The second reason is most likely due to procedural difficulties in investing outside the country – such as less knowledge about how to access these markets.”
  • investors may be making decisions driven more by personal bias or irrational belief than by reality and, in doing so, they may be hindering their own investment success.
  • The problem? These decisions may hinder their ability to reach their desired retirement or savings goals. The choice is between changing the goal—or changing the means of reaching it.
Javier E

Is Facebook Bad for You? It Is for About 360 Million Users, Company Surveys Suggest - WSJ

  • Facebook researchers have found that 1 in 8 of its users report engaging in compulsive use of social media that impacts their sleep, work, parenting or relationships, according to documents reviewed by The Wall Street Journal.
  • These patterns of what the company calls problematic use mirror what is popularly known as internet addiction. They were perceived by users to be worse on Facebook than any other major social-media platform
  • A Facebook team focused on user well-being suggested a range of fixes, and the company implemented some, building in optional features to encourage breaks from social media and to dial back the notifications that can serve as a lure to bring people back to the platform.
  • Facebook shut down the team in late 2019.
  • “We have a role to play, which is why we’ve built tools and controls to help people manage when and how they use our services,” she said in the statement. “Furthermore, we have a dedicated team working across our platforms to better understand these issues and ensure people are using our apps in ways that are meaningful to them.”
  • They wrote that they don’t consider the behavior to be a clinical addiction because it doesn’t affect the brain in the same way as gambling or substance abuse. In one document, they noted that “activities like shopping, sex and Facebook use, when repetitive and excessive, may cause problems for some people.”
  • In March 2020, several months after the well-being team was dissolved, researchers who had been on the team shared a slide deck internally with some of the findings and encouraged other teams to pick up the work.
  • The researchers estimated these issues affect about 12.5% of the flagship app’s more than 2.9 billion users, or more than 360 million people. About 10% of users in the U.S., one of Facebook’s most lucrative markets, exhibit this behavior
  • In the Philippines and in India, which is the company’s largest market, the employees put the figure higher, at around 25%.
  • “Why should we care?” the researchers wrote in the slide deck. “People perceive the impact. In a comparative study with competitors, people perceived lower well-being and higher problematic use on Facebook compared to any other service.”
  • Facebook’s findings are consistent with what many external researchers have observed for years,
  • said Brian Primack, a professor of public health and medicine and dean of the College of Education and Health Professions at the University of Arkansas
  • His research group followed about a thousand people over six months in a nationally representative survey and found that the amount of social media that a person used was the No. 1 predictor of the variables they measured for who became depressed.
  • In late 2017, a Facebook executive and a researcher wrote a public blog post that outlined some of the issues with social-media addiction. According to the post, the company had found that while passive consumption of social media could make you feel worse, the opposite was true of more active social-media use.
  • Inside Facebook, the researchers registered concern about the direction of Facebook’s focus on certain metrics, including the number of times a person logs into the app, which the company calls a session. “One of the worries with using sessions as a north star is we want to be extra careful not to game them by creating bad experiences for vulnerable populations,” a researcher wrote, referring to elements designed to draw people back to Facebook frequently, such as push notifications.
  • Facebook then made a switch to more heavily weigh “meaningful social interactions” in its news feed as a way to combat passive consumption. One side effect of that change, as outlined in a previous Journal article in The Facebook Files, was that the company’s algorithms rewarded content that was angry or sensational, because those posts increased engagement from users.
  • Facebook said any algorithm can promote objectionable or harmful content and that the company is doing its best to mitigate the problem.
  • “Every second that I wasn’t occupied by something I had to do I was fooling around on my phone scrolling through Facebook,” Ms. Gandy said. “Facebook took over my brain.”
  • “Actively interacting with people—especially sharing messages, posts and comments with close friends and reminiscing about past interactions—is linked to improvements in well-being,” the company said.
  • The well-being team, according to people familiar with the matter, was reshuffled at least twice since late 2017 before it was disbanded, and could get only about half of the resources the team requested to do its work.
  • In 2018, Facebook’s researchers surveyed 20,000 U.S. users and paired their answers with data about their behavior on Facebook. The researchers found about 3% of these users said they experienced “serious problems” in their sleep, work or relationships related to their time on Facebook that they found difficult to change. Some of the researchers’ work was published in a 2019 paper.
  • According to that study, the researchers also said that a liberal interpretation of the results would be that 14% of respondents spent “a lot more time on Facebook than they want to,” although they didn’t label this group problematic users.
  • In 2019, the researchers had come to a new figure: What they called problematic use affects 12.5% of people on Facebook, they said. This survey used a broader definition for the issue, including users who reported negative results on key aspects of their life as well as feelings of guilt or a loss of control, according to the documents.
  • The researchers also asked Facebook users what aspects of Facebook triggered them most. The users said the app’s many notifications sucked them in. “Red dots are toxic on the home screen,” a male young adult in the U.S. told the researchers, referring to the symbol that alerts a user to new content.
  • One entrepreneur came up with his own solution to some of these issues. In 2016, software developer Louis Barclay manually unfollowed all the people, pages and groups he saw on Facebook in an attempt to be more deliberate about how he used technology. The process, which isn’t the same as unfriending, took him days, but he was happy with the result: an empty newsfeed that no longer sucked him in for hours. He could still visit the profile pages of everyone he wanted to connect with on Facebook, but their content would no longer appear in the never-ending scroll of posts.
  • Thinking other people might benefit from a similar experience on Facebook, he built a tool that would enable anyone to automate the process. He created it as a piece of add-on software called a browser extension that anyone could download. He called it Unfollow Everything and made it available on Chrome’s web store free of charge.
  • In July, Facebook sent Mr. Barclay a cease-and-desist letter, which the inventor earlier wrote about for Slate, saying his tool was a breach of its terms of service for automating user interactions. It also permanently disabled Mr. Barclay’s personal Facebook and Instagram accounts.
  • Ms. Lever, the company spokeswoman, said Mr. Barclay’s extension could pose risks if abused, and said Facebook offers its own unfollow tool that allows users to manually unfollow accounts.
Javier E

Twitter and TikTok Lead in Amplifying Misinformation, Report Finds - The New York Times

  • It is well known that social media amplifies misinformation and other harmful content. The Integrity Institute, an advocacy group, is now trying to measure exactly how much
  • The institute’s initial report, posted online, found that a “well-crafted lie” will get more engagements than typical, truthful content and that some features of social media sites and their algorithms contribute to the spread of misinformation.
  • Twitter, the analysis showed, has what the institute called the greatest misinformation amplification factor, in large part because of its feature allowing people to share, or “retweet,” posts easily.
  • It was followed by TikTok, the Chinese-owned video site, which uses machine-learning models to predict engagement and make recommendations to users.
  • “We see a difference for each platform because each platform has different mechanisms for virality on it,” said Jeff Allen, a former integrity officer at Facebook and a founder and the chief research officer at the Integrity Institute. “The more mechanisms there are for virality on the platform, the more we see misinformation getting additional distribution.”
  • The institute calculated its findings by comparing posts that members of the International Fact-Checking Network have identified as false with the engagement of previous, unflagged posts from the same accounts.
  • Facebook, according to the sample that the institute has studied so far, had the most instances of misinformation but amplified such claims to a lesser degree, in part because sharing posts requires more steps. But some of its newer features are more prone to amplify misinformation, the institute found
  • Facebook’s amplification factor of video content alone is closer to TikTok’s, the institute found. That’s because the platform’s Reels and Facebook Watch, which are video features, “both rely heavily on algorithmic content recommendations” based on engagements, according to the institute’s calculations.
  • Instagram, which like Facebook is owned by Meta, had the lowest amplification rate. There was not yet sufficient data to make a statistically significant estimate for YouTube, according to the institute.
Javier E

His Job Was to Make Instagram Safe for Teens. His 14-Year-Old Showed Him What the App W...

  • The experience of young users on Meta’s Instagram—where Bejar had spent the previous two years working as a consultant—was especially acute. In a subsequent email to Instagram head Adam Mosseri, one statistic stood out: One in eight users under the age of 16 said they had experienced unwanted sexual advances on the platform over the previous seven days.
  • For Bejar, that finding was hardly a surprise. His daughter and her friends had been receiving unsolicited penis pictures and other forms of harassment on the platform since the age of 14, he wrote, and Meta’s systems generally ignored their reports—or responded by saying that the harassment didn’t violate platform rules.
  • “I asked her why boys keep doing that,” Bejar wrote to Zuckerberg and his top lieutenants. “She said if the only thing that happens is they get blocked, why wouldn’t they?”
  • For the well-being of its users, Bejar argued, Meta needed to change course, focusing less on a flawed system of rules-based policing and more on addressing such bad experiences
  • The company would need to collect data on what upset users and then work to combat the source of it, nudging those who made others uncomfortable to improve their behavior and isolating communities of users who deliberately sought to harm others.
  • “I am appealing to you because I believe that working this way will require a culture shift,” Bejar wrote to Zuckerberg—the company would have to acknowledge that its existing approach to governing Facebook and Instagram wasn’t working.
  • During and after Bejar’s time as a consultant, Meta spokesman Andy Stone said, the company has rolled out several product features meant to address some of the Well-Being Team’s findings. Those features include warnings to users before they post comments that Meta’s automated systems flag as potentially offensive, and reminders to be kind when sending direct messages to users like content creators who receive a large volume of messages. 
  • Meta’s classifiers were reliable enough to remove only a low single-digit percentage of hate speech with any degree of precision.
  • Bejar was floored—all the more so when he learned that virtually all of his daughter’s friends had been subjected to similar harassment. “DTF?” a user they’d never met would ask, using shorthand for a vulgar proposition. Instagram acted so rarely on reports of such behavior that the girls no longer bothered reporting them. 
  • Meta’s own statistics suggested that big problems didn’t exist. 
  • Meta had come to approach governing user behavior as an overwhelmingly automated process. Engineers would compile data sets of unacceptable content—things like terrorism, pornography, bullying or “excessive gore”—and then train machine-learning models to screen future content for similar material.
  • While users could still flag things that upset them, Meta shifted resources away from reviewing them. To discourage users from filing reports, internal documents from 2019 show, Meta added steps to the reporting process. Meta said the changes were meant to discourage frivolous reports and educate users about platform rules. 
  • The apparent strong performance of Meta’s automated enforcement relied on what Bejar considered two sleights of hand. The systems didn’t catch anywhere near the majority of banned content—only the majority of what the company ultimately removed
  • “Please don’t talk about my underage tits,” Bejar’s daughter shot back before reporting his comment to Instagram. A few days later, the platform got back to her: The insult didn’t violate its community guidelines.
  • Also buttressing Meta’s statistics were rules written narrowly enough to ban only unambiguously vile material. Meta’s rules didn’t clearly prohibit adults from flooding the comments section on a teenager’s posts with kiss emojis or posting pictures of kids in their underwear, inviting their followers to “see more” in a private Facebook Messenger group. 
  • “Mark personally values freedom of expression first and foremost and would say this is a feature and not a bug,” Rosen responded
  • Narrow rules and unreliable automated enforcement systems left a lot of room for bad behavior—but they made the company’s child-safety statistics look pretty good according to Meta’s metric of choice: prevalence.
  • Defined as the percentage of content viewed worldwide that explicitly violates a Meta rule, prevalence was the company’s preferred measuring stick for the problems users experienced.
  • According to prevalence, child exploitation was so rare on the platform that it couldn’t be reliably estimated, less than 0.05%, the threshold for functional measurement. Content deemed to encourage self-harm, such as eating disorders, was just as minimal, and rule violations for bullying and harassment occurred in just eight of 10,000 views. 
  • “There’s a grading-your-own-homework problem,”
  • Meta defines what constitutes harmful content, so it shapes the discussion of how successful it is at dealing with it.”
  • It could reconsider its AI-generated “beauty filters,” which internal research suggested made both the people who used them and those who viewed the images more self-critical
  • the team built a new questionnaire called BEEF, short for “Bad Emotional Experience Feedback.”
  • A recurring survey of issues 238,000 users had experienced over the past seven days, the effort identified problems with prevalence from the start: Users were 100 times more likely to tell Instagram they’d witnessed bullying in the last week than Meta’s bullying-prevalence statistics indicated they should.
  • “People feel like they’re having a bad experience or they don’t,” one presentation on BEEF noted. “Their perception isn’t constrained by policy.”
  • they seemed particularly common among teens on Instagram.
  • Among users under the age of 16, 26% recalled having a bad experience in the last week due to witnessing hostility against someone based on their race, religion or identity
  • More than a fifth felt worse about themselves after viewing others’ posts, and 13% had experienced unwanted sexual advances in the past seven days. 
  • The vast gap between the low prevalence of content deemed problematic in the company’s own statistics and what users told the company they experienced suggested that Meta’s definitions were off, Bejar argued
  • To minimize content that teenagers told researchers made them feel bad about themselves, Instagram could cap how much beauty- and fashion-influencer content users saw.
  • Proving to Meta’s leadership that the company’s prevalence metrics were missing the point was going to require data the company didn’t have. So Bejar and a group of staffers from the Well-Being Team started collecting it
  • And it could build ways for users to report unwanted contacts, the first step to figuring out how to discourage them.
  • One experiment run in response to BEEF data showed that when users were notified that their comment or post had upset people who saw it, they often deleted it of their own accord. “Even if you don’t mandate behaviors,” said Krieger, “you can at least send signals about what behaviors aren’t welcome.”
  • But among the ranks of Meta’s senior middle management, Bejar and Krieger said, BEEF hit a wall. Managers who had made their careers on incrementally improving prevalence statistics weren’t receptive to the suggestion that the approach wasn’t working. 
  • After three decades in Silicon Valley, he understood that members of the company’s C-Suite might not appreciate a damning appraisal of the safety risks young users faced from its product—especially one citing the company’s own data. 
  • “This was the email that my entire career in tech trained me not to send,” he says. “But a part of me was still hoping they just didn’t know.”
  • “Policy enforcement is analogous to the police,” he wrote in the email Oct. 5, 2021—arguing that it’s essential to respond to crime, but that it’s not what makes a community safe. Meta had an opportunity to do right by its users and take on a problem that Bejar believed was almost certainly industrywide.
  • After Haugen’s airing of internal research, Meta had cracked down on the distribution of anything that would, if leaked, cause further reputational damage. With executives privately asserting that the company’s research division harbored a fifth column of detractors, Meta was formalizing a raft of new rules for employees’ internal communication.
  • Among the mandates for achieving “Narrative Excellence,” as the company called it, was to keep research data tight and never assert a moral or legal duty to fix a problem.
  • “I had to write about it as a hypothetical,” Bejar said. Rather than acknowledging that Instagram’s survey data showed that teens regularly faced unwanted sexual advances, the memo merely suggested how Instagram might help teens if they faced such a problem.
  • The hope that the team’s work would continue didn’t last. The company stopped conducting the specific survey behind BEEF, then laid off most everyone who’d worked on it as part of what Zuckerberg called Meta’s “year of efficiency.”
  • If Meta was to change, Bejar told the Journal, the effort would have to come from the outside. He began consulting with a coalition of state attorneys general who filed suit against the company late last month, alleging that the company had built its products to maximize engagement at the expense of young users’ physical and mental health. Bejar also got in touch with members of Congress about where he believes the company’s user-safety efforts fell short. 
Javier E

Facebook will start telling you when a story may be fake - The Washington Post

  • The social network is going to partner with the Poynter International Fact-Checking Network, which includes groups such as Snopes and the Associated Press, to evaluate articles flagged by Facebook users. If those articles do not pass the smell test for the fact-checkers, Facebook will label that evaluation whenever they are posted or shared, along with a link to the organization that debunked the story.
  • Mosseri said the social network still wants to be a place where people with all kinds of opinions can express themselves but has no interest in being the arbiter of what’s true and what's not for its 1 billion users.
  • The new system will work like this: If a story on Facebook is patently false — saying that a celebrity is dead when they are still alive, for example — then users will see a notice that the story has been disputed or debunked. People who try to share stories that have been found false will also see an alert before they post. Flagged stories will appear lower in the news feed than unflagged stories.
  • Users will also be able to report potentially false stories to Facebook or send messages directly to the person posting a questionable article.
  • The company is focusing, for now, on what Mosseri called the “bottom of the barrel” websites that are purposefully set up to deceive and spread fake news, as well as those that are impersonating other news organizations. “We are not looking to flag legitimate organizations,” Mosseri said. “We’re looking for pages posing as legitimate organizations.” Articles from legitimate sites that are controversial or even wrong should not get flagged, he said.
  • The company will also prioritize checking stories that are getting lots of flags from users and are being shared widely, to go after the biggest targets possible.
  • "From a journalistic side, is it enough? It’s a little late.”
  • Facebook is willing to filter out other content, such as pornography, for which the definition is similarly unclear. There's no clear explanation for why Facebook hasn't decided to apply similar filters to fake news. “I think that’s a little weak,” Tu said. “If you recognize that it’s bad and journalists at the AP say it’s bad, you shouldn’t have it on your site.”
  • Others said Facebook's careful approach may be warranted. "I think we'll have to wait and see early results to determine how effective the strategy is," said Alexios Mantzarlis, of Poynter's International Fact-Checking Network. "In my eyes, erring on the side of caution is not a bad idea with something so complicated," he said
  • Facebook is also trying to crack down on people who have made a business in fake news by tweaking the social network's advertising practices. Any article that has been disputed, for example, cannot be used in an ad. Facebook is also playing around with ways to limit links from publishers with landing pages that are mostly ads — a common tactic for fake-news websites
  • With those measures in place, “we’re hoping financially motivated spammers might move away from fake news,” Mosseri said
  • Paul Horner, a fake news writer who makes a living writing viral hoaxes, said he wasn't immediately worried about Facebook's new crackdown on fake news sites. "It's really easy to start a new site. I have 50 domain names. I have a dedicated server. I can start up a new site within 48 hours," he said, shortly after Facebook announced its new anti-hoax programs.  If his sites, which he describes as "satire"-focused, do end up getting hit too hard, Horner says he has "backup plans."
Javier E

The right has its own version of political correctness. It's just as stifling. - The Wa...

  • Political correctness has become a major bugaboo of the right in the past decade, a rallying cry against all that has gone wrong with liberalism and America. Conservative writers fill volumes complaining how political correctness stifles free expression and promotes bunk social theories about “power structures” based on patriarchy, race and mass victimhood. Forbes charged that it “stifles freedom of speech.” The Daily Caller has gone so far as to claim that political correctness “kills Americans.”
  • But conservatives have their own, nationalist version of PC, their own set of rules regulating speech, behavior and acceptable opinions. I call it “patriotic correctness.” It’s a full-throated, un-nuanced, uncompromising defense of American nationalism, history and cherry-picked ideals. Central to its thesis is the belief that nothing in America can’t be fixed by more patriotism enforced by public shaming, boycotts and policies to cut out foreign and non-American influences.
  • Blaming the liberal or mainstream media and “media bias” is the patriotically correct version of blaming the corporations or capitalism. The patriotically correct notion that they “would rather be governed by the first 2,000 people in the Boston telephone directory than by the 2,000 people on the faculty of Harvard University” because the former have “common sense” and the “intellectual elites” don’t know anything, despite all the evidence to the contrary, can be sustained only in a total bubble.
  • Complaining about political correctness is patriotically correct. The patriotically correct must use the non-word “illegals,” or “illegal immigrant” or “illegal alien” to describe foreigners who broke our immigration laws. Dissenters support “open borders” or “shamnesty” for 30 million illegal alien invaders. The punishment is deportation because “we’re a nation of laws” and they didn’t “get in line,” even though no such line actually exists. Just remember that they are never anti-immigration, only anti-illegal immigration, even when they want to cut legal immigration.
  • Black Lives Matter is racist because it implies that black lives are more important than other lives, but Blue Lives Matter doesn’t imply that cops’ lives are more important than the rest of ours. Banning Islam or Muslim immigration is a necessary security measure, but homosexuals should not be allowed to get married because it infringes on religious liberty. Transgender people could access women’s restrooms for perverted purposes, but Donald Trump walking in on nude underage girls in dressing rooms before a beauty pageant is just “media bias.”
  • Terrorism is an “existential threat,” even though the chance of being killed in a terrorist attack is about 1 in 3.2 million a year. Saying the words “radical Islam” when describing terrorism is an important incantation necessary to defeat that threat. When Chobani yogurt founder Hamdi Ulukaya decides to employ refugees in his factories, it’s because of his ties to “globalist corporate figures.” Waving a Mexican flag on U.S. soil means you hate America, but waving a Confederate flag just means you’re proud of your heritage.
  • Insufficient displays of patriotism among the patriotically correct can result in exclusion from public life and ruined careers. It also restricts honest criticism of failed public policies, diverting blame for things like the war in Iraq to those Americans who didn’t support the war effort enough.
  • Poor white Americans are the victims of economic dislocation and globalization beyond their control, while poor blacks and Hispanics are poor because of their failed cultures. The patriotically correct are triggered when they hear strangers speaking in a language other than English. Does that remind you of the PC duty to publicly shame those who use unacceptable language to describe race, gender or whatever other identity is the victim du jour?
  • The patriotically correct rightly ridicule PC “safe spaces” but promptly retreat to Breitbart or talk radio, where they can have mutually reinforcing homogeneous temper tantrums while complaining about the lack of intellectual diversity on the left.
  • There is no such thing as too much national security, but it’s liberals who want to coddle Americans with a “nanny state.”
  • Those who disagree with the patriotically correct are animated by anti-Americanism, are post-American, or deserve any other of a long list of clunky and vague labels that signal virtue to other members of the patriotic in-group.
  • Every group has implicit rules against certain opinions, actions and language as well as enforcement mechanisms — and the patriotically correct are no exception. But they are different because they are near-uniformly unaware of how they are hewing to a code of speech and conduct similar to the PC lefties they claim to oppose.
  • The modern form of political correctness on college campuses and the media is social tyranny with manners, while patriotic correctness is tyranny without the manners, and its adherents do not hesitate to use the law to advance their goals.
Sean Kirkpatrick

Posting your vote on Social Networks - 1 views

  •  
    On election day, I kept seeing people posting pictures of their ballots and who they were voting for on social networking sites. In this article, the author warns people about the laws in many states that prohibit posting pictures of their ballot. I thought this article was very interesting, as it is very relatable to our generation. I wonder whether people's posting pictures of their ballots on social networks could affect other people's votes and sway them.
Javier E

The Republican Horse Race Is Over, and Journalism Lost - The New York Times - 0 views

  • Wrong, wrong, wrong — to the very end, we got it wrong.
  • in the end, you have to point the finger at national political journalism, which has too often lost sight of its primary directives in this election season: to help readers and viewers make sense of the presidential chaos; to reduce the confusion, not add to it; to resist the urge to put ratings, clicks and ad sales above the imperative of getting it right.
  • The first signs that something was amiss in the coverage of the Tea Party era actually surfaced in the 2014 midterms. Oh, you broadcast network newscast viewers didn’t know we had important elections with huge consequences for the governance of your country that year? You can be forgiven because the broadcast networks hardly covered them.
  • ...6 more annotations...
  • the lesson in Virginia, as the Washington Post reporter Paul Farhi wrote at the time, was that nothing exceeds the value of shoe-leather reporting, given that politics is an essentially human endeavor and therefore can defy prediction and reason.
  • Yet when Mr. Trump showed up on the scene, it was as if that had never happened.
  • It was another thing to declare, as The Huffington Post did, that coverage of his campaign could be relegated to the entertainment section (and to add a disclaimer to articles about him) and still another to give Mr. Trump a “2 percent” chance at the nomination despite strong polls in his favor, as FiveThirtyEight did six months before the first votes were cast.
  • Predictions that far out can be viewed as being all in good fun. But in Mr. Trump’s case, they also arguably sapped the journalistic will to scour his record as aggressively as those of his supposedly more serious rivals. In other words, predictions can have consequences.
  • The problems weren’t at all only due to the reliance on data. Don’t forget those moments that were supposed to have augured Mr. Trump’s collapse: the certainty that once the race narrowed to two or three candidates, Mr. Trump would be through, and what at one point became the likelihood of a contested convention.
  • That’s all the more reason in the coming months to be as sharply focused on the data we don’t have as we are on the data we do have (and maybe watching out for making any big predictions about the fall based on the polling of today). But a good place to start would be to get a good night’s sleep, and then talk to some voters.
grayton downing

Post-Publication Peer Review Mainstreamed | The Scientist Magazine® - 0 views

  • peer review. The process has been blamed for everything from slowing down the communication of new discoveries to introducing woeful biases to the literature
  • peer review does not elevate the quality of published science and that many published research findings are later shown to be false. In response, a growing number of scientists are working to impose a new vision of the scientific process through post-publication review,
  • organized post-publication peer review system could help “clarify experiments, suggest avenues for follow-up work and even catch errors.” If used by a critical mass of scientists, he added, “it could strengthen the scientific process.”  
  • ...2 more annotations...
  • allowing for anonymous comments, PubPeer aims to create an open, debate-friendly environment, while maintaining the rigor of the closed review process currently used by most journals. Its creators, who describe themselves as “early-stage scientists,” have also decided to remain anonymous, citing career concerns.
Javier E

Huffington Post in Limbo at Verizon - NYTimes.com - 0 views

  • The Huffington Post sits at the center of a phenomenon that some describe as the birth of a new media establishment: Several digital start-ups, including BuzzFeed and Vice, are trying to upend news presentation the way cable channels encroached on broadcast television in the 1980s. By that measure, some in the industry say, $1 billion is a reasonable valuation for a site with more than 200 million unique visitors a month, and acquiring it is a smart play for Verizon as it follows other communications companies, like Comcast, in owning its own content.
  • Others see, instead, a frothy market that has led to overly high valuations for media companies, based largely on branding and a relentless focus on audience development techniques.
  • According to a document published in 2013 by the website The Smoking Gun, The Huffington Post was expected to generate $60 million in revenue in 2011, when AOL bought it, with $10 million in Ebitda (earnings before interest, tax, depreciation and amortization) growing to $165 million in revenue and $58 million in Ebitda by 2013. People with knowledge of its current finances said that its annual revenue is now in the hundreds of millions, and that its profitability depends on how generously its recent investments in a global expansion and video are assessed.
Javier E

The Elusive Big Idea - NYTimes.com - 0 views

  • we are living in an increasingly post-idea world — a world in which big, thought-provoking ideas that can’t instantly be monetized are of so little intrinsic value that fewer people are generating them and fewer outlets are disseminating them, the Internet notwithstanding. Bold ideas are almost passé.
  • we live in a post-Enlightenment age in which rationality, science, evidence, logical argument and debate have lost the battle in many sectors, and perhaps even in society generally, to superstition, faith, opinion and orthodoxy. While we continue to make giant technological advances, we may be the first generation to have turned back the epochal clock — to have gone backward intellectually from advanced modes of thinking into old modes of belief.
  • Post-Enlightenment refers to a style of thinking that no longer deploys the techniques of rational thought. Post-idea refers to thinking that is no longer done, regardless of the style.
  • ...2 more annotations...
  • There is the eclipse of the public intellectual in the general media by the pundit who substitutes outrageousness for thoughtfulness, and the concomitant decline of the essay in general-interest magazines. And there is the rise of an increasingly visual culture, especially among the young — a form in which ideas are more difficult to express.
  • a time when we know more than we have ever known, we think about it less.
Javier E

Bile, venom and lies: How I was trolled on the Internet - The Washington Post - 1 views

  • In a comprehensive new study of Facebook that analyzed posts made between 2010 and 2014, a group of scholars found that people mainly shared information that confirmed their prejudices, paying little attention to facts and veracity. (Hat tip to Cass Sunstein, the leading expert on this topic.) The result, the report says, is the “proliferation of biased narratives fomented by unsubstantiated rumors, mistrust and paranoia.”
  • The authors specifically studied trolling — the creation of highly provocative, often false information, with the hope of spreading it widely. The report says that “many mechanisms cause false information to gain acceptance, which in turn generate false beliefs that, once adopted by an individual, are highly resistant to correction.”
  • in recent weeks I was the target of a trolling campaign and saw exactly how it works. It started when an obscure website published a post titled “CNN host Fareed Zakaria calls for jihad rape of white women.”
  • ...3 more annotations...
  • Here is what happened next: Hundreds of people began linking to it, tweeting and retweeting it, and adding their comments, which are too vulgar or racist to repeat. A few ultra-right-wing websites reprinted the story as fact. With each new cycle, the levels of hysteria rose, and people started demanding that I be fired, deported or killed. For a few days, the digital intimidation veered out into the real world. Some people called my house late one night and woke up and threatened my daughters, who are 7 and 12.
  • The people spreading this story were not interested in the facts; they were interested in feeding prejudice. The original story was cleverly written to provide conspiracy theorists with enough ammunition to ignore evidence. It claimed that I had taken down the post after a few hours when I realized it “receive[d] negative attention.”
  • an experiment performed by two psychologists in 1970. They divided students into two groups based on their answers to a questionnaire: high prejudice and low prejudice. Each group was told to discuss controversial issues such as school busing and integrated housing. Then the questions were asked again. “The surveys revealed a striking pattern,” Kolbert noted. “Simply by talking to one another, the bigoted students had become more bigoted and the tolerant more tolerant.” This “group polarization” is now taking place at hyper speed, around the world. It is how radicalization happens and extremism spreads.
anonymous

Opinion | I Don't Want Another Family to Lose a Child the Way We Did - The New York Times - 0 views

  • I Don’t Want Another Family to Lose a Child the Way We Did
  • The thought of suicide is terrifying, but we have to make talking about it a part of everyday life.
  • I always felt so blessed watching my boy-girl twins; even as teenagers they would walk arm in arm down the street, chatting and laughing together.
  • ...33 more annotations...
  • But that blessed feeling evaporated in June of 2019, when I lost my daughter, Frankie, to suicide, three weeks before her high school graduation
  • Ever since that day, I have thought of little else except how I could help the next struggling teenager, the next Frankie.
  • Several days after her passing, we opened our home up to our community, including Frankie’s very large group of teenage friends
  • “What strength Frankie had. It must have taken enormous energy for her to do what she did each day.”
  • That was Frankie. She had the strength to engage in school and in theater, despite her anxiety and depression. She had an ability to connect — emotionally, profoundly — with others, even when she was struggling herself
  • “empathy personified, with quite the fabulous earring collection.”
  • Whether that strength came from her home or somewhere else, or both, Frankie just had a way of drawing out warmth wherever she went.
  • Just as my parents couldn’t predict in the 1980s what seatbelt safety would look like now, I am not sure what suicide prevention should look like in the future.
  • Suicidal thinking, whether it is the result of mental illness, stress, trauma or loss, is actually far more common and difficult to see than many of us realize
  • A June 2020 Centers for Disease Control survey found that one in four 18- to 24-year-olds reported that they had seriously thought about taking their lives in the past 30 days; prepandemic estimates found that just under one in five high schoolers had seriously considered suicide, and just under one in 10 had made at least one suicide attempt during the previous year.
  • Despite 50 years of research, predicting death by suicide is still nearly impossible
  • Like others who have lost a child to suicide, I have spent countless hours going over relentless “what ifs.”
  • Maybe what we need are seatbelts for suicide.
  • “Click it or Ticket” was born in part out of a concern in the 1980s about teenagers dying in car accidents. Just as with suicides today, adults couldn’t predict who would get into a car accident, and one of the best solutions we had — seatbelts — was used routinely, in some estimates, by only 15 percent of the population. Indeed, as children, my siblings and I used to make a game of rolling around in the back of our car, seatbelts ignored.
  • Three decades later, our world is unlike anything I could have imagined as a child. Putting on a seatbelt is the first lesson of driver’s education; cars get inspected annually for working seatbelts; car companies embed those annoying beeping sounds to remind you to buckle your seatbelt
  • But like many who struggle with suicidal thinking, she kept her own pain camouflaged for a long time, perhaps for too long.
  • Most of us (estimates range as high as 91 percent) now wear a seatbelt.
  • But I imagine a world in which every health worker, school professional, employer and religious leader can recognize the signs of suicidal thinking and know how to ask about it, respond to it and offer resources to someone who is struggling
  • When I told Frankie’s orthodontist about her suicide, his response surprised me: “We really don’t come across that in our practice.” Even though orthodontists don’t ask about it, they see children during their early teenage years, when suicidal thinking often begins to emerge. Can you imagine a world in which signs for the prevention hotline and text line are posted for kids to see as they get their braces adjusted?
  • What if the annual teenage pediatric checkup involved a discussion of one-at-a-time pill packaging and boxes to lock up lethal medications, the way there is a discussion of baby-proofing homes when children start to crawl? What if pediatricians handed each adolescent a card with the prevention hotline on it (or better yet, if companies preprogrammed that number into cellphones) and the pediatrician talked through what happens when a teenager calls? What if doctors coached parents on how to ask their teenager, “Are you thinking about suicide?”
  • What if we required and funded every school to put in place one of the existing programs that train teachers and other school professionals to be a resource for struggling students?
  • I recognize that despite progress identifying effective programs to combat suicidal thinking, their success rate and simplicity do not compare with what we see with seatbelts. But that doesn’t mean we shouldn’t do more.
  • Part of doing more also includes making the world more just and caring. To give one example, state-level same-sex-marriage policies that were in place before the Supreme Court legalized same-sex marriage nationally have been linked to reductions in suicide attempts among adolescents, especially among sexual minorities.
  • Just as safer highways and car models make seatbelts more effective, asking about and responding to suicidal thinking is only one part of a solution that also includes attention to societal injustices.
  • I understand, of course, that asking about suicidal thinking is scary. But if it is scary for you to ask about it, it is even scarier for the teenager who is thinking about it.
  • I will never forget sitting with Frankie in the waiting room in the pediatric psychiatric wing on the night I brought her to the inpatient unit, three months before she took her life
  • “You know, I am so glad you finally know.” I could hear the relief in her voice. I just nodded, understandingly, but it broke my heart that she held on to such a painful secret for so long.
  • I find myself inspired by Frankie’s teenage friends, who cared deeply for her and now support one another after her passing.
  • On good days, she would sit on the worn couch in that office, snuggle in a pile of teenagers and discuss plays, schoolwork and their lives.
  • And in that corner space, she would text a friend to help her get to class or, after she had opened up about her struggles, encourage others to open up as well.
  • The fall after Frankie left us, some students decided to remake that hidden corner, dotting the walls with colored Post-it notes. Scrawled on a pink Post-it were the words “you matter”; a yellow one read “it gets better”; an orange one shared a cellphone number to call for help. Tiny Post-it squares had transformed the corner into a space to comfort, heal and support the next struggling teenager.
  • I don’t know if a seatbelt approach would have saved Frankie. And I understand that all the details of such an approach aren’t fully worked out here. But I don’t want us to lose any more children because we weren’t brave enough to take on something that scares us, something we don’t fully understand, something that is much more prevalent than many of us realize.
  • If 17- and 18-year-olds who’ve lost a friend have the strength to imagine a world dotted with healing, then the least we can do as adults is design and build the structure to support them
Javier E

No, Trump's sister did not publicly back him. He was duped by a fake account. - The New... - 0 views

  • That article, on the website of a conservative talk-radio host named Wayne Dupree, quoted a post from a Twitter account named “Betty Trump” that used a photo of Ms. Trump Grau as its profile picture.
  • “This election inspired me to break my silence and speak out on behalf of my family,” the account said in a post on Wednesday. “My brother Don won this election and will fight this to the very end. We’ve always been a family of fighters.”
  • Had the article’s author looked more closely, though, she would have noticed some suspicious details about the account. It was a day old. The photos it used of Ms. Trump Grau were taken from Getty Images and past news articles about her. And since that first post, the account had tweeted increasingly bizarre messages, sharply criticizing Democrats, journalists and Republicans who had questioned the false claim that Mr. Trump was re-elected.
  • ...1 more annotation...
  • The bizarre episode illustrates how easily misinformation spreads online, often with the help of the president himself. Right-wing websites that seek to support the president’s baseless claims, or simply attract clicks so they can sell more ads, often eschew the traditional principles of journalism, such as simple fact-checking. And the social media companies aid the cycle by making it simple to share misinformation, including via fake accounts, and by training their algorithms to promote material that attracts more attention, as sensational and divisive posts often do.
Javier E

Opinion | Barack Obama's smart way to change the disinformation debate - The Washington... - 0 views

  • The former president spoke at Stanford University on April 21 to lay out his vision for fighting disinformation on the Internet. His focus on the subject is fitting; the dusk of his administration marked a turning point from techno-optimism to pessimism after election interference revealed how easily malicious actors could exploit the free flow of information.
  • His diagnosis is on target. The Internet has given us access to more people, more opportunities and more knowledge
  • This has helped activists drum up attention for overlooked causes. It has also enabled the nation’s adversaries to play on our preexisting prejudices and divisions to sow discord
  • ...5 more annotations...
  • Mr. Obama starts where most lawmakers are stuck: Section 230 of the Communications Decency Act, which gives platforms immunity from legal liability for most third-party posts. He suggested a “higher standard of care” for ads than for so-called organic content that everyday users post. This would strike a sensible balance between eviscerating Section 230, making sites accountable for everything they host, and doing nothing.
  • On top of that, “an instant, 24/7 global information stream,” from which audiences can pick and choose material that confirms their biases, has deepened the social divides that bad actors seek to exploit.
  • Mr. Obama identified another problem with the Section 230 talk: homing in on what material platforms do and don’t take down risks missing how the “very design” of these sites privileges polarizing, inflammatory posts.
  • With this, Mr. Obama adds something vital to the mainstream debate over social media regulation, shifting attention away from a debate about whack-a-mole content removal and toward the sites’ underlying structures. His specific suggestions, while fuzzy, also have promise — from slowing down viral material to imposing transparency obligations that would subject social media companies’ algorithms to scrutiny from researchers and regulators.
  • Mr. Obama calls this “democratic oversight.” But the material companies reveal could be highly technical. Ideally, it would get translated into layman’s terms so that everyday people, too, can understand how decisions so significant in their daily lives and the life of the country are made.
Javier E

What Is Mastodon and Why Are People Leaving Twitter for It? - The New York Times - 0 views

  • Mastodon is a part of the Fediverse, or federated universe, a group of federated platforms that share communication protocols.
  • Unlike Twitter, Mastodon presents posts in chronological order, rather than based on an algorithm.
  • It also has no ads; Mastodon is largely crowdfunded
  • ...7 more annotations...
  • Most servers are funded by the people who use them.
  • The servers that Mastodon oversees — Mastodon Social and Mastodon Online — are funded through Patreon, a membership and subscription service platform often used by content creators.
  • Although Mastodon visually resembles Twitter, its user experience is more akin to that of Discord, a talking and texting app where people also join servers that have their own cultures and rules.
  • Unlike Twitter and Discord, Mastodon does not have the ability to make its users, or the people who create servers, do anything.
  • But servers can dictate how they interact with one another — or whether they interact at all in a shared stream of posts. For example, when Gab used Mastodon’s code, Mastodon Social and other independent servers blocked Gab’s server, so posts from Gab did not appear on the feeds of people using those servers.
  • Like an email account, your username includes the name of the server itself. For example, a possible username on Mastodon Social would be janedoe@mastodon.social. Regardless of which server you sign up with, you can interact with people who use other Mastodon servers, or you can switch to another one
  • Once you sign up for an account, you can post “toots,” which are Mastodon’s version of tweets. You can also boost other people’s toots, the equivalent of a retweet.
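The email-like handle format described in the excerpt above (user@server, interoperable across federated servers) can be illustrated with a minimal sketch. This is purely hypothetical helper code for parsing such a handle, not anything from Mastodon's actual codebase:

```python
# Hypothetical sketch: split a Mastodon-style handle into its two parts,
# mirroring the "janedoe@mastodon.social" format the article describes.
def parse_handle(handle: str) -> tuple[str, str]:
    """Split 'user@server' (optionally prefixed with '@') into (user, server)."""
    user, _, server = handle.strip().lstrip("@").partition("@")
    if not user or not server:
        raise ValueError(f"not a full handle: {handle!r}")
    return user, server

print(parse_handle("janedoe@mastodon.social"))   # → ('janedoe', 'mastodon.social')
print(parse_handle("@janedoe@mastodon.social"))  # leading '@' is also accepted
```

The server portion is what distinguishes accounts across the Fediverse: two people named janedoe on different servers are different accounts, much like jane@gmail.com and jane@outlook.com are different email addresses.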
Javier E

Influencers Don't Have to Be Human to Be Believable - WSJ - 0 views

  • Why would consumers look even somewhat favorably upon virtual influencers that make comments about real products?
  • Virtual and human social-media influencers can be equally effective for certain types of posts, the research suggests.
  • The thinking is that virtual influencers can be fun and entertaining and make a brand seem innovative and tech savvy,
  • ...8 more annotations...
  •  virtual influencers can also be cost-effective and provide more flexibility than a human alternative. 
  • “When it comes to an endorsement by a virtual influencer, the followers start questioning the expertness of the influencer on the field of the endorsed product/service,” he says. “Pretending that the influencer has actual experience with the product backfires.”
  • In one part of the study, about 300 participants were shown a social-media post purported to be from an influencer about either ice cream or sunglasses. Then, roughly half were told the influencer was human and half were told she was virtual. Regardless of the product, participants perceived the virtual influencer to be less credible than its “human” counterpart. Participants who were told the influencer was virtual also had a less-positive attitude toward the brand behind the product.
  • When the influencers “can’t really use the brand they are promoting,” it’s hard to see them as trustworthy experts, says Ozdemir.
  • Two groups saw a post with an emotional endorsement where the influencer uses words like love and adore. The other two groups saw a more staid post, focusing on specific software features. In each scenario one group was told the influencer was human and one group was told the influencer was virtual.
  • For the emotional endorsement, participants found the human influencer to be more credible. Participants who were told the influencer was human also had a more positive view of the brand than those who were told the influencer was virtual.
  • For the more factual endorsement, however, there was no statistically significant difference between the two groups when it came to influencer credibility or brand perception.
  • “When it comes to delivering a more factual endorsement, highlighting features that could be found by doing an internet search, participants really didn’t seem to care if the influencer was human or not,”
Javier E

Why it's as hard to escape an echo chamber as it is to flee a cult | Aeon Essays - 0 views

  • there are two very different phenomena at play here, each of which subvert the flow of information in very distinct ways. Let’s call them echo chambers and epistemic bubbles. Both are social structures that systematically exclude sources of information. Both exaggerate their members’ confidence in their beliefs.
  • they work in entirely different ways, and they require very different modes of intervention
  • An epistemic bubble is when you don’t hear people from the other side. An echo chamber is what happens when you don’t trust people from the other side.
  • ...90 more annotations...
  • start with epistemic bubbles
  • That omission might be purposeful
  • But that omission can also be entirely inadvertent. Even if we’re not actively trying to avoid disagreement, our Facebook friends tend to share our views and interests
  • An ‘echo chamber’ is a social structure from which other relevant voices have been actively discredited. Where an epistemic bubble merely omits contrary views, an echo chamber brings its members to actively distrust outsiders.
  • an echo chamber is something like a cult. A cult isolates its members by actively alienating them from any outside sources. Those outside are actively labelled as malignant and untrustworthy.
  • In epistemic bubbles, other voices are not heard; in echo chambers, other voices are actively undermined.
  • The way to break an echo chamber is not to wave “the facts” in the faces of its members. It is to attack the echo chamber at its root and repair that broken trust.
  • Looking to others for corroboration is a basic method for checking whether one has reasoned well or badly
  • They have been in the limelight lately, most famously in Eli Pariser’s The Filter Bubble (2011) and Cass Sunstein’s #Republic: Divided Democracy in the Age of Social Media (2017).
  • The general gist: we get much of our news from Facebook feeds and similar sorts of social media. Our Facebook feed consists mostly of our friends and colleagues, the majority of whom share our own political and cultural views
  • various algorithms behind the scenes, such as those inside Google search, invisibly personalise our searches, making it more likely that we’ll see only what we want to see. These processes all impose filters on information.
  • Such filters aren’t necessarily bad. The world is overstuffed with information, and one can’t sort through it all by oneself: filters need to be outsourced.
  • That’s why we all depend on extended social networks to deliver us knowledge
  • any such informational network needs the right sort of broadness and variety to work
  • Each individual person in my network might be superbly reliable about her particular informational patch but, as an aggregate structure, my network lacks what Sanford Goldberg in his book Relying on Others (2010) calls ‘coverage-reliability’. It doesn’t deliver to me a sufficiently broad and representative coverage of all the relevant information.
  • Epistemic bubbles also threaten us with a second danger: excessive self-confidence.
  • An ‘epistemic bubble’ is an informational network from which relevant voices have been excluded by omission
  • Suppose that I believe that the Paleo diet is the greatest diet of all time. I assemble a Facebook group called ‘Great Health Facts!’ and fill it only with people who already believe that Paleo is the best diet. The fact that everybody in that group agrees with me about Paleo shouldn’t increase my confidence level one bit. They’re not mere copies – they actually might have reached their conclusions independently – but their agreement can be entirely explained by my method of selection.
  • Luckily, though, epistemic bubbles are easily shattered. We can pop an epistemic bubble simply by exposing its members to the information and arguments that they’ve missed.
  • echo chambers are a far more pernicious and robust phenomenon.
  • Jamieson and Cappella’s book is the first empirical study into how echo chambers function
  • echo chambers work by systematically alienating their members from all outside epistemic sources.
  • Their research centres on Rush Limbaugh, a wildly successful conservative firebrand in the United States, along with Fox News and related media
  • His constant attacks on the ‘mainstream media’ are attempts to discredit all other sources of knowledge. He systematically undermines the integrity of anybody who expresses any kind of contrary view.
  • outsiders are not simply mistaken – they are malicious, manipulative and actively working to destroy Limbaugh and his followers. The resulting worldview is one of deeply opposed forces, an all-or-nothing war between good and evil
  • The result is a rather striking parallel to the techniques of emotional isolation typically practised in cult indoctrination
  • cult indoctrination involves new cult members being brought to distrust all non-cult members. This provides a social buffer against any attempts to extract the indoctrinated person from the cult.
  • The echo chamber doesn’t need any bad connectivity to function. Limbaugh’s followers have full access to outside sources of information
  • As Elijah Millgram argues in The Great Endarkenment (2015), modern knowledge depends on trusting long chains of experts. And no single person is in the position to check up on the reliability of every member of that chain
  • Their worldview can survive exposure to those outside voices because their belief system has prepared them for such intellectual onslaught.
  • exposure to contrary views could actually reinforce their views. Limbaugh might offer his followers a conspiracy theory: anybody who criticises him is doing it at the behest of a secret cabal of evil elites, which has already seized control of the mainstream media.
  • Perversely, exposure to outsiders with contrary views can thus increase echo-chamber members’ confidence in their insider sources, and hence their attachment to their worldview.
  • ‘evidential pre-emption’. What’s happening is a kind of intellectual judo, in which the power and enthusiasm of contrary voices are turned against those contrary voices through a carefully rigged internal structure of belief.
  • One might be tempted to think that the solution is just more intellectual autonomy. Echo chambers arise because we trust others too much, so the solution is to start thinking for ourselves.
  • that kind of radical intellectual autonomy is a pipe dream. If the philosophical study of knowledge has taught us anything in the past half-century, it is that we are irredeemably dependent on each other in almost every domain of knowledge
  • Limbaugh’s followers regularly read – but do not accept – mainstream and liberal news sources. They are isolated, not by selective exposure, but by changes in who they accept as authorities, experts and trusted sources.
  • we depend on a vastly complicated social structure of trust. We must trust each other, but, as the philosopher Annette Baier says, that trust makes us vulnerable. Echo chambers operate as a kind of social parasite on that vulnerability, taking advantage of our epistemic condition and social dependency.
  • I am quite confident that there are plenty of echo chambers on the political Left. More importantly, nothing about echo chambers restricts them to the arena of politics
  • The world of anti-vaccination is clearly an echo chamber, and it is one that crosses political lines. I’ve also encountered echo chambers on topics as broad as diet (Paleo!), exercise technique (CrossFit!), breastfeeding, some academic intellectual traditions, and many, many more
  • Here’s a basic check: does a community’s belief system actively undermine the trustworthiness of any outsiders who don’t subscribe to its central dogmas? Then it’s probably an echo chamber.
  • much of the recent analysis has lumped epistemic bubbles together with echo chambers into a single, unified phenomenon. But it is absolutely crucial to distinguish between the two.
  • Epistemic bubbles are rather ramshackle; they go up easily, and they collapse easily
  • Echo chambers are far more pernicious and far more robust. They can start to seem almost like living things. Their belief systems provide structural integrity, resilience and active responses to outside attacks
  • the two phenomena can also exist independently. And of the events we’re most worried about, it’s the echo-chamber effects that are really causing most of the trouble.
  • new data does, in fact, seem to show that people on Facebook actually do see posts from the other side, or that people often visit websites with opposite political affiliation.
  • their basis for evaluation – their background beliefs about whom to trust – are radically different. They are not irrational, but systematically misinformed about where to place their trust.
  • Many people have claimed that we have entered an era of ‘post-truth’.
  • Not only do some political figures seem to speak with a blatant disregard for the facts, but their supporters seem utterly unswayed by evidence. It seems, to some, that truth no longer matters.
  • This is an explanation in terms of total irrationality. To accept it, you must believe that a great number of people have lost all interest in evidence or investigation, and have fallen away from the ways of reason.
  • echo chambers offer a less damning and far more modest explanation. The apparent ‘post-truth’ attitude can be explained as the result of the manipulations of trust wrought by echo chambers.
  • We don’t have to attribute a complete disinterest in facts, evidence or reason to explain the post-truth attitude. We simply have to attribute to certain communities a vastly divergent set of trusted authorities.
  • An echo chamber doesn’t destroy their members’ interest in the truth; it merely manipulates whom they trust and changes whom they accept as trustworthy sources and institutions.
  • in many ways, echo-chamber members are following reasonable and rational procedures of enquiry. They’re engaging in critical reasoning. They’re questioning, they’re evaluating sources for themselves, they’re assessing different pathways to information. They are critically examining those who claim expertise and trustworthiness, using what they already know about the world
  • none of this weighs against the existence of echo chambers. We should not dismiss the threat of echo chambers based only on evidence about connectivity and exposure.
  • Notice how different what’s going on here is from, say, Orwellian doublespeak, a deliberately ambiguous, euphemism-filled language designed to hide the intent of the speaker.
  • echo chambers don’t trade in vague, ambiguous pseudo-speech. We should expect that echo chambers would deliver crisp, clear, unambiguous claims about who is trustworthy and who is not
  • clearly articulated conspiracy theories, and crisply worded accusations of an outside world rife with untrustworthiness and corruption.
  • Once an echo chamber starts to grip a person, its mechanisms will reinforce themselves.
  • In an epistemically healthy life, the variety of our informational sources will put an upper limit to how much we’re willing to trust any single person. Everybody’s fallible; a healthy informational network tends to discover people’s mistakes and point them out. This puts an upper ceiling on how much you can trust even your most beloved leader
  • Inside an echo chamber, that upper ceiling disappears.
  • Being caught in an echo chamber is not always the result of laziness or bad faith. Imagine, for instance, that somebody has been raised and educated entirely inside an echo chamber
  • when the child finally comes into contact with the larger world – say, as a teenager – the echo chamber’s worldview is firmly in place. That teenager will distrust all sources outside her echo chamber, and she will have gotten there by following normal procedures for trust and learning.
  • It certainly seems like our teenager is behaving reasonably. She could be going about her intellectual life in perfectly good faith. She might be intellectually voracious, seeking out new sources, investigating them, and evaluating them using what she already knows.
  • The worry is that she’s intellectually trapped. Her earnest attempts at intellectual investigation are led astray by her upbringing and the social structure in which she is embedded.
  • Echo chambers might function like addiction, under certain accounts. It might be irrational to become addicted, but all it takes is a momentary lapse – once you’re addicted, your internal landscape is sufficiently rearranged such that it’s rational to continue with your addiction
  • Similarly, all it takes to enter an echo chamber is a momentary lapse of intellectual vigilance. Once you’re in, the echo chamber’s belief systems function as a trap, making future acts of intellectual vigilance only reinforce the echo chamber’s worldview.
  • There is at least one possible escape route, however. Notice that the logic of the echo chamber depends on the order in which we encounter the evidence. An echo chamber can bring our teenager to discredit outside beliefs precisely because she encountered the echo chamber’s claims first. Imagine a counterpart to our teenager who was raised outside of the echo chamber and exposed to a wide range of beliefs. Our free-range counterpart would, when she encounters that same echo chamber, likely see its many flaws
  • Those caught in an echo chamber are giving far too much weight to the evidence they encounter first, just because it’s first. Rationally, they should reconsider their beliefs without that arbitrary preference. But how does one enforce such informational a-historicity?
  • The escape route is a modified version of René Descartes’s infamous method.
  • Meditations on First Philosophy (1641). He had come to realise that many of the beliefs he had acquired in his early life were false. But early beliefs lead to all sorts of other beliefs, and any early falsehoods he’d accepted had surely infected the rest of his belief system.
  • The only solution, thought Descartes, was to throw all his beliefs away and start over again from scratch.
  • He could start over, trusting nothing and no one except those things that he could be entirely certain of, and stamping out those sneaky falsehoods once and for all. Let’s call this the Cartesian epistemic reboot.
  • Notice how close Descartes’s problem is to our hapless teenager’s, and how useful the solution might be. Our teenager, like Descartes, has problematic beliefs acquired in early childhood. These beliefs have infected outwards, infesting that teenager’s whole belief system. Our teenager, too, needs to throw everything away, and start over again.
  • Let’s call the modernised version of Descartes’s methodology the social-epistemic reboot.
  • when she starts from scratch, we won’t demand that she trust only what she’s absolutely certain of, nor will we demand that she go it alone
  • For the social reboot, she can proceed, after throwing everything away, in an utterly mundane way – trusting her senses, trusting others. But she must begin afresh socially – she must reconsider all possible sources of information with a presumptively equanimous eye. She must take the posture of a cognitive newborn, open and equally trusting to all outside sources
  • we’re not asking people to change their basic methods for learning about the world. They are permitted to trust, and trust freely. But after the social reboot, that trust will not be narrowly confined and deeply conditioned by the particular people they happened to be raised by.
  • Such a profound deep-cleanse of one’s whole belief system seems to be what’s actually required to escape. Look at the many stories of people leaving cults and echo chambers
  • Take, for example, the story of Derek Black in Florida – raised by a neo-Nazi father, and groomed from childhood to be a neo-Nazi leader. Black left the movement by, basically, performing a social reboot. He completely abandoned everything he’d believed in, and spent years building a new belief system from scratch. He immersed himself broadly and open-mindedly in everything he’d missed – pop culture, Arabic literature, the mainstream media, rap – all with an overall attitude of generosity and trust.
  • It was the project of years and a major act of self-reconstruction, but those extraordinary lengths might just be what’s actually required to undo the effects of an echo-chambered upbringing.
  • we need to attack the root, the systems of discredit themselves, and restore trust in some outside voices.
  • Stories of actual escapes from echo chambers often turn on particular encounters – moments when the echo-chambered individual starts to trust somebody on the outside.
  • Black’s is a case in point. By high school, he was already something of a star on neo-Nazi media, with his own radio talk-show. He went on to college, openly neo-Nazi, and was shunned by almost every other student in his community college. But then Matthew Stevenson, a Jewish fellow undergraduate, started inviting Black to Stevenson’s Shabbat dinners. In Black’s telling, Stevenson was unfailingly kind, open and generous, and slowly earned Black’s trust. This was the seed, says Black, that led to a massive intellectual upheaval – a slow-dawning realisation of the depths to which he had been misled
  • Similarly, accounts of people leaving echo-chambered homophobia rarely involve them encountering some institutionally reported fact. Rather, they tend to revolve around personal encounters – a child, a family member, a close friend coming out.
  • These encounters matter because a personal connection comes with a substantial store of trust.
  • We don’t simply trust people as educated experts in a field – we rely on their goodwill. And this is why trust, rather than mere reliability, is the key concept
  • goodwill is a general feature of a person’s character. If I demonstrate goodwill in action, then you have some reason to think that I also have goodwill in matters of thought and knowledge.
  • If one can demonstrate goodwill to an echo-chambered member – as Stevenson did with Black – then perhaps one can start to pierce that echo chamber.
  • the path I’m describing is a winding, narrow and fragile one. There is no guarantee that such trust can be established, and no clear path to its being established systematically.
  • what we’ve found here isn’t an escape route at all. It depends on the intervention of another. This path is not even one an echo-chamber member can trigger on her own; it is only a whisper-thin hope for rescue from the outside.