TOK Friends: Group items tagged nudge

huffem4

Twitter aims to limit people sharing articles they have not read | Twitter | The Guardian - 1 views

  • “Study: 70% of Facebook users only read the headline of science stories before commenting” – the fake news website the Science Post has racked up a healthy 127,000 shares for the article which is almost entirely lorem ipsum filler text.
  • Twitter’s solution is not to ban such retweets, but to inject “friction” into the process, in order to try to nudge some users into rethinking their actions on the social network.
  • In May, the company began experimenting with asking users to “revise” their replies if they were about to send tweets with “harmful language” to other people.
  • It is an approach the company has been taking more frequently recently, in an attempt to improve “platform health” without facing accusations of censorship.
  • Twitter is trying to stop people from sharing articles they have not read, in an experiment the company hopes will “promote informed discussion” on social media.
  • We’re trying to encourage people to rethink their behaviour and rethink their language before posting because they often are in the heat of the moment and they might say something they regret
  • A 2016 study from computer scientists at Columbia University and Microsoft found that 59% of links posted on Twitter are never clicked. 
anniina03

Coronavirus: Why some countries wear face masks and others don't - BBC News - 0 views

  • Since the start of the coronavirus outbreak some places have fully embraced wearing face masks, and anyone caught without one risks becoming a social pariah. But in many other parts of the world, from the UK and the US to Sydney and Singapore, it's still perfectly acceptable to walk around bare-faced.
  • the official advice from the World Health Organization has been clear. Only two types of people should wear masks: those who are sick and show symptoms, and those who are caring for people who are suspected to have the coronavirus.
  • a mask is not seen as reliable protection, given that current research shows the virus is spread by droplets and contact with contaminated surfaces. So it could protect you, but only in certain situations such as when you're in close quarters with others where someone infected might sneeze or cough near your face.
  • Removing a mask requires special attention to avoid hand contamination, and it could also breed a false sense of security.
  • in some parts of Asia everyone now wears a mask by default - it is seen as safer and more considerate.
  • In mainland China, Hong Kong, Japan, Thailand and Taiwan, the broad assumption is that anyone could be a carrier of the virus, even healthy people. So in the spirit of solidarity, you need to protect others from yourself.
  • In East Asia, many people are used to wearing masks when they are sick or when it's hayfever season, because it's considered impolite to be sneezing or coughing openly.
  • The 2003 Sars virus outbreak, which affected several countries in the region, also drove home the importance of wearing masks, particularly in Hong Kong, where many died as a result of the virus. So one key difference between these societies and Western ones, is that they have experienced contagion before - and the memories are still fresh and painful.
  • Some argue that ubiquitous mask wearing, as a very visual reminder of the dangers of the virus, could actually act as a "behavioural nudge" to you and others for overall better personal hygiene.
  • But there are downsides of course. Some places such as Japan, Indonesia and Thailand are facing shortages at the moment, and South Korea has had to ration out masks.
  • There is the fear that people may end up re-using masks - which is unhygienic - use masks sold on the black market, or wear homemade masks, which could be of inferior quality and essentially useless.
  • People who do not wear masks in these places have also been stigmatised, to the point that they are shunned and blocked from shops and buildings.
  • In countries where mask wearing is not the norm, such as the West, those who do wear masks have been shunned or even attacked. It hasn't helped that many of these mask wearers are Asians.
Javier E

Free Speech and Civic Virtue between "Fake News" and "Wokeness" | History News Network - 1 views

  • none of these arguments reaches past adversarial notions of democracy. They all characterize free speech as a matter of conflicting rights-claims and competing factions.
  • As long as political polarization precludes rational consensus, she argues, we are left to “[make] personal choices and pronouncements regarding what we are willing (or unwilling) to tolerate, in an attempt to slightly nudge the world in our preferred direction.” Notably, she makes no mention of how we might discern the validity of those preferences or how we might arbitrate between them in cases of conflict.
  • Free speech advocates are hypocritical or ignore some extenuating context, they claim, while those stifling disagreeable or offensive views are merely rectifying past injustices or paying their opponents back in kind, operating practically in a flawed public sphere.
  • It is telling, however, that the letter’s critics focus on speakers and what they deserve to say far more than the listening public and what we deserve to hear
  • In Free Speech and Its Relation to Self-Government (1948), Meiklejohn challenges us to approach public discourse from the perspective of the “good man”: that is to say, the virtuous citizen
  • One cannot appreciate the freedom of speech, he writes, unless one sees it as an act of collective deliberation, carried out by “a man who, in his political activities, is not merely fighting for what…he can get, but is eagerly and generously serving the common welfare”
  • Free speech is not only about discovering truth, or encouraging ethical individualism, or protecting minority opinions—liberals’ usual lines of defense—it is ultimately about binding our fate to others’ by “sharing” the truth with our fellow citizens
  • Sharing truth requires mutual respect and a jealous defense of intellectual freedom, so that “no idea, no opinion, no doubt, no belief, no counter belief, no relevant information” is withheld from the electorate
  • For their part, voters must judge these arguments individually, through introspection, virtue, and meditation on the common good. 
  • The “marketplace of ideas” is dangerous because it relieves citizens of exactly these duties. As Meiklejohn writes: “As separate thinkers, we have no obligation to test our thinking, to make sure that it is worthy of a citizen who is one of the ‘rulers of the nation.’ That testing is to be done, we believe, not by us, but by ‘the competition of the market.’”
  • this is precisely the sort of self-interested posturing that many on the Left resent in their opponents, but which they now propose to embrace as their own, casually accepting the notion that their fellow citizens are incapable of exercising public reason or considering alternative viewpoints with honesty, bravery, humility, and compassion. 
  • In practice, curtailing public speech is likely to worsen polarization and further empower dominant cultural interests. As an ideal (or a lack thereof), it undermines the intelligibility and mutual respect that form the very basis of citizenship.
  • political polarization has induced Americans to abandon “truth-directed methods of persuasion”—such as argumentation and evidence—for a form of non-rational “messaging,” in which “every speech act is classified as friend or foe… and in which very little faith exists as to the rational faculties of those being spoken to.”
  • “In such a context,” she writes, “even the cry for ‘free speech’ invites a nonliteral interpretation, as being nothing but the most efficient way for its advocates to acquire or consolidate power.”
  • Segments of the Right have pushed this sort of political messaging to its cynical extremes—taking Donald Trump’s statements “seriously but not literally” or taking antagonistic positions simply to “own the libs.”
  • Rather than assuming the supremacy of our own opinions or aspersing the motives of those with whom we disagree, our duty as Americans is to think with, learn from, and correct each other.
  • some critics of the Harper’s letter seem eager to reduce all public debate to a form of power politics
  • Trans activist Julia Serano merely punctuates the tendency when she writes that calls for free speech represent a “misconception that we, as a society, are all in the midst of some grand rational debate, and that marginalized people simply need to properly plea our case for acceptance, and once we do, reason-minded people everywhere will eventually come around. This notion is utterly ludicrous.”
  • one could say that critics of the Harper’s letter take the “bad man” as their unit of analysis. By their lights, all participants in public debate are prejudiced, particular, and self-interested
Javier E

There Is More to Us Than Just Our Brains - The New York Times - 0 views

  • we are less like data processing machines and more like soft-bodied mollusks, picking up cues from within and without and transforming ourselves accordingly.
  • Still, we “insist that the brain is the sole locus of thinking, a cordoned-off space where cognition happens, much as the workings of my laptop are sealed inside its aluminum case,”
  • We get constant messages about what’s going on inside our bodies, sensations we can either attend to or ignore. And we belong to tribes that cosset and guide us
  • we’re networked organisms who move around in shifting surroundings, environments that have the power to transform our thinking
  • Annie Murphy Paul’s new book, “The Extended Mind,” which exhorts us to use our entire bodies, our surroundings and our relationships to “think outside the brain.”
  • In 2011, she published “Origins,” which focused on all the ways we are shaped by the environment, before birth and minute to minute thereafter.
  • “In the nature-nurture dynamic, nurture begins at the time of conception. The food the mother eats, the air she breathes, the water she drinks, the stress or trauma she experiences — all may affect her child for better or worse, over the decades to come.”
  • a down-to-earth take on the science of epigenetics — how environmental signals become catalysts for gene expression
  • the parallel to this latest book is that the boundaries we commonly assume to be fixed are actually squishy. The moment of a child’s birth, her I.Q. scores or fMRI snapshots of what’s going on inside her brain — all are encroached upon and influenced by outside forces.
  • awareness of our internal signals, such as exactly when our hearts beat, or how cold and clammy our hands are, can boost our performance at the poker table or in the financial markets, and even improve our pillow talk
  • “Though we typically think of the brain as telling the body what to do, just as much does the body guide the brain with an array of subtle nudges and prods. One psychologist has called this guide our ‘somatic rudder,’
  • The “body scan” aspect of mindfulness meditation that has been deployed by the behavioral medicine pioneer Jon Kabat-Zinn may help people lower their heart rates and blood pressure,
  • techniques that help us pinpoint their signals can foster well-being
  • Tania Singer has shown how the neural circuitry underlying compassion is strengthened by meditation practice
  • our thoughts “are powerfully shaped by the way we move our bodies.” Gestures help us understand spatial concepts; indeed, “without gesture as an aid, students may fail to understand spatial ideas at all,”
  • looking out on grassy expanses near loose clumps of trees and a source of water helps us solve problems. “Passive attention,” she writes, is “effortless: diffuse and unfocused, it floats from object to object, topic to topic. This is the kind of attention evoked by nature, with its murmuring sounds and fluid motions; psychologists working in the tradition of James call this state of mind ‘soft fascination.’”
  • The chapters on the ways natural and built spaces reflect universal preferences and enhance the thinking process felt like a respite
Javier E

His Job Was to Make Instagram Safe for Teens. His 14-Year-Old Showed Him What the App W... - 0 views

  • The experience of young users on Meta’s Instagram—where Bejar had spent the previous two years working as a consultant—was especially acute. In a subsequent email to Instagram head Adam Mosseri, one statistic stood out: One in eight users under the age of 16 said they had experienced unwanted sexual advances on the platform over the previous seven days.
  • For Bejar, that finding was hardly a surprise. His daughter and her friends had been receiving unsolicited penis pictures and other forms of harassment on the platform since the age of 14, he wrote, and Meta’s systems generally ignored their reports—or responded by saying that the harassment didn’t violate platform rules.
  • “I asked her why boys keep doing that,” Bejar wrote to Zuckerberg and his top lieutenants. “She said if the only thing that happens is they get blocked, why wouldn’t they?”
  • For the well-being of its users, Bejar argued, Meta needed to change course, focusing less on a flawed system of rules-based policing and more on addressing such bad experiences
  • The company would need to collect data on what upset users and then work to combat the source of it, nudging those who made others uncomfortable to improve their behavior and isolating communities of users who deliberately sought to harm others.
  • “I am appealing to you because I believe that working this way will require a culture shift,” Bejar wrote to Zuckerberg—the company would have to acknowledge that its existing approach to governing Facebook and Instagram wasn’t working.
  • During and after Bejar’s time as a consultant, Meta spokesman Andy Stone said, the company has rolled out several product features meant to address some of the Well-Being Team’s findings. Those features include warnings to users before they post comments that Meta’s automated systems flag as potentially offensive, and reminders to be kind when sending direct messages to users like content creators who receive a large volume of messages. 
  • Meta’s classifiers were reliable enough to remove only a low single-digit percentage of hate speech with any degree of precision.
  • Bejar was floored—all the more so when he learned that virtually all of his daughter’s friends had been subjected to similar harassment. “DTF?” a user they’d never met would ask, using shorthand for a vulgar proposition. Instagram acted so rarely on reports of such behavior that the girls no longer bothered reporting them. 
  • Meta’s own statistics suggested that big problems didn’t exist. 
  • Meta had come to approach governing user behavior as an overwhelmingly automated process. Engineers would compile data sets of unacceptable content—things like terrorism, pornography, bullying or “excessive gore”—and then train machine-learning models to screen future content for similar material.
  • While users could still flag things that upset them, Meta shifted resources away from reviewing them. To discourage users from filing reports, internal documents from 2019 show, Meta added steps to the reporting process. Meta said the changes were meant to discourage frivolous reports and educate users about platform rules. 
  • The outperformance of Meta’s automated enforcement relied on what Bejar considered two sleights of hand. The systems didn’t catch anywhere near the majority of banned content—only the majority of what the company ultimately removed
  • “Please don’t talk about my underage tits,” Bejar’s daughter shot back before reporting his comment to Instagram. A few days later, the platform got back to her: The insult didn’t violate its community guidelines.
  • Also buttressing Meta’s statistics were rules written narrowly enough to ban only unambiguously vile material. Meta’s rules didn’t clearly prohibit adults from flooding the comments section on a teenager’s posts with kiss emojis or posting pictures of kids in their underwear, inviting their followers to “see more” in a private Facebook Messenger group. 
  • “Mark personally values freedom of expression first and foremost and would say this is a feature and not a bug,” Rosen responded
  • Narrow rules and unreliable automated enforcement systems left a lot of room for bad behavior—but they made the company’s child-safety statistics look pretty good according to Meta’s metric of choice: prevalence.
  • Defined as the percentage of content viewed worldwide that explicitly violates a Meta rule, prevalence was the company’s preferred measuring stick for the problems users experienced.
  • According to prevalence, child exploitation was so rare on the platform that it couldn’t be reliably estimated, less than 0.05%, the threshold for functional measurement. Content deemed to encourage self-harm, such as eating disorders, was just as minimal, and rule violations for bullying and harassment occurred in just eight of 10,000 views. 
  • “There’s a grading-your-own-homework problem,”
  • Meta defines what constitutes harmful content, so it shapes the discussion of how successful it is at dealing with it.”
  • It could reconsider its AI-generated “beauty filters,” which internal research suggested made both the people who used them and those who viewed the images more self-critical
  • the team built a new questionnaire called BEEF, short for “Bad Emotional Experience Feedback.”
  • A recurring survey of issues 238,000 users had experienced over the past seven days, the effort identified problems with prevalence from the start: Users were 100 times more likely to tell Instagram they’d witnessed bullying in the last week than Meta’s bullying-prevalence statistics indicated they should.
  • “People feel like they’re having a bad experience or they don’t,” one presentation on BEEF noted. “Their perception isn’t constrained by policy.”
  • they seemed particularly common among teens on Instagram.
  • Among users under the age of 16, 26% recalled having a bad experience in the last week due to witnessing hostility against someone based on their race, religion or identity
  • More than a fifth felt worse about themselves after viewing others’ posts, and 13% had experienced unwanted sexual advances in the past seven days. 
  • The vast gap between the low prevalence of content deemed problematic in the company’s own statistics and what users told the company they experienced suggested that Meta’s definitions were off, Bejar argued (a toy calculation after this list sketches how such a gap can arise)
  • To minimize content that teenagers told researchers made them feel bad about themselves, Instagram could cap how much beauty- and fashion-influencer content users saw.
  • Proving to Meta’s leadership that the company’s prevalence metrics were missing the point was going to require data the company didn’t have. So Bejar and a group of staffers from the Well-Being Team started collecting it
  • And it could build ways for users to report unwanted contacts, the first step to figuring out how to discourage them.
  • One experiment run in response to BEEF data showed that when users were notified that their comment or post had upset people who saw it, they often deleted it of their own accord. “Even if you don’t mandate behaviors,” said Krieger, “you can at least send signals about what behaviors aren’t welcome.”
  • But among the ranks of Meta’s senior middle management, Bejar and Krieger said, BEEF hit a wall. Managers who had made their careers on incrementally improving prevalence statistics weren’t receptive to the suggestion that the approach wasn’t working. 
  • After three decades in Silicon Valley, he understood that members of the company’s C-Suite might not appreciate a damning appraisal of the safety risks young users faced from its product—especially one citing the company’s own data. 
  • “This was the email that my entire career in tech trained me not to send,” he says. “But a part of me was still hoping they just didn’t know.”
  • “Policy enforcement is analogous to the police,” he wrote in the email Oct. 5, 2021—arguing that it’s essential to respond to crime, but that it’s not what makes a community safe. Meta had an opportunity to do right by its users and take on a problem that Bejar believed was almost certainly industrywide.
  • After Haugen’s airing of internal research, Meta had cracked down on the distribution of anything that would, if leaked, cause further reputational damage. With executives privately asserting that the company’s research division harbored a fifth column of detractors, Meta was formalizing a raft of new rules for employees’ internal communication.
  • Among the mandates for achieving “Narrative Excellence,” as the company called it, was to keep research data tight and never assert a moral or legal duty to fix a problem.
  • “I had to write about it as a hypothetical,” Bejar said. Rather than acknowledging that Instagram’s survey data showed that teens regularly faced unwanted sexual advances, the memo merely suggested how Instagram might help teens if they faced such a problem.
  • The hope that the team’s work would continue didn’t last. The company stopped conducting the specific survey behind BEEF, then laid off most everyone who’d worked on it as part of what Zuckerberg called Meta’s “year of efficiency.”
  • If Meta was to change, Bejar told the Journal, the effort would have to come from the outside. He began consulting with a coalition of state attorneys general who filed suit against the company late last month, alleging that the company had built its products to maximize engagement at the expense of young users’ physical and mental health. Bejar also got in touch with members of Congress about where he believes the company’s user-safety efforts fell short. 
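
The scale of the mismatch Bejar describes is easier to see with a back-of-the-envelope calculation. Below is a minimal Python sketch contrasting a per-view prevalence metric with a per-user survey rate; every figure except the 238,000-user survey size quoted above is hypothetical, picked only to reproduce the rough ratios the annotations report. Part of the gap is definitional: prevalence counts rule-violating views, while the survey counts people who felt harmed, so the two can legitimately diverge by orders of magnitude.

```python
# Toy illustration (not Meta's code or data) of why a per-view "prevalence"
# metric and a per-user survey can tell very different stories. All numbers
# are hypothetical except the 238,000 survey size quoted in the annotations;
# they are chosen only to mirror the reported ratios (bullying prevalence of
# ~8 in 10,000 views vs. users reporting bad experiences ~100x as often).

def prevalence(violating_views: int, total_views: int) -> float:
    """Share of content views that explicitly violate a platform rule."""
    return violating_views / total_views

def survey_rate(users_reporting: int, users_surveyed: int) -> float:
    """Share of surveyed users reporting a bad experience in the past week."""
    return users_reporting / users_surveyed

total_views = 10_000_000        # hypothetical week of content views
violating_views = 8_000         # ~8 per 10,000 views, per the article's figure

users_surveyed = 238_000        # BEEF survey size, per the article
users_reporting = 19_000        # hypothetical: ~8% recall witnessing bullying

p = prevalence(violating_views, total_views)
s = survey_rate(users_reporting, users_surveyed)

print(f"rule-based prevalence:       {p:.2%} of views")   # 0.08% of views
print(f"survey-reported experience:  {s:.2%} of users")   # ~7.98% of users
print(f"ratio: {s / p:.0f}x")                             # ~100x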
Javier E

How 2020 Forced Facebook and Twitter to Step In - The Atlantic - 0 views

  • mainstream platforms learned their lesson, accepting that they should intervene aggressively in more and more cases when users post content that might cause social harm.
  • During the wildfires in the American West in September, Facebook and Twitter took down false claims about their cause, even though the platforms had not done the same when large parts of Australia were engulfed in flames at the start of the year
  • Twitter, Facebook, and YouTube cracked down on QAnon, a sprawling, incoherent, and constantly evolving conspiracy theory, even though its borders are hard to delineate.
  • Content moderation comes to every content platform eventually, and platforms are starting to realize this faster than ever.
  • Nothing symbolizes this shift as neatly as Facebook’s decision in October (and Twitter’s shortly after) to start banning Holocaust denial. Almost exactly a year earlier, Zuckerberg had proudly tied himself to the First Amendment in a widely publicized “stand for free expression” at Georgetown University.
  • The evolution continues. Facebook announced earlier this month that it will join platforms such as YouTube and TikTok in removing, not merely labeling or down-ranking, false claims about COVID-19 vaccines.
  • the pandemic also showed that complete neutrality is impossible. Even though it’s not clear that removing content outright is the best way to correct misperceptions, Facebook and other platforms plainly want to signal that, at least in the current crisis, they don’t want to be seen as feeding people information that might kill them.
  • When internet platforms announce new policies, assessing whether they can and will enforce them consistently has always been difficult. In essence, the companies are grading their own work. But too often what can be gleaned from the outside suggests that they’re failing.
  • It tweaked its algorithm to boost authoritative sources in the news feed and turned off recommendations to join groups based around political or social issues. Facebook is reversing some of these steps now, but it cannot make people forget this toolbox exists in the future
  • As platforms grow more comfortable with their power, they are recognizing that they have options beyond taking posts down or leaving them up. In addition to warning labels, Facebook implemented other “break glass” measures to stem misinformation as the election approached.
  • Platforms don’t deserve praise for belatedly noticing dumpster fires that they helped create and affixing unobtrusive labels to them
  • Warning labels for misinformation might make some commentators feel a little better, but whether labels actually do much to contain the spread of false information is still unknown.
  • News reporting suggests that insiders at Facebook knew they could and should do more about misinformation, but higher-ups vetoed their ideas. YouTube barely acted to stem the flood of misinformation about election results on its platform.
  • Even before the pandemic, YouTube had begun adjusting its recommendation algorithm to reduce the spread of borderline and harmful content, and is introducing pop-up nudges to encourage users
  • And if 2020 finally made clear to platforms the need for greater content moderation, it also exposed the inevitable limits of content moderation.
  • Down-ranking, labeling, or deleting content on an internet platform does not address the social or political circumstances that caused it to be posted in the first place
  • even the most powerful platform will never be able to fully compensate for the failures of other governing institutions or be able to stop the leader of the free world from constructing an alternative reality when a whole media ecosystem is ready and willing to enable him. As Renée DiResta wrote in The Atlantic last month, “reducing the supply of misinformation doesn’t eliminate the demand.”
  • Even so, this year’s events showed that nothing is innate, inevitable, or immutable about platforms as they currently exist. The possibilities for what they might become—and what role they will play in society—are limited more by imagination than any fixed technological constraint, and the companies appear more willing to experiment than ever.
Javier E

Kids and Social Media: a Mental Health Crisis or Moral Panic? - 0 views

  • given the range of evidence, and the fact that the biggest increases relate to a specific group (teenage girls) and a specific set of issues clustered around anxiety and body image, I would assign a high probability to it being a real issue. Especially as it fits the anecdotal conversations I have with headteachers and parents.
  • Is social media the cause?
  • One of the most commonly identified culprits is social media. Until recently I’ve been sceptical for two reasons. First I’m allergic to moral panics.
  • Secondly as Stuart Ritchie points out in this excellent article, to date the evidence assembled by proponents of the social media theory like Jonathan Haidt and Jean Twenge, has shown correlations not causal relationships. Yes, it seems that young people who use social media a lot have worse mental health, but that could easily be because young people with worse mental health choose to use social media more!  
  • recently I’ve shifted to thinking it probably is a major cause for three reasons:
  • 1. I can’t think of anything else that fits. Other suggested causes just don’t work.
  • Social media does fit, the big increase in take up maps well on to the mental health data and it happened everywhere in rich countries at the same time. The most affected group, teenage girls, are also the ones who report that social media makes them more anxious and body conscious in focus groups
  • It is of course true that correlation doesn’t prove anything, but if there’s only one candidate that correlates strongly, it’s pretty likely there’s a relationship.
  • 2. There is no doubt that young people are spending a huge amount of time online now. And that, therefore, must have replaced other activities that involve being out with friends in real life. Three-quarters of 12-year-olds now have a social media profile and 95% of teenagers use social media regularly. Over half who say they’ve been bullied say it was on social media.
  • 3. We finally have the first evidence of a direct causal relationship via a very clever US study using the staged rollout of Facebook across US college campuses to assess the impact on mental health. Not only does it show that mental illness increased after the introduction of Facebook but it also shows that it was particularly pronounced amongst those who were more likely to view themselves unfavourably alongside their peers due to being e.g. overweight or having lower socio-economic status. It is just one study but it nudges me even further towards thinking this is a major cause of the problem.
  • I have blocked my (12 year old) twins from all social media apps and will hold out as long as possible. The evidence isn’t yet rock solid but it’s solid enough to make me want to protect them as best I can.