Digit_al Society / Group items tagged moderation

dr tech

TikTok battles to remove video of livestreamed suicide | TikTok | The Guardian

  •  "TikTok is battling to remove a graphic video of a livestreamed suicide, after the footage was uploaded to the service on Sunday night from Facebook, where it was initially broadcast. Although the footage was rapidly taken down from TikTok, users spent much of Monday re-uploading it, initially unchanged, but later incorporated into so-called bait-and-switch videos, which are designed to shock and upset unsuspecting users."
dr tech

Twitter is developing a new misinfo moderation tool called Birdwatch

  •  "As Americans continue to grapple with media distrust, conspiracy theories, bots, trolls, and general panic amid multiple unprecedented crises, Twitter is once again trying a new method of identifying misinformation. A new feature in development at the social media platform, called "Birdwatch," was first reported by reverse engineer Jane Manchun Wong (h/t Tech Crunch) in early August."
dr tech

Russia's trolling on Ukraine gets 'incredible traction' on TikTok | Russia | The Guardian

  •  "Russia's online trolling operation is becoming increasingly decentralised and is gaining "incredible traction" on TikTok with misinformation aimed at sowing doubt over events in Ukraine, a US social media researcher has warned."
dr tech

TechScape: 'Lives are ruined in an afternoon' - social media and the Huw Edwards story ...

  •  "In some respects, singling out Twitter is unfair: it was a collective failure of social media. People were able to name Edwards as the BBC presenter with impunity in social media comment sections. TikTok suggested Edwards and other BBC presenters' names as "hot" search terms, appending the fire emoji to their names. Google showed news stories and videos about the then-unnamed BBC presenter to people who searched for Huw Edwards' name, connecting him to the scandal."
dr tech

Distressing Annecy footage put social media's self-regulation to the test | France | Th...

  •  "Most social media users know to self-regulate when violent events such as terror attacks occur: don't share distressing footage; don't spread unfounded rumours. But in the aftermath of the Annecy attack some inevitably acted without restraint. Bystander footage of a man attacking children in a park in south-east France appeared online after the attack on Thursday and was still available, on Twitter and TikTok, on Friday. The distressing footage has been used by TV networks but is heavily edited. The raw versions seen by the Guardian show the attacker dodging a member of the public and running around the playground before appearing to stab a toddler in a pushchair."
dr tech

#ClimateScam: denialism claims flooding Twitter have scientists worried | Twitter | The...

  •  "Twitter has proved a cherished forum for climate scientists to share research, as well as for activists seeking to rally action to halt oil pipelines or decry politicians' failure to cut pollution. But many are now fleeing Twitter due to a surge in climate misinformation, spam and even threats that have upended their relationship with the platform."
dr tech

Leaked Doc: New Rules Allow Slurs on Facebook, Meta Platforms

  •  "LEAKED META RULES: USERS ARE FREE TO POST "MEXICAN IMMIGRANTS ARE TRASH!" OR "TRANS PEOPLE ARE IMMORAL" Under Meta's relaxed hate speech rules, users can now post "I'm a proud racist" or "Black people are more violent than whites.""
dr tech

To Evaluate Meta's Shift, Focus on the Product Changes, Not the Moderation

  •  "The announcement that Meta would be changing their approach to political content and discussions of gender is concerning, though it is unclear exactly what those changes are. Given that many product changes regarding those content areas were used in high-risk settings, a change intended to allay US free speech concerns could lead to violence incitement elsewhere. For example, per this post from Meta, reducing "content that has been shared by a chain of two or more people" was a content-neutral product change done to protect people in Ethiopia, where algorithms have been implicated in the spread of ethnic violence. A similar change - removing optimizations for reshared content - was discussed in this post concerning reductions in political content. Will those changes be undone? Globally? Such changes could also lead to increased amplification of attention-getting discussions of gender. Per this report from Equimundo and Futures Without Violence, 40% of young men trust at least one "manosphere" influencer - who often exploit algorithmic incentives by posting increasingly extreme, attention-getting mixes of ideas about self-improvement, aggression, and traditional gender roles."