
Digital Society / Group items tagged: moderation, social media


aren01

Protocols, Not Platforms: A Technological Approach to Free Speech | Knight First Amendm... - 1 views

  •  
    "Some have argued for much greater policing of content online, and companies like Facebook, YouTube, and Twitter have talked about hiring thousands to staff up their moderation teams. [8: April Glaser, Want a Terrible Job? Facebook and Google May Be Hiring, Slate (Jan. 18, 2018), https://slate.com/technology/2018/01/facebook-and-google-are-building-an-army-of-content-moderators-for-2018.html (explaining that major platforms have hired or have announced plans to hire thousands, in some cases more than ten thousand, new content moderators).] On the other side of the coin, companies are increasingly investing in more and more sophisticated technology help, such as artificial intelligence, to try to spot contentious content earlier in the process. [9: Tom Simonite, AI Has Started Cleaning Up Facebook, But Can It Finish?, Wired (Dec. 18, 2018), https://www.wired.com/story/ai-has-started-cleaning-facebook-can-it-finish/.] Others have argued that we should change Section 230 of the CDA, which gives platforms a free hand in determining how they moderate (or how they don't moderate). [10: Gohmert Press Release, supra note 7 ("Social media companies enjoy special legal protections under Section 230 of the Communications Act of 1934, protections not shared by other media. Instead of acting like the neutral platforms they claim to be in order to obtain their immunity, these companies have turned Section 230 into a license to potentially defraud and defame with impunity… Since there still appears to be no sincere effort to stop this disconcerting behavior, it is time for social media companies to be liable for any biased and unethical impropriety of their employees as any other media company. If these companies want to continue to act like a biased medium and publish their own agendas to the detriment of others, they need to be held accountable."); Eric Johnson, Silicon Valley's Self-Regulating Days "Probably Should Be" Over, Nancy Pelosi Says, Vox (Apr. 11, 2019), https:/]
  •  
    "After a decade or so of the general sentiment being in favor of the internet and social media as a way to enable more speech and improve the marketplace of ideas, in the last few years the view has shifted dramatically; now it seems that almost no one is happy. Some feel that these platforms have become cesspools of trolling, bigotry, and hatred. [1: Zachary Laub, Hate Speech on Social Media: Global Comparisons, Council on Foreign Rel. (Jun. 7, 2019), https://www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons.] Meanwhile, others feel that these platforms have become too aggressive in policing language and are systematically silencing or censoring certain viewpoints. [2: Tony Romm, Republicans Accused Facebook, Google and Twitter of Bias. Democrats Called the Hearing 'Dumb.', Wash. Post (Jul. 17, 2018), https://www.washingtonpost.com/technology/2018/07/17/republicans-accused-facebook-google-twitter-bias-democrats-called-hearing-dumb/?utm_term=.895b34499816.] And that's not even touching on the question of privacy and what these platforms are doing (or not doing) with all of the data they collect."
dr tech

Diary of a TikTok moderator: 'We are the people who sweep up the mess' | TikTok | The G... - 0 views

  •  
    "Next was two months of probation, where we moderated on practice queues consisting of hundreds of thousands of videos that had already been moderated. The policies we applied to these practice videos were compared with what had previously been applied by a more experienced moderator, in order to find the areas we needed to improve in. Everyone passed their probation. One trend that is particularly hated by moderators is the "recaps". These consist of a 15- to 60-second barrage of pictures, sometimes hundreds, shown as a super-fast slideshow, often with three to four pictures a second. We have to view every one of these photos for infractions. If a video is 60 seconds long, the system will allocate us around 48 seconds to do this. We also have to check the video description, account bio and hashtags. Around the end of the school year or New Year's Eve, when these sorts of videos are popular, it becomes incredibly draining and also affects our stats."
dr tech

YouTube moderators must sign contract acknowledging job could cause PTSD - report | Tec... - 0 views

  •  
    "Social media sites are increasingly informing employees of the negative effects of moderation jobs following several reports on harrowing working conditions, including long hours viewing violent and sexually exploitative content with little mental health support. Before accepting a job with Accenture, a subcontractor that works with several social media companies and manages some YouTube moderators at a Texas facility, employees had to sign a form titled "Acknowledgement", the Verge reported."
dr tech

TikTok moderators struggling to assess Israel-Gaza content, Guardian told | TikTok | Th... - 0 views

  •  
    "TikTok moderators have struggled to assess content related to the Israel-Gaza conflict because the platform removed an internal tool for flagging videos in a foreign language, the Guardian has been told. The change has meant moderators in Europe cannot flag that they do not understand foreign-language videos, for example, in Arabic and Hebrew, which are understood to be appearing more frequently in video queues. The Guardian was told that moderators hired to work in English previously had access to a button to state that a video or post was not in their language. Internal documents seen by the Guardian show the button was called "not my language", or "foreign language"."
dr tech

Elon Musk declares Twitter 'moderation council' - as some push the platform's limits | ... - 0 views

  •  
    "Among the most urgent questions facing Twitter in its new era as a private company under Elon Musk, a self-declared "free speech absolutist", is how the platform will handle moderation. After finalizing his takeover and ousting senior leadership, Musk declared on Friday that he would be forming a new "content moderation council" that would bring together "diverse views" on the issue."
dr tech

Facebook will pay moderators $52 million settlement for psychological harm - 0 views

  •  
    "Facebook has agreed to pay $52 million to its content moderators as compensation for mental health issues caused by their work. The internet is already generally a cesspool of filth and cruelty, so one can only imagine the incredibly horrific things its moderators are forced to witness every day."
dr tech

TechScape: 'Lives are ruined in an afternoon' - social media and the Huw Edwards story ... - 0 views

  •  
    "In some respects, singling out Twitter is unfair: it was a collective failure of social media. People were able to name Edwards as the BBC presenter with impunity in social media comment sections. TikTok suggested Edwards and other BBC presenters' names as "hot" search terms, appending the fire emoji to their names. Google showed news stories and videos about the then-unnamed BBC presenter to people who searched for Huw Edwards' name, connecting him to the scandal."
dr tech

How Silicon Valley's Russia crackdown proves its power - and its threat | April Glaser,... - 0 views

  •  
    "Tech companies around the world appeared to listen. The very public and very swift removal of Russian channels on social media represented a sea change from years of prior content moderation decisions, when government requests for removals were often done with less fanfare and were frequently met with ire from human rights groups."
dr tech

Facebook moderators call on firm to do more about posts praising Bucha atrocities | Tec... - 0 views

  •  
    "That ties their hands in how they can treat content related to the killings, they say, and forces them to leave up some content they believe ought to be removed. "It's been a month since the massacre and mass graves in Bucha, but this event hasn't been even designated a 'violating event', let alone a hate crime," said one moderator, who spoke to the Guardian on condition of anonymity. "On that same day there was a shooting in the US, with one fatality and two casualties, and this was declared a violating event within three hours.""
dr tech

President Biden's executive action takes on kids' mental health and social media platfo... - 0 views

  •  
    "They join attempts by lawmakers to regulate the internet for kids. States have proposed and even passed laws that restrict what children can access online, up to banning certain services entirely. On the federal level, several recently introduced bipartisan bills run the gamut from giving children more privacy protections to forbidding them from using social media at all. Some efforts also try to control the content that children can be exposed to. Critics of such legislation point to privacy issues with age verification mechanisms and fears that forced content moderation will inevitably lead to censorship, preventing kids from seeing material that's helpful along with what's considered harmful."
dr tech

Twitter moderators turn to automation amid a reported surge in hate speech | Twitter | ... - 0 views

  •  
    "Elon Musk's Twitter is leaning heavily on automation to moderate content, according to the company's new head of trust and safety, amid a reported surge in hate speech on the social media platform. Ella Irwin told the Reuters news agency that Musk, who acquired the company in October, was focused on using automation more, arguing that Twitter had in the past erred on the side of using time- and labour-intensive human reviews of harmful content."
dr tech

Revealed: catastrophic effects of working as a Facebook moderator | Technology | The Gu... - 0 views

  •  
    "A group of current and former contractors who worked for years at the social network's Berlin-based moderation centres has reported witnessing colleagues become "addicted" to graphic content and hoarding ever more extreme examples for a personal collection. They also said others were pushed towards the far right by the amount of hate speech and fake news they read every day."
dr tech

I helped build ByteDance's censorship machine - Protocol - The people, power and politi... - 0 views

  •  
    "My job was to use technology to make the low-level content moderators' work more efficient. For example, we created a tool that allowed them to throw a video clip into our database and search for similar content. When I was at ByteDance, we received multiple requests from the bases to develop an algorithm that could automatically detect when a Douyin user spoke Uyghur, and then cut off the livestream session. The moderators had asked for this because they didn't understand the language. Streamers speaking ethnic languages and dialects that Mandarin-speakers don't understand would receive a warning to switch to Mandarin."
dr tech

Distressing Annecy footage put social media's self-regulation to the test | France | Th... - 0 views

  •  
    "Most social media users know to self-regulate when violent events such as terror attacks occur: don't share distressing footage; don't spread unfounded rumours. But in the aftermath of the Annecy attack some inevitably acted without restraint. Bystander footage of a man attacking children in a park in south-east France appeared online after the attack on Thursday and was still available, on Twitter and TikTok, on Friday. The distressing footage has been used by TV networks but is heavily edited. The raw versions seen by the Guardian show the attacker dodging a member of the public and running around the playground before appearing to stab a toddler in a pushchair."
dr tech

TechScape: What should social media giants do to protect children? | Technology | The G... - 0 views

  •  
    "In a way, this is a powerful rhetorical move. Insisting that the conversation focus on the details is an insistence that people who dismiss client-side scanning on principle are wrong to do so: if you believe that privacy of private communications is and should be an inviolable right, then Levy and Robinson are effectively arguing that you be cut out of the conversation in favour of more moderate people who are willing to discuss trade-offs."
dr tech

Twitter is developing a new misinfo moderation tool called Birdwatch - 0 views

  •  
    "As Americans continue to grapple with media distrust, conspiracy theories, bots, trolls, and general panic amid multiple unprecedented crises, Twitter is once again trying a new method of identifying misinformation. A new feature in development at the social media platform, called "Birdwatch," was first reported by reverse engineer Jane Manchun Wong (h/t Tech Crunch) in early August. "
dr tech

Content Moderation is a Dead End. - by Ravi Iyer - 0 views

  •  
    "One of the many policy-based projects I worked on at Meta was Engagement Bait, which is defined as "a tactic that urges people to interact with Facebook posts through likes, shares, comments, and other actions in order to artificially boost engagement and get greater reach." Accordingly, "Posts and Pages that use this tactic will be demoted." To do this, "models are built off of certain guidelines" trained using "hundreds of thousands of posts" that "teams at Facebook have reviewed and categorized." The examples provided are obvious (e.g., a post saying "comment "Yes" if you love rock as much as I do"), but the problem is that there will always be far subtler ways to get people to engage with something artificially. As an example, psychology researchers have a long history of studying negativity bias, which has been shown to operate across a wide array of domains and to lead to increased online engagement."
dr tech

Child safety groups and prosecutors criticize encryption of Facebook and Messenger | Fa... - 0 views

  •  
    "This week, the tech giant announced it had begun rolling out automatic encryption for direct messages on its Facebook and Messenger platforms to more than 1 billion users. Under the changes, Meta will no longer have access to the contents of the messages that users send or receive unless one participant reports a message to the company. As a result, unless reported, messages will not be subject to the content moderation that social media companies undertake in order to detect and report abusive and criminal activity. Encryption hides the contents of a message from anyone but the sender and the intended recipient by converting text and images into unreadable cyphers that are unscrambled on receipt."
dr tech

Russia's trolling on Ukraine gets 'incredible traction' on TikTok | Russia | The Guardian - 0 views

  •  
    "Russia's online trolling operation is becoming increasingly decentralised and is gaining "incredible traction" on TikTok with misinformation aimed at sowing doubt over events in Ukraine, a US social media researcher has warned."
dr tech

'Nobody can block it': how the Telegram app fuels global protest | Social media | The G... - 0 views

  •  
    "Telegram, a messaging app created by the reclusive Russian exile Pavel Durov, is suited to running protests for a number of reasons. It allows huge encrypted chat groups, making it easier to organise people, like a slicker version of WhatsApp. And its "channels" allow moderators to disseminate information quickly to large numbers of followers in a way that other messaging services do not; they combine the reach and immediacy of a Twitter feed, and the focus of an email newsletter. The combination of usability and privacy has made the app popular with protestors (it has been adopted by Extinction Rebellion) as well as people standing against authoritarian regimes (in Hong Kong and Iran, as well as Belarus); it is also used by terrorists and criminals. In the past five years, Telegram has grown at a remarkable speed, hitting 60 million users in 2015 and 400 million in April this year. "