
Digit_al Society: group items tagged "moderator"


dr tech

Diary of a TikTok moderator: 'We are the people who sweep up the mess' | TikTok | The G... - 0 views

  •  
    "Next came two months of probation, where we moderated on practice queues consisting of hundreds of thousands of videos that had already been moderated. The policies we applied to these practice videos were compared with what had previously been applied by a more experienced moderator, in order to find areas where we needed to improve. Everyone passed their probation. One trend that is particularly hated by moderators is the "recaps". These consist of a 15- to 60-second barrage of pictures, sometimes hundreds, shown as a super-fast slideshow, often with three to four pictures a second. We have to view every one of these photos for infractions. If a video is 60 seconds long, the system will allocate us around 48 seconds to do this. We also have to check the video description, account bio and hashtags. Around the end of the school year or New Year's Eve, when these sorts of videos are popular, it becomes incredibly draining and also affects our stats."
dr tech

TikTok moderators struggling to assess Israel-Gaza content, Guardian told | TikTok | Th... - 0 views

  •  
    "TikTok moderators have struggled to assess content related to the Israel-Gaza conflict because the platform removed an internal tool for flagging videos in a foreign language, the Guardian has been told. The change has meant moderators in Europe cannot flag that they do not understand foreign-language videos, for example, in Arabic and Hebrew, which are understood to be appearing more frequently in video queues. The Guardian was told that moderators hired to work in English previously had access to a button to state that a video or post was not in their language. Internal documents seen by the Guardian show the button was called "not my language", or "foreign language"."
dr tech

YouTube moderators must sign contract acknowledging job could cause PTSD - report | Tec... - 0 views

  •  
    "Social media sites are increasingly informing employees of the negative effects of moderation jobs following several reports on harrowing working conditions, including long hours viewing violent and sexually exploitative content with little mental health support. Before accepting a job with Accenture, a subcontractor that works with several social media companies and manages some YouTube moderators at a Texas facility, employees had to sign a form titled "Acknowledgement", the Verge reported."
dr tech

Elon Musk declares Twitter 'moderation council' - as some push the platform's limits | ... - 0 views

  •  
    "Among the most urgent questions facing Twitter in its new era as a private company under Elon Musk, a self-declared "free speech absolutist", is how the platform will handle moderation. After finalizing his takeover and ousting senior leadership, Musk declared on Friday that he would be forming a new "content moderation council" that would bring together "diverse views" on the issue."
dr tech

Facebook will pay moderators $52 million settlement for psychological harm - 0 views

  •  
    "Facebook has agreed to pay $52 million to its content moderators as compensation for mental health issues caused by their work. The internet is already generally a cesspool of filth and cruelty, so one can only imagine the incredibly horrific things its moderators are forced to witness every day."
aren01

Protocols, Not Platforms: A Technological Approach to Free Speech | Knight First Amendm... - 1 views

  •  
    "Some have argued for much greater policing of content online, and companies like Facebook, YouTube, and Twitter have talked about hiring thousands to staff up their moderation teams.[8] On the other side of the coin, companies are increasingly investing in more and more sophisticated technology help, such as artificial intelligence, to try to spot contentious content earlier in the process.[9] Others have argued that we should change Section 230 of the CDA, which gives platforms a free hand in determining how they moderate (or how they don't moderate).[10]
    [8] April Glaser, Want a Terrible Job? Facebook and Google May Be Hiring, Slate (Jan. 18, 2018), https://slate.com/technology/2018/01/facebook-and-google-are-building-an-army-of-content-moderators-for-2018.html (explaining that major platforms have hired or have announced plans to hire thousands, in some cases more than ten thousand, new content moderators).
    [9] Tom Simonite, AI Has Started Cleaning Up Facebook, But Can It Finish?, Wired (Dec. 18, 2018), https://www.wired.com/story/ai-has-started-cleaning-facebook-can-it-finish/.
    [10] Gohmert Press Release, supra note 7 ("Social media companies enjoy special legal protections under Section 230 of the Communications Act of 1934, protections not shared by other media. Instead of acting like the neutral platforms they claim to be in order obtain their immunity, these companies have turned Section 230 into a license to potentially defraud and defame with impunity… Since there still appears to be no sincere effort to stop this disconcerting behavior, it is time for social media companies to be liable for any biased and unethical impropriety of their employees as any other media company. If these companies want to continue to act like a biased medium and publish their own agendas to the detriment of others, they need to be held accountable."); Eric Johnson, Silicon Valley's Self-Regulating Days "Probably Should Be" Over, Nancy Pelosi Says, Vox (Apr. 11, 2019), https:/
  •  
    "After a decade or so of the general sentiment being in favor of the internet and social media as a way to enable more speech and improve the marketplace of ideas, in the last few years the view has shifted dramatically: now it seems that almost no one is happy. Some feel that these platforms have become cesspools of trolling, bigotry, and hatred.[1] Meanwhile, others feel that these platforms have become too aggressive in policing language and are systematically silencing or censoring certain viewpoints.[2] And that's not even touching on the question of privacy and what these platforms are doing (or not doing) with all of the data they collect."
    [1] Zachary Laub, Hate Speech on Social Media: Global Comparisons, Council on Foreign Rel. (Jun. 7, 2019), https://www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons.
    [2] Tony Romm, Republicans Accused Facebook, Google and Twitter of Bias. Democrats Called the Hearing 'Dumb.', Wash. Post (Jul. 17, 2018), https://www.washingtonpost.com/technology/2018/07/17/republicans-accused-facebook-google-twitter-bias-democrats-called-hearing-dumb/?utm_term=.895b34499816.
dr tech

Facebook moderators call on firm to do more about posts praising Bucha atrocities | Tec... - 0 views

  •  
    "That ties their hands in how they can treat content related to the killings, they say, and forces them to leave up some content they believe ought to be removed. "It's been a month since the massacre and mass graves in Bucha, but this event hasn't been even designated a 'violating event', let alone a hate crime," said one moderator, who spoke to the Guardian on condition of anonymity. "On that same day there was a shooting in the US, with one fatality and two casualties, and this was declared a violating event within three hours.""
dr tech

Revealed: catastrophic effects of working as a Facebook moderator | Technology | The Gu... - 0 views

  •  
    "A group of current and former contractors who worked for years at the social network's Berlin-based moderation centres has reported witnessing colleagues become "addicted" to graphic content and hoarding ever more extreme examples for a personal collection. They also said others were pushed towards the far right by the amount of hate speech and fake news they read every day."
dr tech

I helped build ByteDance's censorship machine - Protocol - The people, power and politi... - 0 views

  •  
    "My job was to use technology to make the low-level content moderators' work more efficient. For example, we created a tool that allowed them to throw a video clip into our database and search for similar content. When I was at ByteDance, we received multiple requests from the bases to develop an algorithm that could automatically detect when a Douyin user spoke Uyghur, and then cut off the livestream session. The moderators had asked for this because they didn't understand the language. Streamers speaking ethnic languages and dialects that Mandarin-speakers don't understand would receive a warning to switch to Mandarin."
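The "throw a video clip into our database and search for similar content" tool described above is, at its core, a fingerprint index. The sketch below is purely illustrative (ByteDance's internal system is not public, and the fingerprinting step itself is elided): each clip is reduced to a set of frame fingerprints, and a query clip matches any stored clip whose fingerprint set overlaps heavily with it.

```python
# Hypothetical sketch of near-duplicate clip lookup: clips are stored as
# sets of frame fingerprints; similarity is Jaccard overlap of those sets.
index = {}  # clip_id -> set of frame fingerprints

def add_clip(clip_id, fingerprints):
    """Register a clip's frame fingerprints in the index."""
    index[clip_id] = set(fingerprints)

def find_similar(fingerprints, threshold=0.5):
    """Return (clip_id, similarity) pairs whose Jaccard overlap with the
    query fingerprints meets the threshold, best matches first."""
    q = set(fingerprints)
    hits = []
    for clip_id, fp in index.items():
        jaccard = len(q & fp) / len(q | fp)
        if jaccard >= threshold:
            hits.append((clip_id, round(jaccard, 2)))
    return sorted(hits, key=lambda t: -t[1])
```

A production system would replace the linear scan with an approximate nearest-neighbour index, but the moderator-facing behaviour is the same: paste in a clip, get back close matches.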
dr tech

Facebook asked for nudes to help stop revenge porn and it worked. Can our culture chang... - 0 views

  •  
    "Here's how the program, which has been developed in partnership with SWGfL, a UK-based non-profit behind the Revenge Porn Helpline, works. If you've shared an intimate image with someone and are worried that that person might do something nefarious with it, you can send the images to content moderators at Facebook to be "hashed": essentially, the image is assigned a digital fingerprint. If someone then tries to upload that image to Facebook, it can be quickly identified and blocked. It's obviously not a silver bullet for stopping revenge porn, and it requires putting a lot of trust in Facebook and accepting that a random content moderator is going to be looking at your naked photos, but it gives people a little bit of control over their images."
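The "digital fingerprint" idea can be illustrated with a toy perceptual hash. This is only a sketch: real matching systems use far more robust algorithms (Microsoft's PhotoDNA and Meta's PDQ are the well-known examples), but the principle is the same, and it shows why only the fingerprint, not the image, needs to be retained. Here an image is a plain 2D list of grayscale pixel values.

```python
# Toy "average hash": downscale the image to an 8x8 grid of block averages,
# then threshold each cell against the global mean to get a 64-bit string.
# A lightly edited copy (e.g. brightened) produces a near-identical hash,
# so a re-upload can be matched by Hamming distance to stored fingerprints.

def average_hash(pixels, size=8):
    h, w = len(pixels), len(pixels[0])
    cells = []
    for r in range(size):
        for c in range(size):
            # average the block of source pixels that maps to this cell
            rows = range(r * h // size, max(r * h // size + 1, (r + 1) * h // size))
            cols = range(c * w // size, max(c * w // size + 1, (c + 1) * w // size))
            block = [pixels[i][j] for i in rows for j in cols]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return ''.join('1' if v >= mean else '0' for v in cells)

def hamming(a, b):
    """Number of differing bits between two hash strings."""
    return sum(x != y for x, y in zip(a, b))
```

A brightened copy of an image hashes almost identically to the original, while an unrelated image lands far away, which is what lets the platform block re-uploads without storing the photo itself.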
dr tech

Twitter moderators turn to automation amid a reported surge in hate speech | Twitter | ... - 0 views

  •  
    "Elon Musk's Twitter is leaning heavily on automation to moderate content, according to the company's new head of trust and safety, amid a reported surge in hate speech on the social media platform. Ella Irwin has told the Reuters news agency that Musk, who acquired the company in October, was focused on using automation more, arguing that Twitter had in the past erred on the side of using time- and labour-intensive human reviews of harmful content."
dr tech

YouTube will temporarily increase automated content moderation | Engadget - 0 views

  •  
    "YouTube will rely more on machine learning and less on human reviewers during the coronavirus outbreak. Normally, algorithms detect potentially harmful content and send it to human reviewers for assessment. But these are not normal times, and in an effort to reduce the need for employees and contractors to come into an office, YouTube will allow its automated system to remove some content without human review."
dr tech

Content Moderation Case Study: Facebook Removes A Picture Of A Famous Danish Mermaid St... - 0 views

  •  
    "In 2016, Danish politician Mette Gjerskov used Facebook to post a link to her own blog post on the TV2 website, which included an image of the statue. Facebook automatically displayed the image with the link, leading the company to then take down the link. The explanation provided by Facebook was that the image had "too much bare skin or sexual undertones.""
dr tech

Content Moderation is a Dead End. - by Ravi Iyer - 0 views

  •  
    "One of the many policy-based projects I worked on at Meta was Engagement Bait, which is defined as "a tactic that urges people to interact with Facebook posts through likes, shares, comments, and other actions in order to artificially boost engagement and get greater reach." Accordingly, "Posts and Pages that use this tactic will be demoted." To do this, "models are built off of certain guidelines" trained using "hundreds of thousands of posts" that "teams at Facebook have reviewed and categorized." The examples provided are obvious (e.g. a post saying "comment "Yes" if you love rock as much as I do"), but the problem is that there will always be far subtler ways to get people to engage with something artificially. As an example, psychology researchers have a long history of studying negativity bias, which has been shown to operate across a wide array of domains, and to lead to increased online engagement."
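The pipeline the excerpt describes — humans label posts, a model learns from them, matching posts are demoted — can be sketched with a toy text classifier. Everything below (the tiny labeled set, the unigram naive-Bayes scoring, the label names) is a hypothetical illustration; Meta's actual models and features are not public.

```python
# Minimal naive-Bayes-style scorer trained on human-labeled posts.
# Posts scoring above zero look more like "bait" than "ok".
from collections import Counter
import math

def train(labeled_posts):
    """Fit smoothed per-class word log-probabilities from (text, label) pairs."""
    counts = {"bait": Counter(), "ok": Counter()}
    for text, label in labeled_posts:
        counts[label].update(text.lower().split())
    vocab = set(counts["bait"]) | set(counts["ok"])
    model = {}
    for label, c in counts.items():
        total = sum(c.values())
        # Laplace smoothing so unseen-in-class words get a small probability
        model[label] = {w: math.log((c[w] + 1) / (total + len(vocab))) for w in vocab}
    return model

def score(model, text):
    """Log-odds that a post is engagement bait; > 0 means 'looks like bait'."""
    words = text.lower().split()
    def ll(label):
        return sum(model[label].get(w, 0.0) for w in words)
    return ll("bait") - ll("ok")

# Hypothetical labeled examples of the kind moderation teams would produce.
labeled = [
    ("comment yes if you agree", "bait"),
    ("tag a friend and share", "bait"),
    ("like and share if you love this", "bait"),
    ("our quarterly report is out", "ok"),
    ("photos from the team retreat", "ok"),
    ("new blog post about gardening", "ok"),
]
model = train(labeled)
```

The author's point survives the sketch: a classifier like this catches posts resembling its training set, but "far subtler ways" of soliciting engagement (e.g. exploiting negativity bias) never look like the labeled examples, which is why he calls content-level moderation a dead end.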
dr tech

Twitter reportedly makes more cuts to online safety teams | Twitter | The Guardian - 0 views

  •  
    "Twitter has made more cuts to its trust and safety team in charge of international content moderation, as well as a unit overseeing hate speech and harassment, Bloomberg reported on Friday. The move adds to longstanding concerns that new owner Elon Musk is dismantling the company's regulation of hateful content and misinformation."
dr tech

Tiny South Pacific island to lose free/universal Internet lifeline / Boing Boing - 0 views

  •  
    "But last month, Rocket Systems, who administered the .nu deal and the free Internet connection, announced that they would be shutting down the free link and replacing it with a paid one, because the .nu royalties had been cut. Under the new mandate, the 75% of people in Niue who relied on the service will begin paying an eye-popping NZD50/10gb to access the service. This is moderately competitive for satellite data, but by the standards of the developed world, it's amazingly expensive, especially given the country's low median per capita income."
dr tech

From Tahrir to Trump: how the internet became the dictators' home turf / Boing Boing - 1 views

  •  
    "Tufekci describes how insurgent, democratic movements were early arrivals to the internet, and how clumsy authoritarians' attempts to fight them by shutting the net down only energized their movements. But canny authoritarians mastered the platforms, figuring out how to game their automated algorithms to upvote their messages, and how to game their moderation policies to banish their adversaries."
dr tech

Want the platforms to police bad speech and fake news? The copyright wars want a word w... - 0 views

  •  
    "EFF's Legal Director Corynne McSherry offers five lessons to keep in mind:
    1. (Lots of) mistakes will be made: copyright takedowns result in the removal of tons of legitimate content.
    2. Robots won't help: automated filtering tools like Content ID have been a disaster, and policing copyright with algorithms is a lot easier than policing "bad speech."
    3. These systems need to be transparent and have due process. A system that allows for automated instant censorship and slow, manual review of censorship gives a huge advantage to people who want to abuse the system.
    4. Punish abuse. The ability to censor other people's speech is no joke. If you're careless or malicious in your takedown requests, you should pay a consequence: maybe a fine, maybe being barred from using the takedown system.
    5. Voluntary moderation quickly becomes mandatory. Every voluntary effort to stem copyright infringement has been followed by calls to make those efforts mandatory (and expand them)."
dr tech

Facebook's content moderation a mess, employees outraged, contractors have PTSD: Report... - 0 views

  •  
    "Of all the disturbing parts of this @CaseyNewton piece about Facebook content reviewers -- and there are many -- the one about people slowly coming to believe the conspiracy theories sticks with me https://t.co/ulDx3PEaWa"
dr tech

To fix the problem of deepfakes we must treat the cause, not the symptoms | Matt Beard ... - 0 views

  •  
    "However, once technology is released it's like herding cats. Deepfakes are a moving feast and as soon as moderators find a way of detecting them, people will find a workaround."
1 - 20 of 41