
Duty of care + Standards _ CU / Group items tagged: hate

Carsten Ullrich

Tech companies can distinguish between free speech and hate speech if they want to - Da... - 0 views

  • Facebook has come under recent criticism for censoring LGBTQ people’s posts because they contained words that Facebook deem offensive. At the same time, the LGBTQ community are one of the groups frequently targeted with hate speech on the platform. If users seem to “want their cake and eat it too”, the tech companies are similarly conflicted.
  • At the same time, the laws of many countries like Germany, and other international conventions, explicitly limit these freedoms when it comes to hate speech.
  • It would not be impossible for tech companies to form clear guidelines within their own platforms about what was and wasn’t permissible. For the mainly US companies, this would mean that they would have to be increasingly aware of the differences between US law and culture and those of other countries.
Carsten Ullrich

HUDOC - European Court of Human Rights - 0 views

  • Thus, the Court considers that the applicant company was in a position to assess the risks related to its activities and that it must have been able to foresee, to a reasonable degree, the consequences which these could entail. It therefore concludes that the interference in issue was “prescribed by law” within the meaning of the second paragraph of Article 10 of the Convention.
  • The Court has found that persons carrying on a professional activity, who are used to having to proceed with a high degree of caution when pursuing their occupation, can on this account be expected to take special care in assessing the risks that such activity entails
  • Thus, the Court notes that the applicant company cannot be said to have wholly neglected its duty to avoid causing harm to third parties. Nevertheless, and more importantly, the automatic word-based filter used by the applicant company failed to filter out odious hate speech and speech inciting violence posted by readers and thus limited its ability to expeditiously remove the offending comments
  • Against that background, the Chamber considered that the applicant company had been in a position to assess the risks related to its activities and that it must have been able to foresee, to a reasonable degree, the consequences which these could entail.
  • Lastly, the Court observes that the applicant company has argued (see paragraph 78 above) that the Court should have due regard to the notice-and-take-down system that it had introduced. If accompanied by effective procedures allowing for rapid response, this system can in the Court’s view function in many cases as an appropriate tool for balancing the rights and interests of all those involved. However, in cases such as the present one, where third-party user comments are in the form of hate speech and direct threats to the physical integrity of individuals, as understood in the Court’s case-law (see paragraph 136 above), the Court considers, as stated above (see paragraph 153), that the rights and interests of others and of society as a whole may entitle Contracting States to impose liability on Internet news portals, without contravening Article 10 of the Convention, if they fail to take measures to remove clearly unlawful comments without delay, even without notice from the alleged victim or from third parties.
Carsten Ullrich

Article - 0 views

  • new measures are designed to make it easier to identify hate crime on the Internet. In future, platforms such as Facebook, Twitter and YouTube will not only be able to delete posts that incite hatred or contain death threats, but also report them to the authorities, along with the user’s IP address.
  • Possibility of extending the scope of the Netzwerkdurchsetzungsgesetz
  • new rules on hate crime will be added to the German Strafgesetzbuch (Criminal Code), while the definition of existing offences will be amended to take into account the specific characteristics of the Internet.
    • Carsten Ullrich
       
      internet-specific normative considerations?
Carsten Ullrich

Facebook Publishes Enforcement Numbers for the First Time | Facebook Newsroom - 0 views

  • 86% of which was identified by our technology before it was reported to Facebook.
  • For hate speech, our technology still doesn’t work that well and so it needs to be checked by our review teams. We removed 2.5 million pieces of hate speech in Q1 2018 — 38% of which was flagged by our technology.
  • In addition, in many areas — whether it’s spam, porn or fake accounts — we’re up against sophisticated adversaries who continually change tactics to circumvent our controls,
Carsten Ullrich

What Facebook isn't telling us about its fight against online abuse - Laura Bliss | Inf... - 0 views

  • In a six-month period from October 2017 to March 2018, 21m sexually explicit pictures, 3.5m graphically violent posts and 2.5m forms of hate speech were removed from its site. These figures help reveal some striking points.
  • As expected, the data indicates that the problem is getting worse.
    • Carsten Ullrich
       
      problem is getting worse - use as argument - look at facebook report
  • For instance, between January and March it was estimated that for every 10,000 messages online, between 22 and 27 contained graphic violence, up from 16 to 19 in the previous three months.
  • Here, the company has been proactive. Between January and March 2018, Facebook removed 1.9m messages encouraging terrorist propaganda, an increase of 800,000 comments compared to the previous three months. A total of 99.5% of these messages were located with the aid of advancing technology.
  • But Facebook hasn’t released figures showing how prevalent terrorist propaganda is on its site. So we really don’t know how successful the software is in this respect.
    • Carsten Ullrich
       
      We need data; this would be part of my demand for a standardized reporting system.
  • on self-regulation,
  • Between the two three-month periods there was a 183% increase in the amount of posts removed that were labelled graphically violent. A total of 86% of these comments were flagged by a computer system.
  • But we also know that Facebook’s figures also show that up to 27 out of every 10,000 comments that made it past the detection technology contained graphic violence.
  • One estimate suggests that 510,000 comments are posted every minute. If accurate, that would mean 1,982,880 violent comments are posted every 24 hours. (See the arithmetic check after these annotations.)
  • Facebook has also used technology to aid the removal of graphic violence from its site.
  • This brings us to the other significant figure not included in the data released by Facebook: the total number of comments reported by users. As this is a fundamental mechanism in tackling online abuse, the amount of reports made to the company should be made publicly available
  • However, even Facebook still has a long way to go to get to total transparency. Ideally, all social networking sites would release annual reports on how they are tackling abuse online. This would enable regulators and the public to hold the firms more directly to account for failures to remove online abuse from their servers.
    • Carsten Ullrich
       
      my demand - standardized reporting
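The 1,982,880 figure quoted above can be reproduced from the article’s own numbers (510,000 comments posted per minute and up to 27 graphically violent comments per 10,000). A minimal check in Python, assuming the per-minute estimate holds over a full 24-hour day:

    # Quick check of the article's arithmetic (assumes a constant posting rate).
    comments_per_minute = 510_000                     # the article's estimate
    violent_per_10k = 27                              # upper end of the reported range
    comments_per_day = comments_per_minute * 60 * 24  # 734,400,000
    violent_per_day = comments_per_day * violent_per_10k // 10_000
    print(f"{violent_per_day:,} graphically violent comments per day")  # -> 1,982,880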
Carsten Ullrich

Facebook's Hate Speech Policies Censor Marginalized Users | WIRED - 0 views

  • example of incorrect filtering advanced by LGBT groups
Carsten Ullrich

European regulation of video-sharing platforms: what's new, and will it work? | LSE Med... - 0 views

  • This set of rules creates a novel regulatory model
  • Again, leaving regulatory powers to a private entity without any public oversight is clearly not the right solution. But this is also not what, in my opinion, the new AVMSD does
  • But without transparency and information about individual cases, you surely can’t say whether the takedowns are really improving the media environment, or the providers are just trying to get rid of any controversial content – or, indeed, the content somebody just happens to be complaining about.
  • The regulator, on the other hand, has a more detached role, when compared to older types of media regulation, in which they mainly assess whether mechanisms established by the provider comply with the law
  • This approach gives rise to concerns that we are just outsourcing regulation to private companies.
  • Indeed, the delegation of the exercise of regulatory powers to a private entity could be very damaging to freedom of speech and media.
  • So, I think the legal groundwork for protection but also the fair treatment of users is in the directive. Now it depends on the member states to implement it in such a way that this potential will be fulfilled (and the European Commission has a big role in this process).
Carsten Ullrich

CG v Facebook Ireland Ltd & Anor [2016] NICA 54 (21 December 2016) - 0 views

  • The commercial importance of ISS providers is recognised in Recital 2 of the Directive which notes the significant employment opportunities and stimulation of economic growth and investment in innovation from the development of electronic commerce. The purpose of the exemption from monitoring is to make the provision of the service practicable and to facilitate the opportunities for commercial activity. The quantities of information described by the learned trial judge at paragraph [19] of his judgment explain why such a provision is considered necessary. Although the 2002 Regulations do not contain a corresponding provision they need to be interpreted with the monitoring provision in mind.
  • Given the quantities of information generated the legislative steer is that monitoring is not an option
  • The judge concluded that the existence of the XY litigation was itself sufficient to fix Facebook with actual knowledge of unlawful disclosure of information on Predators 2 or awareness of facts and circumstances from which it would have been apparent that the publication of the information constituted misuse of private information. In our view such a liability could only arise if Facebook was subject to a monitoring obligation
Carsten Ullrich

The secret lives of Facebook moderators in America - The Verge - 0 views

  • It’s a place where, in stark contrast to the perks lavished on Facebook employees, team leaders micromanage content moderators’ every bathroom and prayer break; where employees, desperate for a dopamine rush amid the misery, have been found having sex inside stairwells and a room reserved for lactating mothers; where people develop severe anxiety while still in training, and continue to struggle with trauma symptoms long after they leave; and where the counseling that Cognizant offers them ends the moment they quit — or are simply let go.
  • The moderators told me it’s a place where the conspiracy videos and memes that they see each day gradually lead them to embrace fringe views. One auditor walks the floor promoting the idea that the Earth is flat. A former employee told me he has begun to question certain aspects of the Holocaust. Another former employee, who told me he has mapped every escape route out of his house and sleeps with a gun at his side, said: “I no longer believe 9/11 was a terrorist attack.”
  • The use of contract labor also has a practical benefit for Facebook: it is radically cheaper. The median Facebook employee earns $240,000 annually in salary, bonuses, and stock options. A content moderator working for Cognizant in Arizona, on the other hand, will earn just $28,800 per year. The arrangement helps Facebook maintain a high profit margin. In its most recent quarter, the company earned $6.9 billion in profits, on $16.9 billion in revenue. And while Zuckerberg had warned investors that Facebook’s investment in security would reduce the company’s profitability, profits were up 61 percent over the previous year.
  • Miguel takes a dim view of the accuracy figure. “Accuracy is only judged by agreement. If me and the auditor both allow the obvious sale of heroin, Cognizant was ‘correct,’ because we both agreed,” he says. “This number is fake.”
  • Even with an ever-changing rulebook, moderators are granted only the slimmest margins of error. The job resembles a high-stakes video game in which you start out with 100 points — a perfect accuracy score — and then scratch and claw to keep as many of those points as you can. Because once you fall below 95, your job is at risk. If a quality assurance manager marks Miguel’s decision wrong, he can appeal the decision. Getting the QA to agree with you is known as “getting the point back.” In the short term, an “error” is whatever a QA says it is, and so moderators have good reason to appeal every time they are marked wrong. (Recently, Cognizant made it even harder to get a point back, by requiring moderators to first get a SME [subject matter expert] to approve their appeal before it would be forwarded to the QA.)
  • Before Miguel can take a break, he clicks a browser extension to let Cognizant know he is leaving his desk. (“That’s a standard thing in this type of industry,” Facebook’s Davidson tells me. “To be able to track, so you know where your workforce is.”)
  • "Pro Unlimited"
Carsten Ullrich

My Library - 0 views

  • that the elements which are relevant for assessing whether the proprietor of an EU trade mark is entitled to prohibit the use of a sign in part of the European Union not covered by that action, may be taken into account by that court
  • Although, for the purpose of assessing whether Ornua is entitled to prohibit the use of the sign KERRYMAID in Spain, the referring court should consider taking into account elements present in Ireland and the United Kingdom, it should first of all ensure that there is no significant difference between the market conditions or the sociocultural circumstances
  • In that regard, account should be taken, in particular, of the overall presentation of the product marketed by the third party, the circumstances in which a distinction is made between that mark and the sign used by that third party, and the effort made by that third party to ensure that consumers distinguish its products from those of which it is not the trade mark owner
  • in part of the European Union, an EU trade mark with a reputation and a sign peacefully coexist
  • It cannot be excluded that the conduct which can be expected of the third party so that its use of the sign follows honest practices in industrial or commercial matters must be analysed differently in a part of the European Union where consumers have a particular affinity with the geographical word contained in the mark and the sign at issue than in a part of the European Union where that affinity is weaker.
  • allows the conclusion that in another part of the European Union, where that peaceful coexistence is absent, there is due cause legitimising the use of that sign.
Carsten Ullrich

Article - 0 views

  • Entwurf für ein Gesetz zur Bekämpfung des Rechtsextremismus und der Hasskriminalität (draft act to combat right-wing extremism and hate crime)
  • Providers of commercial telemedia services and associated contributors and intermediaries will, in future, be subject to the same information obligations as telecommunications services. A new Article 15a TMG obliges them to disclose information about their users’ inventory data if requested by the Federal Office for the Protection of the Constitution, law enforcement or police authorities, the Militärische Abschirmdienst (Military Counterintelligence Service), the Bundesnachrichtendienst (Federal Intelligence Service) or customs authorities
  • To this end, they are required, at their own expense, to make arrangements for the disclosure of such information within their field of responsibility. Services with over 100 000 customers must also provide a secure electronic interface for this purpose.
  • Social network providers, meanwhile, are subject to proactive reporting obligations
  • The provider must check whether this is the case and report the content immediately, as well as provide the IP address and port number of the person responsible. The user “on whose behalf the content was stored” should be informed that the information has been passed on to the BKA, unless the BKA orders otherwise.
Carsten Ullrich

Article - 0 views

  • On 6 February 2020, the audiovisual regulator of the French-speaking community of Belgium (Conseil supérieur de l’audiovisuel – CSA) published a guidance note on the fight against certain forms of illegal Internet content, in particular hate speech
  • In the note, the CSA begins by summarising the current situation, highlighting the important role played by content-sharing platforms and their limited responsibility. It emphasises that some content can be harmful to young people in particular, whether they are the authors or victims of the content. It recognises that regulation, in its current form, is inappropriate and creates an imbalance between the regulation of online content-sharing platform operators, including social networks, and traditional players in the audiovisual sector
  • ould take its own legislative measures without waiting for work to start on an EU directive on the subject. 
  • if it advocates crimes against humanity; incites or advocates terrorist acts; or incites hatred, violence, discrimination or insults against a person or a group of people on grounds of origin, alleged race, religion, ethnic background, nationality, gender, sexual orientation, gender identity or disability, whether real or alleged.
  • obligations be imposed on the largest content-sharing platform operators, that is, any natural or legal person offering, on a professional basis, whether for remuneration or not, an online content-sharing platform, wherever it is based, used by at least 20% of the population of the French-speaking region of Belgium or the bilingual Brussels-Capital region.
  • obliged to remove or block content notified to them that is ‘clearly illegal’ within 24 hours.
  • need to put in place reporting procedures as well as processes for contesting their decisions
  • appoint an official contact person
  • half-yearly report on compliance with their obligation
Carsten Ullrich

JIPLP: Editorial - Control of content on social media - 0 views

  • Can technology resolve these issues? As regards technical solutions, there are already examples of these, such as YouTube’s Content ID, an automated piece of software that scans material uploaded to the site for IP infringement by comparing it against a database of registered IPs. The next challenge may be how these types of systems can be harnessed by online platform providers to address extreme and hate crime content. Again the dilemma for policy- and law-makers may be the extent to which they are prepared to cede control over content to technology companies, which will become judge, jury and executioner. (See the illustrative matching sketch after these annotations.)
  • who should bear the cost of monitoring and removal.
  • to block access to websites where infringing content has been hosted. In Cartier International AG & Ors v British Sky Broadcasting Ltd & Ors [2016] EWCA Civ 658 the Court of Appeal concluded that it is entirely reasonable to expect ISPs to pay the costs associated with implementing mechanisms to block access to sites where infringing content has been made available
  • Thus the cost of implementing the order could therefore be regarded as just another overhead associated with ISPs carrying on their business
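The editorial above cites YouTube’s Content ID as a technical precedent: uploads are scanned and compared against a database of registered works. Content ID’s actual matching technology is proprietary; the sketch below only illustrates the general idea of looking up an upload’s fingerprint in a registry of protected content, with a plain cryptographic hash standing in (as an assumption) for the robust perceptual fingerprinting a real system would need:

    import hashlib

    def fingerprint(data: bytes) -> str:
        # Stand-in fingerprint: SHA-256 of the raw bytes. A real matching system
        # would use perceptual/audio fingerprints that survive re-encoding.
        return hashlib.sha256(data).hexdigest()

    # Hypothetical registry mapping fingerprints to registered works.
    registered_work = b"bytes of a registered work (placeholder)"
    registry = {fingerprint(registered_work): "Sample registered work"}

    def check_upload(data: bytes):
        # Return the matched work's name, or None if the upload is not registered.
        return registry.get(fingerprint(data))

    upload = b"bytes of a user upload (placeholder)"
    print(check_upload(upload) or "no match")

Whether such a match should trigger automatic removal or merely flag content for human review is exactly the control question the editorial raises.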
Carsten Ullrich

The Next Wave of Platform Governance - Centre for International Governance Innovation - 0 views

  • The shift from product- and service-based to platform-based business creates a new set of platform governance implications — especially when these businesses rely upon shared infrastructure from a small, powerful group of technology providers (Figure 1).
  • The industries in which AI is deployed, and the primary use cases it serves, will naturally determine the types and degrees of risk, from health and physical safety to discrimination and human-rights violations. Just as disinformation and hate speech are known risks of social media platforms, fatal accidents are a known risk of automobiles and heavy machinery, whether they are operated by people or by machines. Bias and discrimination are potential risks of any automated system, but they are amplified and pronounced in technologies that learn, whether autonomously or by training, from existing data.
  • Business Model-Specific Implications
  • The implications of cloud platforms such as Salesforce, Microsoft, Apple, Amazon and others differ again. A business built on a technology platform with a track record of well-developed data and model governance, audit capability, responsible product development practices and a culture and track record of transparency will likely reduce some risks related to biased data and model transparency, while encouraging (and even enforcing) adoption of those same practices and norms throughout its ecosystem.
  • policies that govern their internal practices for responsible technology development; guidance, tools and educational resources for their customers’ responsible use of their technologies; and policies (enforced in terms of service) that govern the acceptable use of not only their platforms but also specific technologies, such as face recognition or gait detection.
  • At the same time, overreliance on a small, well-funded, global group of technology vendors to set the agenda for responsible and ethical use of AI may create a novel set of risks.
  • Audit is another area that, while promising, is also fraught with potential conflict. Companies such as O’Neil Risk Consulting and Algorithmic Auditing, founded by the author of Weapons of Math Destruction, Cathy O’Neil, provide algorithmic audit and other services intended to help companies better understand and remediate data and model issues related to discriminatory outcomes. Unlike, for example, audits of financial statements, algorithmic audit services are as yet entirely voluntary, lack oversight by any type of governing board, and do not carry disclosure requirements or penalties. As a result, no matter how thorough the analysis or comprehensive the results, these types of services are vulnerable to manipulation or exploitation by their customers for “ethics-washing” purposes.
  • , we must broaden our understanding of platforms beyond social media sites to other types of business platforms, examine those risks in context, and approach governance in a way that accounts not only for the technologies themselves, but also for the disparate impacts among industries and business models.
  • This is a time-sensitive issue
  • Large technology companies — for a range of reasons — are trying to fill the policy void, creating the potential for a kind of demilitarized zone for AI, one in which neither established laws nor corporate policy hold sway.
Carsten Ullrich

The Trump Deplatforming Distraction | Centre for International Governance Innovation - 0 views

  • Facebook alone handles more than 100 billion transactions a day.
  • And it is this act of algorithmic determination that has created the communities that have too often seeded division and hate. And because these companies have become so large, we can no longer rely on the free market to correct for the harms they might be causing. The result of the business model, scale and market concentration is a systemic failure.
  • If you don’t like platforms wielding such tremendous power, then the solution is democratic governance, not more self-governance. It is only by doing the tough work of governance, not  just banning Trump’s tweets, that we will begin to address the harms so clearly on display at the Capitol.