Duty of care + Standards (CU): Group items tagged facebook


Carsten Ullrich

Facebook is stepping in where governments won't on free expression - Wendy H. Wong and ...

  • The explicit reference to human rights in its charter acknowledges that companies have a role in protecting and enforcing human rights.
  • This is consistent with efforts by the United Nations and other advocacy efforts to create standards on how businesses should be held accountable for human rights abuses. In light of Facebook’s entanglement in misinformation, scandals and election falsehoods, as well as genocide and incitement of violence, it seems particularly pertinent for the company.
  • To date, we have assigned such decision-making powers to states, many of which are accountable to their citizens. Facebook, on the other hand, is unaccountable to citizens in nations around the world, and a single individual (Mark Zuckerberg) holds majority decision-making power at the company.
  • ...6 more annotations...
  • In other cases, human moderators have had their decisions overturned. The Oversight Board also upheld Facebook’s decision to remove a dehumanizing ethnic slur against Azerbaijanis in the context of an active conflict over the Nagorno-Karabakh disputed region.
  • But Facebook and other social media companies do not have to engage in a transparent, publicly accountable process to make their decisions. Facebook claims that in its decision-making it upholds the human right of freedom of expression. However, freedom of expression does not mean the same thing to everyone.
  • Private organizations are currently the only consistent governors of data and social media.
  • However, the Oversight Board deals with only a small fraction of possible cases.
  • Facebook’s dominance in social media, however, is notable not because it’s a private company. Mass communication has been privatized, at least in the U.S., for a long time. Rather, Facebook’s insertion into the regulation of freedom of expression and its claim to support human rights is notable because these have traditionally been the territory of governments. While far from perfect, democracies provide citizens and other groups influence over the enforcement of human rights.
  • Facebook and other social media companies, however, have no such accountability to the public. Ensuring human rights needs to go beyond volunteerism by private companies. Perhaps with the Australia versus Facebook showdown, governments finally have an impetus to pay attention to the effects of technology companies on fundamental human rights.
Carsten Ullrich

What Facebook isn't telling us about its fight against online abuse - Laura Bliss | Inf...

  • In a six-month period from October 2017 to March 2018, 21m sexually explicit pictures, 3.5m graphically violent posts and 2.5m forms of hate speech were removed from its site. These figures help reveal some striking points.
  • As expected, the data indicates that the problem is getting worse.
    • Carsten Ullrich
       
      problem is getting worse - use as argument - look at facebook report
  • For instance, between January and March it was estimated that for every 10,000 messages online, between 22 and 27 contained graphic violence, up from 16 to 19 in the previous three months.
  • ...9 more annotations...
  • Here, the company has been proactive. Between January and March 2018, Facebook removed 1.9m messages encouraging terrorist propaganda, an increase of 800,000 comments compared to the previous three months. A total of 99.5% of these messages were located with the aid of advancing technology.
  • But Facebook hasn’t released figures showing how prevalent terrorist propaganda is on its site. So we really don’t know how successful the software is in this respect.
    • Carsten Ullrich
       
      we need data this would be part of my demand for standardized reporting system
  • on self-regulation,
  • Between the two three-month periods there was a 183% increase in the amount of posts removed that were labelled graphically violent. A total of 86% of these comments were flagged by a computer system.
  • But Facebook’s figures also show that up to 27 out of every 10,000 comments that made it past the detection technology contained graphic violence.
  • One estimate suggests that 510,000 comments are posted every minute. If accurate, that would mean 1,982,880 violent comments are posted every 24 hours (the arithmetic is reconstructed in the sketch after this list).
  • Facebook has also used technology to aid the removal of graphic violence from its site.
  • This brings us to the other significant figure not included in the data released by Facebook: the total number of comments reported by users. As this is a fundamental mechanism in tackling online abuse, the amount of reports made to the company should be made publicly available
  • However, even Facebook still has a long way to go to get to total transparency. Ideally, all social networking sites would release annual reports on how they are tackling abuse online. This would enable regulators and the public to hold the firms more directly to account for failures to remove online abuse from their servers.
    • Carsten Ullrich
       
      my demand - standardized reporting
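
The arithmetic behind the 1,982,880 figure above can be checked directly. A minimal sketch in Python, assuming the article's two inputs (the external estimate of 510,000 comments per minute and the upper-bound prevalence of 27 violent comments per 10,000):

    # Reconstructing the article's estimate (inputs are the article's
    # figures, not verified platform data).
    comments_per_minute = 510_000   # external estimate cited by the article
    violent_per_10k = 27            # upper bound of Facebook's reported prevalence

    comments_per_day = comments_per_minute * 60 * 24           # 734,400,000
    violent_per_day = comments_per_day * violent_per_10k // 10_000

    print(f"{comments_per_day:,} comments per day")
    print(f"{violent_per_day:,} violent comments per day")     # 1,982,880

The article's figure thus follows from applying Facebook's own upper-bound prevalence rate to an external volume estimate; neither input is independently verified.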
Carsten Ullrich

The secret lives of Facebook moderators in America - The Verge

  • It’s a place where, in stark contrast to the perks lavished on Facebook employees, team leaders micromanage content moderators’ every bathroom and prayer break; where employees, desperate for a dopamine rush amid the misery, have been found having sex inside stairwells and a room reserved for lactating mothers; where people develop severe anxiety while still in training, and continue to struggle with trauma symptoms long after they leave; and where the counseling that Cognizant offers them ends the moment they quit — or are simply let go.
  • The moderators told me it’s a place where the conspiracy videos and memes that they see each day gradually lead them to embrace fringe views. One auditor walks the floor promoting the idea that the Earth is flat. A former employee told me he has begun to question certain aspects of the Holocaust. Another former employee, who told me he has mapped every escape route out of his house and sleeps with a gun at his side, said: “I no longer believe 9/11 was a terrorist attack.”
  • The use of contract labor also has a practical benefit for Facebook: it is radically cheaper. The median Facebook employee earns $240,000 annually in salary, bonuses, and stock options. A content moderator working for Cognizant in Arizona, on the other hand, will earn just $28,800 per year. The arrangement helps Facebook maintain a high profit margin. In its most recent quarter, the company earned $6.9 billion in profits, on $16.9 billion in revenue. And while Zuckerberg had warned investors that Facebook’s investment in security would reduce the company’s profitability, profits were up 61 percent over the previous year.
  • ...3 more annotations...
  • Miguel takes a dim view of the accuracy figure. “Accuracy is only judged by agreement. If me and the auditor both allow the obvious sale of heroin, Cognizant was ‘correct,’ because we both agreed,” he says. “This number is fake.” (A toy illustration of this flaw follows after this list.)
  • Even with an ever-changing rulebook, moderators are granted only the slimmest margins of error. The job resembles a high-stakes video game in which you start out with 100 points — a perfect accuracy score — and then scratch and claw to keep as many of those points as you can. Because once you fall below 95, your job is at risk. If a quality assurance manager marks Miguel’s decision wrong, he can appeal the decision. Getting the QA to agree with you is known as “getting the point back.” In the short term, an “error” is whatever a QA says it is, and so moderators have good reason to appeal every time they are marked wrong. (Recently, Cognizant made it even harder to get a point back, by requiring moderators to first get a SME to approve their appeal before it would be forwarded to the QA.)
  • Before Miguel can take a break, he clicks a browser extension to let Cognizant know he is leaving his desk. (“That’s a standard thing in this type of industry,” Facebook’s Davidson tells me. “To be able to track, so you know where your workforce is.”)
  •  
    "Pro Unlimited"
Carsten Ullrich

CG v Facebook Ireland Ltd & Anor [2016] NICA 54 (21 December 2016)

  • The commercial importance of ISS providers is recognised in Recital 2 of the Directive which notes the significant employment opportunities and stimulation of economic growth and investment in innovation from the development of electronic commerce. The purpose of the exemption from monitoring is to make the provision of the service practicable and to facilitate the opportunities for commercial activity. The quantities of information described by the learned trial judge at paragraph [19] of his judgment explain why such a provision is considered necessary. Although the 2002 Regulations do not contain a corresponding provision they need to be interpreted with the monitoring provision in mind.
  • Given the quantities of information generated, the legislative steer is that monitoring is not an option
  • The judge concluded that the existence of the XY litigation was itself sufficient to fix Facebook with actual knowledge of unlawful disclosure of information on Predators 2 or awareness of facts and circumstances from which it would have been apparent that the publication of the information constituted misuse of private information. In our view such a liability could only arise if Facebook was subject to a monitoring obligation.
Carsten Ullrich

How to regulate Facebook and the online giants in one word: transparency - George Brock...

  • New responsibilities arise from these changes.
  • Greater transparency will disclose whether further regulation is required and make it better targeted, providing specific remedies for clearly identified ills.
  • If Facebook and others must account in detail to an electoral commission or data protection authority for micro-targeting or “dark” ads, are forbidden from deleting certain relevant data, and must submit to algorithm audits, they will be forced to foresee and to try to solve some of the problems which they have been addressing so slowly.
  • ...1 more annotation...
  • Transparency would have its own radical effect inside the tech giants.
Carsten Ullrich

Facebook Publishes Enforcement Numbers for the First Time | Facebook Newsroom

  • 86% of which was identified by our technology before it was reported to Facebook.
  • For hate speech, our technology still doesn’t work that well and so it needs to be checked by our review teams. We removed 2.5 million pieces of hate speech in Q1 2018 — 38% of which was flagged by our technology.
  • In addition, in many areas — whether it’s spam, porn or fake accounts — we’re up against sophisticated adversaries who continually change tactics to circumvent our controls.
Carsten Ullrich

Facebook and the EU, or the failure of self-regulation | The Guest Blog

  • How did we let this happen? Why do we appear so weak?
  • For years Brussels has been the champion of self-regulation. The dogma is – at least publicly – based on the assumption that companies know best how to tackle some of the challenges.
  • Our failure to understand the underlying challenges and a failure of regulation.
  • ...2 more annotations...
  • If it’s the latter, then we have to move away from self-regulation. We can’t continue defending self-regulation and fake outrage when what we already knew becomes public.
  • Some will shift all the blame to Facebook, but we are at least as responsible as they are. EU decision-makers let this happen with self-regulation and soft policy.
Carsten Ullrich

Tech companies can distinguish between free speech and hate speech if they want to - Da...

  • Facebook has come under recent criticism for censoring LGBTQ people’s posts because they contained words that Facebook deems offensive. At the same time, the LGBTQ community is one of the groups frequently targeted with hate speech on the platform. If users seem to “want their cake and eat it too”, the tech companies are similarly conflicted.
  • At the same time, the laws of many countries like Germany, and other international conventions, explicitly limit these freedoms when it comes to hate speech.
  • It would not be impossible for tech companies to form clear guidelines within their own platforms about what was and wasn’t permissible. For the mainly US companies, this would mean that they would have to be increasingly aware of the differences between US law and culture and those of other countries.
Carsten Ullrich

XY v Facebook Ireland Ltd [2012] NIQB 96 (30 November 2012)

  • [19] The Order of the Court will incorporate provision for liberty to apply. By this mechanism the Plaintiff, if necessary and if so advised, will be able to seek further relief from the Court if there is any recurrence of the offending publication. Of course, in such eventuality, it will be open to Facebook, acting responsibly and in accordance with the principles and themes clearly expressed in this judgment, to proactively take the necessary removal and closure steps.
  • [20] I refuse the Plaintiff's application for the wider form of interim injunction sought by him. This was to the effect that Facebook be required to monitor the offending webpage in order to prevent republication of the offensive material. In this respect, I prefer the argument of Mr Hopkins that such an order would lack the requisite precision, could impose a disproportionate burden and, further, would potentially require excessive supervision by the Court. See Cooperative Insurance v Argyll [1997] 3 All ER 297, pages 303 – 304, per Lord Hoffmann. See also Halsbury's Laws of England, Volume 24 (Fourth Edition Reissue), paragraph 849. The propriety of granting this discrete remedy will, of course, be revisited at the substantive trial, against the backcloth of a fuller evidential matrix, which should include details of how this social networking site actually operates from day to day.
Carsten Ullrich

Is the Era of "Permissionless Innovation" and Avoidance of Regulation on the Internet F...

  • avoidance of regulation that the Silicon Valley platforms
  • It hasn’t been a great couple of weeks for the “Don’t Be Evil” company.
  • The Supreme Court had upheld a lower court ruling requiring Google to delist from its global search results references to a rogue Canadian company that is the subject of an injunction in British Columbia (B.C.)
  • ...14 more annotations...
  • intellectual property infringement.
  • The Google/Equustek case is not one of permissionless innovation, but is still an example of a large internet intermediary taking the position that it can do as it damned well pleases because, after all, it operates in multiple jurisdictions—in fact it operates in cyberspace, where, according to some, normal regulatory practices and laws shouldn’t apply or we will “stifle innovation”.
  • One innovation that Google has instituted is to tweak its geolocation system
  • The excuse of “it’s not my fault; blame the algorithm”, also won’t fly anymore. Google’s algorithms are the “secret sauce” that differentiates it from its competitors, and the dominance of Google is proof of the effectiveness of its search formulae.
    • Carsten Ullrich
       
      courts have become streetwise on the "algorithm"
  • But scooping up every bit of information and interpreting what people want (or what Google thinks they want) through an algorithm has its downsides. A German court has found that Google cannot hide behind its algorithms when it comes to producing perverse search results
  • AI is great, until it isn’t, and there is no doubt that regulators will start to look at legal issues surrounding AI.
  • Companies like Google and Facebook will not be able to duck their responsibility just because results that are potentially illegal are produced by algorithms or AI
  • One area where human judgement is very much involved is in the placing of ads, although Youtube and others are quick to blame automated programs when legitimate ads appear alongside questionable or illegal content. Platforms have no obligation to accept ads as long as they don’t engage in non-competitive trade practices
  • Google has already learned its lesson on pharmaceutical products the hard way, having been fined $500 million in 2011 for running ads on its Adwords service from unlicensed Canadian online pharmacies illegally (according to US law) selling prescription drugs to US consumers.
  • Google is a deep-pocketed corporation but it seems to have got the message when it comes to pharmaceuticals. What galls me is that if Google can remove Adwords placements promoting illegal drug products, why, when I google “watch pirated movies”, do I get an Adwords listing on page 1 of search that says “Watch HD Free Full Movies Online”.
  • At the end of the day whether it is Google, Facebook, Amazon, or any other major internet intermediary, the old wheeze that respect for privacy, respect for copyright and just plain old respect for the law in general gets in the way of innovation is being increasingly shown to be a threadbare argument.
  • What is interesting is that many cyber-libertarians who oppose any attempt to impose copyright obligations and publishing liability on internet platforms are suddenly starting to get nervous about misuse of data by these same platforms when it comes to privacy.
  • This is a remarkable revelation for someone who has not only advocated that Canada adopt in NAFTA the overly-broad US safe harbour provisions found in the Communications Decency Act, a provision that has been widely abused in the US by internet intermediaries as a way of ducking any responsibility for the content they make available, but who has consistently crusaded against any strengthening of copyright laws that might impose greater obligations on internet platforms.
  • proponents of reasonable internet regulation
Carsten Ullrich

Facebook's Hate Speech Policies Censor Marginalized Users | WIRED

  •  
    example of incorrect filtering advanced by LGBT groups
Carsten Ullrich

Algorithm Transparency: How to Eat the Cake and Have It Too - European Law Blog

  • While AI tools still exist in a relative legal vacuum, this blog post explores: 1) the extent of protection granted to algorithms as trade secrets with exceptions of overriding public interest; 2) how the new generation of regulations on the EU and national levels attempt to provide algorithm transparency while preserving trade secrecy; and 3) why the latter development is not a futile endeavour. 
  • Most complex algorithms dominating our lives (including those developed by Google and Facebook) are proprietary, i.e. shielded as trade secrets, while only a negligible minority of algorithms are open source. 
  • Article 2 of the EU Trade Secrets Directive
  • ...11 more annotations...
  • However, the protection granted by the Directive is not absolute. Article 1(2)(b), bolstered by Recital 11, concedes that secrecy will take a back seat if the ‘Union or national rules require trade secret holders to disclose, for reasons of public interest, information, including trade secrets, to the public or to administrative or judicial authorities for the performance of the duties of those authorities’. 
  • With regard to trade secrets in general, in the Microsoft case, the CJEU held that a refusal by Microsoft to share interoperability information with a competitor constituted a breach of Article 102 TFEU.
  • Although trade secrets remained protected from the public and competitors, Google had to disclose Page Rank parameters to the Commission as the administrative authority for the performance of its investigative duties. It is possible that a similar examination will take place in the recently launched probe into Amazon’s treatment of third-party sellers. 
  • For instance, in February 2020, the District Court of the Hague held that the System Risk Indication algorithm that the Dutch government used to detect fraud in areas such as benefits, allowances, and taxes, violated the right to privacy (Article 8 ECHR), inter alia, because it was not transparent enough, i.e. the government has neither publicized the risk model and indicators that make up the risk model, nor submitted them to the Court (para 6 (49)).
  • Article 22 still remains one of the most unenforceable provisions of the GDPR. Some scholars (see, e.g. Wachter) question the existence of such a right to explanation altogether claiming that if the right does not withstand the balancing against trade secrets, it is of little value.
  • In 2019, to ensure competition in the platform economy, the European Parliament and the Council adopted Platform-to-Business (P2B) Regulation. To create a level playing field between businesses, the Regulation for the first time mandates the platforms to disclose to the businesses the main parameters of the ranking systems they employ, i.e. ‘algorithmic sequencing, rating or review mechanisms, visual highlights, or other saliency tools’ while recognising the protection of algorithms by the Trade Secrets Directive (Article 1(5)).
  • The recent Guidelines on ranking transparency by the European Commission interpret the ‘main parameters’ to mean ‘what drove the design of the algorithm in the first place’ (para 41).
  • The German Interstate Media Law that entered into force in October 2020, transposes the revised Audio-Visual Services Directive, but also goes well beyond the Directive in tackling automated decision-making that leads to prioritization and recommendation of content.
  • This obligation to ‘explain the algorithm’ makes it the first national law that, in ensuring fairness for all journalistic and editorial offers, also aims more generally at diversity of opinion and information in the digital space – a distinct human rights dimension. If the provision proves enforceable, it might serve as an example for other Member States to emulate. 
  • Lastly, the draft DSA grants the newly introduced Digital Service Coordinators, the Commission, as well as vetted researchers (under conditions to be specified) the powers of data access to ensure compliance with the DSA. The core of this right, however, is undermined in Article 31(6), which effectively allows the platforms to refuse such access based on trade secrecy concerns. 
  • This shows that although addressing algorithms in a horizontal instrument is a move in the right direction, to make it enforceable, the final DSA, as well as any ensuing guidelines, should differentiate between three tiers of disclosure: 1) full disclosure – granting supervisory bodies the right of access, which may not be refused by the IP owners, to all confidential information; 2) limited disclosure – granting vetted researchers the right of access limited in time and scope, with legal guarantees for protection of trade secrecy; and 3) explanation of main parameters – granting individuals information in accessible language without prejudice to trade secrets. 
Carsten Ullrich

The white paper on online harms is a global first. It has never been more needed | John...

  • Could it be, another wondered, that the flurry of apocalyptic angst reflected the extent to which the Californian Ideology (which held that cyberspace was beyond the reach of the state) had seeped into the souls of even well-intentioned critics?
  • In reality, the problem we have is not the internet so much as those corporations that ride on it and allow some unacceptable activities to flourish on their platforms
  • This is what ethicists call “obligation responsibility” and in this country we call a duty of care.
  • ...8 more annotations...
  • corporate responsibility
  • Since the mid-1990s, internet companies have been absolved from liability – by Section 230 of the 1996 US Telecommunications Act and to some extent by the EU’s e-commerce directive – for the damage that their platforms do.
  • Sooner or later, democracies will have to bring these outfits under control and the only question is how best to do it. The white paper suggests one possible way forward.
  • essentially a responsibility for unintended consequences of the way you have set up and run your business.
  • The white paper says that the government will establish a new statutory duty of care on relevant companies “to take reasonable steps to keep their users safe and tackle illegal and harmful activity on their services”.
  • for example assessing and responding to the risk associated with emerging harms or technology
  • Stirring stuff, eh? It has certainly taken much of the tech industry aback, especially those for whom the idea of government regulation has always been anathema and who regard this fancy new “duty of care” as a legal fantasy dreamed up in an undergraduate seminar.
  • To which the best riposte is perhaps the old Chinese proverb that the longest journey begins with a single step. This white paper is it.
Carsten Ullrich

LG Würzburg, Urteil v. 07.03.2017 - 11 O 2338/16 UVR - Bürgerservice

  • Service providers that store information supplied by users, such as the respondent, must in addition, under Recital 48 of the E-Commerce Directive (ECRL), apply the duty of care that can reasonably be expected of them and that is laid down in national law, in order to detect and prevent certain types of illegal activity.
  • but also such content is treated as “own content” which the service provider has adopted as its own.
  • Accordingly, which measures are reasonable must be determined from the circumstances of the individual case, taking into account and weighing all affected interests and relevant legal considerations. These circumstances include the function and purpose of the service offered; the risk and number of possible infringements; the injured party's own responsibility; the economic benefit in the form of commissions from third-party infringements; advertising for possibly unlawful activities; the facilitation of infringements through the provision of tools and software; the economic cost of review measures; and the effectiveness of review and safeguarding measures that are possible in principle, also with a view to measures for preventing a possible multitude of similar infringements of rights with regard to others
  • ...4 more annotations...
  • Measures are not unreasonable merely because the obligated party would have to deploy additional staff for monitoring; they become unreasonable only where the review effort would call the business model itself into question.
  • On reasonableness grounds, a duty of review may remain limited to clear violations, i.e. gross ones that can readily be recognized without further investigation.
  • Heightened duties of review can also arise for providers that allow their customers to use their services anonymously, so that the customer could not, where necessary, be identified as the infringer (Spindler/Volkmann, in: Spindler/Schuster, Recht der elektronischen Medien, 3rd ed. 2015, § 1004 BGB para. 25, with further references).
  • However, the question of technical feasibility, and with it that of reasonableness, cannot yet be reliably assessed in preliminary injunction proceedings. It exceeds the scope of such proceedings and will have to be examined in the main proceedings, if necessary by expert opinion.
Carsten Ullrich

A more transparent and accountable Internet? Here's how. | LSE Media Policy Project

  • “Procedural accountability” was a focus of discussion at the March 2018 workshop on platform responsibility convened by LSE’s Truth, Trust and Technology Commission. The idea is that firms should be held to account for the effectiveness of their internal processes in tackling the negative social impact of their services.
  • To be credible and trusted, information disclosed by online firms will need to be independently verified.
  • Piloting a Transparency Reporting Framework
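
The standardized reporting repeatedly demanded in the annotations above could, at its simplest, fix a common set of fields that every platform publishes per harm category and period. A minimal sketch, with illustrative field names (an assumption, not a proposed standard), populated from the Facebook Q1 2018 figures quoted earlier:

    # Sketch of a standardized transparency-report record
    # (field names are illustrative assumptions, not an agreed standard).
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class HarmCategoryReport:
        platform: str
        period: str                           # e.g. "2018-Q1"
        category: str                         # e.g. "hate speech"
        items_removed: int
        flagged_by_automation: int            # of the removed items, found proactively
        user_reports_received: Optional[int]  # the figure Facebook currently omits
        prevalence_per_10k: Optional[float]   # None where the platform has no estimate

    report = HarmCategoryReport(
        platform="Facebook",
        period="2018-Q1",
        category="hate speech",
        items_removed=2_500_000,
        flagged_by_automation=950_000,        # 38% of 2.5m, per Facebook's figures
        user_reports_received=None,           # not published
        prevalence_per_10k=None,
    )
    print(report)

Mandating the two fields left as None here, the count of user reports and a prevalence estimate, is precisely the gap the annotated articles say independent verification would have to close.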