Group items tagged: platform

Carsten Ullrich

American Internet, American Platforms, American Values - Centre for International Governance Innovation - 0 views

  • Non-Americans should not be satisfied with this state of affairs, which basically amounts to Americans fighting with other Americans about how to run the world.
    • Carsten Ullrich
       
      !!!
  • that is, the idea that people should have a say in the rules that govern their activities. The Manila Principles, moreover, place an inordinate emphasis on domestic courts to regulate platforms, even though, as my co-author Keller notes, courts lack the expertise and policy-making capacity to do so.
  • What all of these proposals have in common, beyond adopting the American free-speech debate as their starting point, is that they treat these large platforms as an unalterable fact of life. They consider the main question to be not whether these platforms should be making decisions for billions of non-Americans, but how they should make these decisions.
  • ...10 more annotations...
  • The democratic right for non-Americans to determine the rules under which we should live is not even considered. Instead, attempts by democratic governments to impose legitimate democratic regulation on these companies, many of which have assumed the status of essential infrastructure, are derided as creeping authoritarianism or as a threat to the free and open internet.
  • At the very least, thinking of internet governance in these terms should make us more sympathetic to attempts by the Australian, Canadian, German and United Kingdom governments to legislate in this area, rather than be dismissive of the legitimacy of (democratic) governance on its face. If we value democratic oversight, state regulation is almost the only game in town, an approach that can be complemented with international treaty-making among democratic states so as to create agreed-upon minimum standards for regulating cross-border platform activities.
  • To address the first question, in a sense, the global American platforms are free riders on the notion that the internet as a network should be global in reach. Here, a useful analogy is the global financial system. Although we have a global financial system, it is characterized by domestic regulation and, in many countries
  • many of the social harms perpetuated by platforms are the likely result of their business models, which incentivize extremist speech and pervasive surveillance
  • Speech regulation without addressing these root causes is unlikely to be successful. If tools such as internet search functions truly have become essential to knowledge discovery and exhibit natural monopoly characteristics, countries should have the ability to determine for themselves what form they should take. To be blunt, public ownership should be on the table, even if it isn’t, currently, in the United States.
  • Google’s threat (which mirrored Facebook’s) to cut off its search service to Australia was likely due as much, if not more, to Australia’s plan to exercise oversight over its proprietary algorithm than it was about Australia’s plan to force Google to give a cut of its revenues to various Australian media outlets. The harshness of this threat highlights exactly how hard it will be for non-US countries to exert any meaningful control over the services currently monopolized by these US companies.
  • Already, the United States, as the home of these companies, is working to solidify the market and social dominance of its platforms.
  • As already mentioned, the CUSMA contains provisions protecting free cross-border data flows that, while justified in terms of encouraging trade, serve to preserve the dominance of the US platforms in Canada and Mexico. To this, we can add its successful inclusion of CDA Section 230 language in the agreement, effectively pre-empting Canadian and Mexican debates over what values we wish to apply to platform governance.
  • The first step to coming up with a sound policy involves understanding the policy terrain. In internet governance, and particularly in platform governance, this involves understanding the extent to which the dominant debates and landscape reflect particular US interests and values
  • These interests and values do not necessarily reflect those of people living in other countries. Both Canadians and Americans believe in free speech and market competition. However, our interpretations of the limits of each differ. This reality — the acknowledgement of legitimate differences and the necessity of democratic accountability — should be our starting point in discussions of internet governance, not the desire to preserve a global internet and platform ecosystem that is much less global, and much more American, than it appears.
Carsten Ullrich

The Next Wave of Platform Governance - Centre for International Governance Innovation - 0 views

  • The shift from product- and service-based to platform-based business creates a new set of platform governance implications — especially when these businesses rely upon shared infrastructure from a small, powerful group of technology providers (Figure 1).
  • The industries in which AI is deployed, and the primary use cases it serves, will naturally determine the types and degrees of risk, from health and physical safety to discrimination and human-rights violations. Just as disinformation and hate speech are known risks of social media platforms, fatal accidents are a known risk of automobiles and heavy machinery, whether they are operated by people or by machines. Bias and discrimination are potential risks of any automated system, but they are amplified and pronounced in technologies that learn, whether autonomously or by training, from existing data.
  • Business Model-Specific Implications
  • ...7 more annotations...
  • The implications of cloud platforms such as Salesforce, Microsoft, Apple, Amazon and others differ again. A business built on a technology platform with a track record of well-developed data and model governance, audit capability, responsible product development practices and a culture and track record of transparency will likely reduce some risks related to biased data and model transparency, while encouraging (and even enforcing) adoption of those same practices and norms throughout its ecosystem.
  • policies that govern their internal practices for responsible technology development; guidance, tools and educational resources for their customers’ responsible use of their technologies; and policies (enforced in terms of service) that govern the acceptable use of not only their platforms but also specific technologies, such as face recognition or gait detection.
  • At the same time, overreliance on a small, well-funded, global group of technology vendors to set the agenda for responsible and ethical use of AI may create a novel set of risks.
  • Audit is another area that, while promising, is also fraught with potential conflict. Companies such as O’Neil Risk Consulting and Algorithmic Auditing, founded by the author of Weapons of Math Destruction, Cathy O’Neil, provide algorithmic audit and other services intended to help companies better understand and remediate data and model issues related to discriminatory outcomes. Unlike, for example, audits of financial statements, algorithmic audit services are as yet entirely voluntary, lack oversight by any type of governing board, and do not carry disclosure requirements or penalties. As a result, no matter how thorough the analysis or comprehensive the results, these types of services are vulnerable to manipulation or exploitation by their customers for “ethics-washing” purposes. (A minimal sketch of one such audit check appears after this list.)
  • We must broaden our understanding of platforms beyond social media sites to other types of business platforms, examine those risks in context, and approach governance in a way that accounts not only for the technologies themselves, but also for the disparate impacts among industries and business models.
  • This is a time-sensitive issue
  • Large technology companies — for a range of reasons — are trying to fill the policy void, creating the potential for a kind of demilitarized zone for AI, one in which neither established laws nor corporate policy hold sway.
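
By way of illustration of the audit bullet above: a minimal, hypothetical sketch of one check an algorithmic audit service might run, the “four-fifths” disparate-impact ratio between a protected group and a reference group. The data, the 0.8 threshold and the function names are invented for illustration, not any auditor’s actual methodology.

```python
# Hypothetical sketch: the "four-fifths rule" disparate-impact check,
# one common test in algorithmic audits. All data here are invented.

def selection_rate(outcomes: list[int]) -> float:
    """Fraction of favourable (1) decisions in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected: list[int], reference: list[int]) -> float:
    """Ratio of the protected group's selection rate to the reference group's.
    Values below ~0.8 are a widely used red flag for disparate impact."""
    return selection_rate(protected) / selection_rate(reference)

if __name__ == "__main__":
    # 1 = favourable automated decision (e.g., loan approved), 0 = not
    protected_group = [1, 0, 0, 1, 0, 0, 0, 1]
    reference_group = [1, 1, 0, 1, 1, 0, 1, 1]
    ratio = disparate_impact_ratio(protected_group, reference_group)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Below the four-fifths threshold: flag for remediation.")
```
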
Carsten Ullrich

A New Blueprint for Platform Governance | Centre for International Governance Innovation - 0 views

  • We often talk about the “online environment.” This metaphorical language makes it seem like the online space looks similar to our offline world. For example, the term “information pollution,” coined by Claire Wardle, is increasingly being used to discuss disinformation online.  
  • It is even harder to prove direct connections between online platforms and offline harms. This is partly because platforms are not transparent.
  • Finally, this analogy reminds us that both problems are dispiritingly hard to solve. Two scholars, Whitney Phillips and Ryan Milner, have suggested that our online information problems are ecosystemic, similar to the climate crisis.
  • ...12 more annotations...
  • As Phillips argues, “we’re not going to solve the climate crisis if people just stop drinking water out of water bottles. But we need to start minimizing the amount of pollution that’s even put into the landscape. It’s a place to start; it’s not the place to end.”
  • There may not be a one-size-fits-all analogy for platforms, but “horizontalizing” can help us to understand which solutions worked in other industries, which were under-ambitious and which had unintended consequences. Comparing horizontally also reminds us that the problems of how to regulate the online world are not unique, and will prove as difficult to resolve as those of other large industries.  
  • The key to vertical thinking is to figure out how not to lock in incumbents or to tilt the playing field even more toward them. We often forget that small rivals do exist, and our regulation should think about how to include them. This means fostering a market that has room for ponies and stable horses as well as unicorns.
  • Vertical thinking has started to spread in Washington, DC. In mid-January, the antitrust subcommittee in Congress held a hearing with four smaller tech firms. All of them asked for regulatory intervention. The CEO of phone accessory maker PopSockets called Amazon’s behaviour “bullying with a smile.” Amazon purportedly ignored the sale of counterfeit PopSockets products on its platform and punished PopSockets for wanting to end its relationship with Amazon. Both Republicans and Democrats seemed sympathetic to smaller firms’ travails. The question is how to adequately address vertical concerns.
  • Without Improved Governance, Big Firms Will Weaponize Regulation
  • One is the question of intellectual property.
  • Big companies can marshal an army of lawyers, which even medium-sized firms could never afford to do.
  • A second aspect to consider is sliding scales of regulation.
  • A third aspect is burden of proof. One option is to flip the present default and make big companies prove that they are not engaging in harmful behaviour
  • The EU head of antitrust, Margrethe Vestager, is considering whether to turn this on its head: in cases where the European Union suspects monopolistic behaviour, major digital platforms would have to prove that users benefit from their services.
  • Companies would have to prove gains, rather than Brussels having to prove damages. This change would relieve pressure on smaller companies to show harms. It would put obligations on companies such as Google, which Vestager sees as so dominant that she has called them “de facto regulators” in their markets. 
  • A final aspect to consider is possibly mandating larger firms to open up.
Carsten Ullrich

Systemic Duties of Care and Intermediary Liability - Daphne Keller | Inforrm's Blog - 0 views

  • Pursuing two reasonable-sounding goals for platform regulation
  • First, they want platforms to abide by a “duty of care,” going beyond today’s notice-and-takedown based legal models
  • Second, they want to preserve existing immunities
  • ...8 more annotations...
  • “Systemic duty of care” is a legal standard for assessing a platform’s overall system for handling harmful online content. It is not intended to define liability for any particular piece of content, or the outcome of particular litigation disputes.
  • The basic idea is that platforms should improve their systems for reducing online harms. This could mean following generally applicable rules established in legislation, regulations, or formal guidelines; or it could mean working with the regulator to produce and implement a platform-specific plan.
  • In one sense I have a lot of sympathy for this approach
  • In another sense, I am quite leery of the duty of care idea.
  • The actions platforms might take to comply with an SDOC generally fall into two categories. The first encompasses improvements to existing notice-and-takedown systems.
  • The second SDOC category – which is in many ways more consequential – includes obligations for platforms to proactively detect and remove or demote such content.
  • Proactive Monitoring Measures
    • Carsten Ullrich
       
      this is a bit too narrow; proactivity really means a risk-based approach: not just monitoring, but monitoring for threats and risks
  • The eCommerce Directive and DMCA both permit certain injunctions, even against intermediaries that are otherwise immune from damages. Here again, the platform’s existing capabilities – its capacity to know about and control user content – matter. In the U.K. Mosley v. Google case, for example, the claimant successfully argued that because Google already used technical filters to block illegal child sexual abuse material, it could potentially be compelled to filter the additional images at issue in his case.
Carsten Ullrich

Is the Era of "Permissionless Innovation" and Avoidance of Regulation on the Internet F... - 0 views

  • avoidance of regulation that the Silicon Valley platforms
  • It hasn’t been a great couple of weeks for the “Don’t Be Evil” company.
  • The Supreme Court had upheld a lower court ruling requiring Google to delist from its global search results references to a rogue Canadian company that is the subject of an injunction in British Columbia (B.C.)
  • ...14 more annotations...
  • intellectual property infringement.
  • The Google/Equustek case is not one of permissionless innovation, but is still an example of a large internet intermediary taking the position that it can do as it damned well pleases because, after all, it operates in multiple jurisdictions—in fact it operates in cyberspace, where, according to some, normal regulatory practices and laws shouldn’t apply or we will “stifle innovation”.
  • One innovation that Google has instituted is to tweak its geolocation system
  • The excuse of “it’s not my fault; blame the algorithm”, also won’t fly anymore. Google’s algorithms are the “secret sauce” that differentiates it from its competitors, and the dominance of Google is proof of the effectiveness of its search formulae.
    • Carsten Ullrich
       
      courts have become streetwise on the "algorithm"
  • But scooping up every bit of information and interpreting what people want (or what Google thinks they want) through an algorithm has its downsides. A German court has found that Google cannot hide behind its algorithms when it comes to producing perverse search results
  • AI is great, until it isn’t, and there is no doubt that regulators will start to look at legal issues surrounding AI.
  • Companies like Google and Facebook will not be able to duck their responsibility just because results that are potentially illegal are produced by algorithms or AI
  • One area where human judgement is very much involved is in the placing of ads, although YouTube and others are quick to blame automated programs when legitimate ads appear alongside questionable or illegal content. Platforms have no obligation to accept ads as long as they don’t engage in anti-competitive trade practices
  • Google has already learned its lesson on pharmaceutical products the hard way, having been fined $500 million in 2011 for running ads on its AdWords service from unlicensed Canadian online pharmacies illegally (according to US law) selling prescription drugs to US consumers.
  • Google is a deep-pocketed corporation but it seems to have got the message when it comes to pharmaceuticals. What galls me is that if Google can remove AdWords placements promoting illegal drug products, why, when I google “watch pirated movies”, do I get an AdWords listing on page 1 of search results that says “Watch HD Free Full Movies Online”.
  • At the end of the day whether it is Google, Facebook, Amazon, or any other major internet intermediary, the old wheeze that respect for privacy, respect for copyright and just plain old respect for the law in general gets in the way of innovation is being increasingly shown to be a threadbare argument.
  • What is interesting is that many cyber-libertarians who oppose any attempt to impose copyright obligations and publishing liability on internet platforms are suddenly starting to get nervous about misuse of data by these same platforms when it comes to privacy.
  • This is a remarkable revelation for someone who has not only advocated that Canada adopt in NAFTA the overly-broad US safe harbour provisions found in the Communications Decency Act, a provision that has been widely abused in the US by internet intermediaries as a way of ducking any responsibility for the content they make available, but who has consistently crusaded against any strengthening of copyright laws that might impose greater obligations on internet platforms.
  • proponents of reasonable internet regulation
Carsten Ullrich

Article - 0 views

  • Self-assessment reports submitted by Facebook, Google, Microsoft, Mozilla and Twitter
  • Observed that “[a]ll platform signatories deployed policies and systems to ensure transparency around political advertising, including a requirement that all political ads be clearly labelled as sponsored content and include a ‘paid for by’ disclaimer.”
  • While some of the platforms have gone to the extent of banning political ads, the transparency of issue-based advertising is still significantly neglected.
  • ...5 more annotations...
  • There are notable differences in scope
  • inauthentic behaviour, including the suppression of millions of fake accounts and the implementation of safeguards against malicious automated activities.
  • more granular information is needed to better assess malicious behaviour specifically targeting the EU and the progress achieved by the platforms to counter such behaviour.”
  • several tools have been developed to help consumers evaluate the reliability of information sources, and to open up access to platform data for researchers.
    • Carsten Ullrich
       
      one element of a technical standard: the degree to which consumers are provided with transparent content-assessment tools; transparency still lagging!
  • platforms have not demonstrated much progress in developing and implementing trustworthiness indicators in collaboration with the news ecosystem”, and “some consumer empowerment tools are still not available in most EU Member States.”
Carsten Ullrich

Algorithm Transparency: How to Eat the Cake and Have It Too - European Law Blog - 0 views

  • While AI tools still exist in a relative legal vacuum, this blog post explores: 1) the extent of protection granted to algorithms as trade secrets with exceptions of overriding public interest; 2) how the new generation of regulations on the EU and national levels attempt to provide algorithm transparency while preserving trade secrecy; and 3) why the latter development is not a futile endeavour. 
  • The most complex algorithms dominating our lives (including those developed by Google and Facebook) are proprietary, i.e. shielded as trade secrets, while only a negligible minority of algorithms are open source. 
  • Article 2 of the EU Trade Secrets Directive
  • ...11 more annotations...
  • However, the protection granted by the Directive is not absolute. Article 1(2)(b), bolstered by Recital 11, concedes that secrecy will take a back seat if the ‘Union or national rules require trade secret holders to disclose, for reasons of public interest, information, including trade secrets, to the public or to administrative or judicial authorities for the performance of the duties of those authorities’. 
  • With regard to trade secrets in general, in the Microsoft case, the CJEU held that a refusal by Microsoft to share interoperability information with a competitor constituted a breach of Article 102 TFEU.
  • Although trade secrets remained protected from the public and competitors, Google had to disclose PageRank parameters to the Commission as the administrative authority for the performance of its investigative duties. It is possible that a similar examination will take place in the recently launched probe into Amazon’s treatment of third-party sellers. 
  • For instance, in February 2020, the District Court of the Hague held that the System Risk Indication algorithm that the Dutch government used to detect fraud in areas such as benefits, allowances, and taxes, violated the right to privacy (Article 8 ECHR), inter alia, because it was not transparent enough, i.e. the government has neither publicized the risk model and indicators that make up the risk model, nor submitted them to the Court (para 6 (49)).
  • Article 22 still remains one of the most unenforceable provisions of the GDPR. Some scholars (see, e.g. Wachter) question the existence of such a right to explanation altogether claiming that if the right does not withstand the balancing against trade secrets, it is of little value.
  • In 2019, to ensure competition in the platform economy, the European Parliament and the Council adopted Platform-to-Business (P2B) Regulation. To create a level playing field between businesses, the Regulation for the first time mandates the platforms to disclose to the businesses the main parameters of the ranking systems they employ, i.e. ‘algorithmic sequencing, rating or review mechanisms, visual highlights, or other saliency tools’ while recognising the protection of algorithms by the Trade Secrets Directive (Article 1(5)).
  • The recent Guidelines on ranking transparency by the European Commission interpret the ‘main parameters’ to mean ‘what drove the design of the algorithm in the first place’ (para 41).
  • The German Interstate Media Law that entered into force in October 2020 transposes the revised Audiovisual Media Services Directive, but also goes well beyond the Directive in tackling automated decision-making that leads to prioritization and recommendation of content.
  • This obligation to ‘explain the algorithm’ makes it the first national law that, in ensuring fairness for all journalistic and editorial offers, also aims more generally at diversity of opinion and information in the digital space – a distinct human rights dimension. If the provision proves enforceable, it might serve as an example for other Member States to emulate. 
  • Lastly, the draft DSA grants the newly introduced Digital Service Coordinators, the Commission, as well as vetted researchers (under conditions to be specified) the powers of data access to ensure compliance with the DSA. The core of this right, however, is undermined in Article 31(6), which effectively allows the platforms to refuse such access based on trade secrecy concerns. 
  • This shows that although addressing algorithms in a horizontal instrument is a move in the right direction, to make it enforceable, the final DSA, as well as any ensuing guidelines, should differentiate between three tiers of disclosure: 1) full disclosure – granting supervisory bodies the right of access, which may not be refused by the IP owners, to all confidential information; 2) limited disclosure – granting vetted researchers the right of access limited in time and scope, with legal guarantees for protection of trade secrecy; and 3) explanation of main parameters – granting individuals information in accessible language without prejudice to trade secrets. 
Carsten Ullrich

Article - 0 views

  • On 6 February 2020, the audiovisual regulator of the French-speaking community of Belgium (Conseil supérieur de l’audiovisuel – CSA) published a guidance note on the fight against certain forms of illegal Internet content, in particular hate speech
  • In the note, the CSA begins by summarising the current situation, highlighting the important role played by content-sharing platforms and their limited responsibility. It emphasises that some content can be harmful to young people in particular, whether they are the authors or victims of the content. It recognises that regulation, in its current form, is inappropriate and creates an imbalance between the regulation of online content-sharing platform operators, including social networks, and traditional players in the audiovisual sector
  • Could take its own legislative measures without waiting for work to start on an EU directive on the subject. 
  • ...6 more annotations...
  • If it advocates crimes against humanity; incites or advocates terrorist acts; or incites hatred, violence, discrimination or insults against a person or a group of people on grounds of origin, alleged race, religion, ethnic background, nationality, gender, sexual orientation, gender identity or disability, whether real or alleged.
  • obligations be imposed on the largest content-sharing platform operators, that is, any natural or legal person offering, on a professional basis, whether for remuneration or not, an online content-sharing platform, wherever it is based, used by at least 20% of the population of the French-speaking region of Belgium or the bilingual Brussels-Capital region.
  • Obliged to remove or block content notified to them that is ‘clearly illegal’ within 24 hours.
  • need to put in place reporting procedures as well as processes for contesting their decisions
  • appoint an official contact person
  • half-yearly report on compliance with their obligation
Carsten Ullrich

How Platforms Could Benefit from the Precautionary Principle | Centre for International Governance Innovation - 0 views

  • Risk assessments: First, companies could conduct risk-based assessments, as commonly happens for large-scale infrastructure projects. No engineer builds a bridge without calculating its stability. If platform companies want to be our online infrastructure, we might ask for similar levels of care as for physical infrastructure. (A bare-bones sketch of such a risk register follows this list.)
  • First, if governments used the precautionary principle to ask for risk assessments, these assessments themselves would not be foolproof and could be gamed.
  • Third, the precautionary principle can lock in big players and stifle innovation. If risk assessments are expensive, only the larger companies will be able to afford them.
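
As a bare-bones sketch of the risk-assessment idea in the first bullet above: a hypothetical risk register that scores each harm as likelihood × severity. The harm categories, the 1-5 scales and the cut-off are invented, not drawn from any regulator’s template.

```python
# Hypothetical platform risk register: score = likelihood x severity
# on 1-5 scales. Categories, numbers and the cut-off are all invented.
RISKS = [
    {"harm": "amplification of hate speech",          "likelihood": 4, "severity": 4},
    {"harm": "election disinformation",               "likelihood": 3, "severity": 5},
    {"harm": "exposure of minors to harmful content", "likelihood": 3, "severity": 4},
]

MITIGATION_CUTOFF = 12  # scores at or above this demand a documented mitigation

# Rank harms by score, highest first, and flag those needing mitigation
for risk in sorted(RISKS, key=lambda r: r["likelihood"] * r["severity"], reverse=True):
    score = risk["likelihood"] * risk["severity"]
    action = "mitigation plan required" if score >= MITIGATION_CUTOFF else "monitor"
    print(f'{risk["harm"]:<45} score={score:>2}  {action}')
```
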
Carsten Ullrich

Broad Consequences of a Systemic Duty of Care for Platforms - Daphne Keller [Updated] |... - 0 views

  • On the up-side, flexible standards would give platforms more leeway to figure out meaningful technical improvements, and perhaps arrive at more nuanced automated assessment of content over time
  • The down-sides of open-ended SDOC standards could be considerable, though. Proactive measures devised by platforms themselves would, even when coupled with transparency obligations, be far less subject to meaningful public review and accountability
Carsten Ullrich

How to regulate Facebook and the online giants in one word: transparency - George Brock... - 0 views

  • New responsibilities arise from these changes.
  • Greater transparency will disclose whether further regulation is required and make it better targeted, providing specific remedies for clearly identified ills.
  • If Facebook and others must account in detail to an electoral commission or data protection authority for micro-targeting or “dark” ads, are forbidden from deleting certain relevant data, and must submit to algorithm audits, they will be forced to foresee and to try to solve some of the problems which they have been addressing so slowly
  • ...1 more annotation...
  • Transparency would have its own radical effect inside the tech giants
Carsten Ullrich

European regulation of video-sharing platforms: what's new, and will it work? | LSE Media Policy Project - 0 views

  • This set of rules creates a novel regulatory model
  • Again, leaving regulatory powers to a private entity without any public oversight is clearly not the right solution. But this is also not what, in my opinion, the new AVMSD does
  • But without transparency and information about individual cases, you surely can’t say whether the takedowns are really improving the media environment, or the providers are just trying to get rid of any controversial content – or, indeed, the content somebody just happens to be complaining about.
  • ...4 more annotations...
  • The regulator, on the other hand, has a more detached role when compared to older types of media regulation: it mainly assesses whether the mechanisms established by the provider comply with the law
  • This approach gives rise to concerns that we are just outsourcing regulation to private companies.
  • Indeed, the delegation of the exercise of regulatory powers to a private entity could be very damaging to freedom of speech and media.
  • So, I think the legal groundwork for protection but also the fair treatment of users is in the directive. Now it depends on the member states to implement it in such a way that this potential will be fulfilled (and the European Commission has a big role in this process).
Carsten Ullrich

The white paper on online harms is a global first. It has never been more needed | John Naughton - 0 views

  • Could it be, another wondered, that the flurry of apocalyptic angst reflected the extent to which the Californian Ideology (which held that cyberspace was beyond the reach of the state) had seeped into the souls of even well-intentioned critics?
  • In reality, the problem we have is not the internet so much as those corporations that ride on it and allow some unacceptable activities to flourish on their platforms
  • This is what ethicists call “obligation responsibility” and in this country we call a duty of care.
  • ...8 more annotations...
  • corporate responsibility
  • Since the mid-1990s, internet companies have been absolved from liability – by Section 230 of the 1996 US Telecommunications Act and to some extent by the EU’s e-commerce directive – for the damage that their platforms do.
  • Sooner or later, democracies will have to bring these outfits under control and the only question is how best to do it. The white paper suggests one possible way forward.
  • essentially a responsibility for unintended consequences of the way you have set up and run your business.
  • The white paper says that the government will establish a new statutory duty of care on relevant companies “to take reasonable steps to keep their users safe and tackle illegal and harmful activity on their services”.
  • for example assessing and responding to the risk associated with emerging harms or technology
  • Stirring stuff, eh? It has certainly taken much of the tech industry aback, especially those for whom the idea of government regulation has always been anathema and who regard this fancy new “duty of care” as a legal fantasy dreamed up in an undergraduate seminar.
  • To which the best riposte is perhaps the old Chinese proverb that the longest journey begins with a single step. This white paper is it.
Carsten Ullrich

A more transparent and accountable Internet? Here's how. | LSE Media Policy Project - 0 views

  • Procedural accountability” was a focus of discussion at the March 2018 workshop on platform responsibility convened by LSE’s Truth, Trust and Technology Commission. The idea is that firms should be held to account for the effectiveness of their internal processes in tackling the negative social impact of their services.
  • To be credible and trusted, information disclosed by online firms will need to be independently verified.
  • Piloting a Transparency Reporting Framework
Carsten Ullrich

Article - 0 views

  • new measures are designed to make it easier to identify hate crime on the Internet. In future, platforms such as Facebook, Twitter and YouTube will not only be able to delete posts that incite hatred or contain death threats, but also report them to the authorities, along with the user’s IP address.
  • Possibility of extending the scope of the Netzwerkdurchsetzungsgesetz (Network Enforcement Act)
  • new rules on hate crime will be added to the German Strafgesetzbuch (Criminal Code), while the definition of existing offences will be amended to take into account the specific characteristics of the Internet.
    • Carsten Ullrich
       
      internet-specific normative considerations?
Carsten Ullrich

Problems with Filters in the European Commission's Platforms Proposal - Daphne Keller |... - 0 views

  • They are shockingly expensive – YouTube’s Content ID had cost Google $60 million as of several years ago – so only incumbents can afford them. Start-ups forced to build them won’t be able to afford it, or will build lousy ones with high error rates. Filters address symptoms and leave underlying problems to fester – like, in the case of radical Islamist material, the brutal conflict in Syria, the global refugee crisis, and the marginalization of Muslim immigrants to the US and Europe. All these problems make filters incredibly hard to justify without some great demonstrated upside – but no one has demonstrated such a thing.
  • The DMCA moves literally billions of disputes about online speech out of courts and into the hands of private parties.
  • That allocative choice was reasonable in 1998, and it remains reasonable in 2016.
    • Carsten Ullrich
       
      I don't think so.
  • ...1 more annotation...
  • The Internet has grown exponentially in size since the DMCA was enacted, but we should not forget that the problem of large-scale infringement was an expected development—and one that the safe harbors were specifically designed to manage.
    • Carsten Ullrich
       
      any proof for that assertion?
Carsten Ullrich

JIPLP: Editorial - Control of content on social media - 0 views

  • Can technology resolve these issues? As regards technical solutions, there are already examples of these, such as YouTube’s Content ID, an automated piece of software that scans material uploaded to the site for IP infringement by comparing it against a database of registered IPs. The next challenge may be how these types of systems can be harnessed by online platform providers to address extreme and hate crime content. Again the dilemma for policy- and law-makers may be the extent to which they are prepared to cede control over content to technology companies, which will become judge, jury and executioner. (A toy sketch of the matching step such systems automate follows this list.)
  • who should bear the cost of monitoring and removal.
  • To block access to websites where infringing content has been hosted. In Cartier International AG & Ors v British Sky Broadcasting Ltd & Ors [2016] EWCA Civ 658 the Court of Appeal concluded that it is entirely reasonable to expect ISPs to pay the costs associated with implementing mechanisms to block access to sites where infringing content has been made available
  • ...1 more annotation...
  • Thus the cost of implementing the order could therefore be regarded as just another overhead associated with ISPs carrying on their business
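
To make the Content ID description above concrete: a toy sketch of the matching step such filters automate. Real systems use robust perceptual audio/video fingerprints that survive re-encoding and cropping; the exact byte hash below is a deliberate simplification that only catches identical copies.

```python
# Toy sketch of filter-style matching: compare uploads against a
# database of registered works. Real systems (e.g. YouTube's Content ID)
# use perceptual fingerprints; hashing raw bytes, as here, only catches
# exact copies and is purely illustrative.
import hashlib

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

class ContentRegistry:
    def __init__(self) -> None:
        # fingerprint -> rights holder who registered the work
        self._works: dict[str, str] = {}

    def register(self, content: bytes, rights_holder: str) -> None:
        self._works[fingerprint(content)] = rights_holder

    def check_upload(self, content: bytes) -> str | None:
        """Return the rights holder if the upload matches a registered work."""
        return self._works.get(fingerprint(content))

registry = ContentRegistry()
registry.register(b"<bytes of a registered work>", "Example Rights Holder")
match = registry.check_upload(b"<bytes of a registered work>")
print(f"Match: {match}" if match else "No match")
```
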
Carsten Ullrich

Tech companies can distinguish between free speech and hate speech if they want to - Da... - 0 views

  • Facebook has come under recent criticism for censoring LGBTQ people’s posts because they contained words that Facebook deems offensive. At the same time, the LGBTQ community is one of the groups frequently targeted with hate speech on the platform. If users seem to “want their cake and eat it too”, the tech companies are similarly conflicted.
  • At the same time, the laws of many countries like Germany, and other international conventions, explicitly limit these freedoms when it comes to hate speech.
  • It would not be impossible for tech companies to form clear guidelines within their own platforms about what was and wasn’t permissible. For the mainly US companies, this would mean that they would have to be increasingly aware of the differences between US law and culture and those of other countries.