Duty of care + Standards _ CU / Group items tagged media

Carsten Ullrich

Facebook is stepping in where governments won't on free expression - Wendy H. Wong and ...

  • The explicit reference to human rights in its charter acknowledges that companies have a role in protecting and enforcing human rights.
  • This is consistent with efforts by the United Nations and other advocacy efforts to create standards on how businesses should be held accountable for human rights abuses. In light of Facebook’s entanglement in misinformation, scandals and election falsehoods, as well as genocide and incitement of violence, it seems particularly pertinent for the company.
  • To date, we have assigned such decision-making powers to states, many of which are accountable to their citizens. Facebook, on the other hand, is unaccountable to citizens in nations around the world, and a single individual (Mark Zuckerberg) holds majority decision-making power at the company.
  • In other cases, human moderators have had their decisions overturned. The Oversight Board also upheld Facebook’s decision to remove a dehumanizing ethnic slur against Azerbaijanis in the context of an active conflict over the Nagorno-Karabakh disputed region.
  • However, the Oversight Board deals with only a small fraction of possible cases.
  • Private organizations are currently the only consistent governors of data and social media.
  • But Facebook and other social media companies do not have to engage in a transparent, publicly accountable process to make their decisions. Facebook nonetheless claims that its decision-making upholds the human right of freedom of expression — yet freedom of expression does not mean the same thing to everyone.
  • Facebook’s dominance in social media, however, is notable not because it’s a private company. Mass communication has been privatized, at least in the U.S., for a long time. Rather, Facebook’s insertion into the regulation of freedom of expression and its claim to support human rights is notable because these have traditionally been the territory of governments. While far from perfect, democracies provide citizens and other groups influence over the enforcement of human rights.
  • Facebook and other social media companies, however, have no such accountability to the public. Ensuring human rights needs to go beyond volunteerism by private companies. Perhaps with the Australia versus Facebook showdown, governments finally have an impetus to pay attention to the effects of technology companies on fundamental human rights.
Carsten Ullrich

European regulation of video-sharing platforms: what's new, and will it work? | LSE Med...

  • This set of rules creates a novel regulatory model
  • Again, leaving regulatory powers to a private entity without any public oversight is clearly not the right solution. But this is also not what, in my opinion, the new AVMSD does
  • But without transparency and information about individual cases, you surely can’t say whether the takedowns are really improving the media environment, or the providers are just trying to get rid of any controversial content – or, indeed, the content somebody just happens to be complaining about.
  • The regulator, on the other hand, has a more detached role compared to older types of media regulation: it mainly assesses whether the mechanisms established by the provider comply with the law
  • This approach gives rise to concerns that we are just outsourcing regulation to private companies.
  • Indeed, the delegation of the exercise of regulatory powers to a private entity could be very damaging to freedom of speech and media.
  • So, I think the legal groundwork for protection but also the fair treatment of users is in the directive. Now it depends on the member states to implement it in such a way that this potential will be fulfilled (and the European Commission has a big role in this process).
Carsten Ullrich

A more transparent and accountable Internet? Here's how. | LSE Media Policy Project

  • “Procedural accountability” was a focus of discussion at the March 2018 workshop on platform responsibility convened by LSE’s Truth, Trust and Technology Commission. The idea is that firms should be held to account for the effectiveness of their internal processes in tackling the negative social impact of their services.
  • To be credible and trusted, information disclosed by online firms will need to be independently verified.
  • Piloting a Transparency Reporting Framework
Carsten Ullrich

Twitter to ask users to rethink abusive messages - a promising step towards 'slowcial m...

  • All this seems to suggest that social media platforms are a unique environment where individuals post with little prior consideration as to whether that post could offend or upset others.
Carsten Ullrich

The battle against disinformation is global - Scott Shackelford | Inforrm's Blog

  • the EU is spending more money on combating disinformation across the board by hiring new staff with expertise in data mining and analytics to respond to complaints and proactively detect disinformation
  • The EU also seems to be losing patience with Silicon Valley. It pressured social media giants like Facebook, Google and Twitter to sign the Code of Practice on Disinformation in 2018.
Carsten Ullrich

The Next Wave of Platform Governance - Centre for International Governance Innovation

  • The shift from product- and service-based to platform-based business creates a new set of platform governance implications — especially when these businesses rely upon shared infrastructure from a small, powerful group of technology providers (Figure 1).
  • The industries in which AI is deployed, and the primary use cases it serves, will naturally determine the types and degrees of risk, from health and physical safety to discrimination and human-rights violations. Just as disinformation and hate speech are known risks of social media platforms, fatal accidents are a known risk of automobiles and heavy machinery, whether they are operated by people or by machines. Bias and discrimination are potential risks of any automated system, but they are amplified and pronounced in technologies that learn, whether autonomously or by training, from existing data.
  • Business Model-Specific Implications
  • The implications of cloud platforms such as Salesforce, Microsoft, Apple, Amazon and others differ again. A business built on a technology platform with a track record of well-developed data and model governance, audit capability, responsible product development practices and a culture and track record of transparency will likely reduce some risks related to biased data and model transparency, while encouraging (and even enforcing) adoption of those same practices and norms throughout its ecosystem.
  • policies that govern their internal practices for responsible technology development; guidance, tools and educational resources for their customers’ responsible use of their technologies; and policies (enforced in terms of service) that govern the acceptable use of not only their platforms but also specific technologies, such as face recognition or gait detection.
  • This is a time-sensitive issue
  • Audit is another area that, while promising, is also fraught with potential conflict. Companies such as O’Neil Risk Consulting and Algorithmic Auditing, founded by the author of Weapons of Math Destruction, Cathy O’Neil, provide algorithmic audit and other services intended to help companies better understand and remediate data and model issues related to discriminatory outcomes. Unlike, for example, audits of financial statements, algorithmic audit services are as yet entirely voluntary, lack oversight by any type of governing board, and do not carry disclosure requirements or penalties. As a result, no matter how thorough the analysis or comprehensive the results, these types of services are vulnerable to manipulation or exploitation by their customers for “ethics-washing” purposes.
  • We must broaden our understanding of platforms beyond social media sites to other types of business platforms, examine those risks in context, and approach governance in a way that accounts not only for the technologies themselves, but also for the disparate impacts among industries and business models.
  • At the same time, overreliance on a small, well-funded, global group of technology vendors to set the agenda for responsible and ethical use of AI may create a novel set of risks.
  • Large technology companies — for a range of reasons — are trying to fill the policy void, creating the potential for a kind of demilitarized zone for AI, one in which neither established laws nor corporate policy hold sway.
Carsten Ullrich

The white paper on online harms is a global first. It has never been more needed | John...

  • Could it be, another wondered, that the flurry of apocalyptic angst reflected the extent to which the Californian Ideology (which held that cyberspace was beyond the reach of the state) had seeped into the souls of even well-intentioned critics?
  • In reality, the problem we have is not the internet so much as those corporations that ride on it and allow some unacceptable activities to flourish on their platforms
  • This is what ethicists call “obligation responsibility” and in this country we call a duty of care.
  • corporate responsibility
  • Since the mid-1990s, internet companies have been absolved from liability – by Section 230 of the 1996 US Telecommunications Act and to some extent by the EU’s e-commerce directive – for the damage that their platforms do.
  • Sooner or later, democracies will have to bring these outfits under control and the only question is how best to do it. The white paper suggests one possible way forward.
  • essentially a responsibility for unintended consequences of the way you have set up and run your business.
  • The white paper says that the government will establish a new statutory duty of care on relevant companies “to take reasonable steps to keep their users safe and tackle illegal and harmful activity on their services”.
  • for example assessing and responding to the risk associated with emerging harms or technology
  • Stirring stuff, eh? It has certainly taken much of the tech industry aback, especially those for whom the idea of government regulation has always been anathema and who regard this fancy new “duty of care” as a legal fantasy dreamed up in an undergraduate seminar.
  • To which the best riposte is perhaps the old Chinese proverb that the longest journey begins with a single step. This white paper is it.