Home/ Duty of care + Standards _ CU/ Group items tagged disinformation

Carsten Ullrich

The battle against disinformation is global - Scott Shackelford | Inforrm's Blog

  • the EU is spending more money on combating disinformation across the board by hiring new staff with expertise in data mining and analytics to respond to complaints and proactively detect disinformation
  • The EU also seems to be losing patience with Silicon Valley. It pressured social media giants like Facebook, Google and Twitter to sign the Code of Practice on Disinformation in 2018.
Carsten Ullrich

A New Blueprint for Platform Governance | Centre for International Governance Innovation

  • We often talk about the “online environment.” This metaphorical language makes it seem like the online space looks similar to our offline world. For example, the term “information pollution,” coined by Claire Wardle, is increasingly being used to discuss disinformation online.  
  • It is even harder to prove direct connections between online platforms and offline harms. This is partly because platforms are not transparent.
  • Finally, this analogy reminds us that both problems are dispiritingly hard to solve. Two scholars, Whitney Phillips and Ryan Milner, have suggested that our online information problems are ecosystemic, similar to the climate crisis.
  • As Phillips argues, “we’re not going to solve the climate crisis if people just stop drinking water out of water bottles. But we need to start minimizing the amount of pollution that’s even put into the landscape. It’s a place to start; it’s not the place to end.”
  • There may not be a one-size-fits-all analogy for platforms, but “horizontalizing” can help us to understand which solutions worked in other industries, which were under-ambitious and which had unintended consequences. Comparing horizontally also reminds us that the problems of how to regulate the online world are not unique, and will prove as difficult to resolve as those of other large industries.  
  • The key to vertical thinking is to figure out how not to lock in incumbents or to tilt the playing field even more toward them. We often forget that small rivals do exist, and our regulation should think about how to include them. This means fostering a market that has room for ponies and stable horses as well as unicorns.
  • Vertical thinking has started to spread in Washington, DC. In mid January, the antitrust subcommittee in Congress held a hearing with four smaller tech firms. All of them asked for regulatory intervention. The CEO of phone accessory maker PopSockets called Amazon’s behaviour “bullying with a smile.” Amazon purportedly ignored the selling of counterfeited PopSocket products on its platform and punished PopSocket for wanting to end its relationship with Amazon. Both Republicans and Democrats seemed sympathetic to smaller firms’ travails. The question is how to adequately address vertical concerns.
  • Without Improved Governance, Big Firms Will Weaponize Regulation
  • One is the question of intellectual property.
  • Big companies can marshal an army of lawyers, which even medium-sized firms could never afford to do.
  • A second aspect to consider is sliding scales of regulation.
  • A third aspect is burden of proof. One option is to flip the present default and make big companies prove that they are not engaging in harmful behaviour
  • The EU head of antitrust, Margrethe Vestager, is considering whether to turn this on its head: in cases where the European Union suspects monopolistic behaviour, major digital platforms would have to prove that users benefit from their services.
  • Companies would have to prove gains, rather than Brussels having to prove damages. This change would relieve pressure on smaller companies to show harms. It would put obligations on companies such as Google, which Vestager sees as so dominant that she has called them “de facto regulators” in their markets. 
  • A final aspect to consider is possibly mandating larger firms to open up.
Carsten Ullrich

How to regulate Facebook and the online giants in one word: transparency - George Brock...

  • New responsibilities arise from these changes.
  • Greater transparency will disclose whether further regulation is required and make it better targeted, providing specific remedies for clearly identified ills.
  • If Facebook and others must account in detail to an electoral commission or data protection authority for micro-targeting or “dark” ads, are forbidden from deleting certain relevant data, and must submit to algorithm audits, they will be forced to foresee and to try to solve some of the problems which they have been addressing so slowly.
  • Transparency would have its own radical effect inside the tech giants
Carsten Ullrich

Article

  • self-assessment reports submitted by Facebook, Google, Microsoft, Mozilla and Twitter
  • observed that “[a]ll platform signatories deployed policies and systems to ensure transparency around political advertising, including a requirement that all political ads be clearly labelled as sponsored content and include a ‘paid for by’ disclaimer.”
  • While some of the platforms have gone to the extent of banning political ads, the transparency of issue-based advertising is still significantly neglected.
  • there are notable differences in scope
  • inauthentic behaviour, including the suppression of millions of fake accounts and the implementation of safeguards against malicious automated activities.
  • more granular information is needed to better assess malicious behaviour specifically targeting the EU and the progress achieved by the platforms to counter such behaviour.”
  • several tools have been developed to help consumers evaluate the reliability of information sources, and to open up access to platform data for researchers.
    • Carsten Ullrich
       
      one element of a technical standard: the degree to which consumers are provided with transparent content-assessment tools — transparency is still lagging!
  • platforms have not demonstrated much progress in developing and implementing trustworthiness indicators in collaboration with the news ecosystem”, and “some consumer empowerment tools are still not available in most EU Member States.”
Carsten Ullrich

The Next Wave of Platform Governance - Centre for International Governance Innovation

  • The shift from product- and service-based to platform-based business creates a new set of platform governance implications — especially when these businesses rely upon shared infrastructure from a small, powerful group of technology providers (Figure 1).
  • The industries in which AI is deployed, and the primary use cases it serves, will naturally determine the types and degrees of risk, from health and physical safety to discrimination and human-rights violations. Just as disinformation and hate speech are known risks of social media platforms, fatal accidents are a known risk of automobiles and heavy machinery, whether they are operated by people or by machines. Bias and discrimination are potential risks of any automated system, but they are amplified and pronounced in technologies that learn, whether autonomously or by training, from existing data.
  • Business Model-Specific Implications
  • The implications of cloud platforms such as Salesforce, Microsoft, Apple, Amazon and others differ again. A business built on a technology platform with a track record of well-developed data and model governance, audit capability, responsible product development practices and a culture and track record of transparency will likely reduce some risks related to biased data and model transparency, while encouraging (and even enforcing) adoption of those same practices and norms throughout its ecosystem.
  • policies that govern their internal practices for responsible technology development; guidance, tools and educational resources for their customers’ responsible use of their technologies; and policies (enforced in terms of service) that govern the acceptable use of not only their platforms but also specific technologies, such as face recognition or gait detection.
  • At the same time, overreliance on a small, well-funded, global group of technology vendors to set the agenda for responsible and ethical use of AI may create a novel set of risks.
  • Audit is another area that, while promising, is also fraught with potential conflict. Companies such as O’Neil Risk Consulting and Algorithmic Auditing, founded by the author of Weapons of Math Destruction, Cathy O’Neil, provide algorithmic audit and other services intended to help companies better understand and remediate data and model issues related to discriminatory outcomes. Unlike, for example, audits of financial statements, algorithmic audit services are as yet entirely voluntary, lack oversight by any type of governing board, and do not carry disclosure requirements or penalties. As a result, no matter how thorough the analysis or comprehensive the results, these types of services are vulnerable to manipulation or exploitation by their customers for “ethics-washing” purposes.
  • we must broaden our understanding of platforms beyond social media sites to other types of business platforms, examine those risks in context, and approach governance in a way that accounts not only for the technologies themselves, but also for the disparate impacts among industries and business models.
  • This is a time-sensitive issue
  • Large technology companies — for a range of reasons — are trying to fill the policy void, creating the potential for a kind of demilitarized zone for AI, one in which neither established laws nor corporate policy hold sway.
Carsten Ullrich

Internet law

  • "intelligence platform"