Duty of care + Standards _ CU: group items tagged "amazon"

Carsten Ullrich

A New Blueprint for Platform Governance | Centre for International Governance Innovation

  • We often talk about the “online environment.” This metaphorical language makes it seem like the online space looks similar to our offline world. For example, the term “information pollution,” coined by Claire Wardle, is increasingly being used to discuss disinformation online.  
  • It is even harder to prove direct connections between online platforms and offline harms. This is partly because platforms are not transparent.
  • Finally, this analogy reminds us that both problems are dispiritingly hard to solve. Two scholars, Whitney Phillips and Ryan Milner, have suggested that our online information problems are ecosystemic, similar to the climate crisis.
  • As Phillips argues, “we’re not going to solve the climate crisis if people just stop drinking water out of water bottles. But we need to start minimizing the amount of pollution that’s even put into the landscape. It’s a place to start; it’s not the place to end.”
  • There may not be a one-size-fits-all analogy for platforms, but “horizontalizing” can help us to understand which solutions worked in other industries, which were under-ambitious and which had unintended consequences. Comparing horizontally also reminds us that the problems of how to regulate the online world are not unique, and will prove as difficult to resolve as those of other large industries.  
  • The key to vertical thinking is to figure out how not to lock in incumbents or to tilt the playing field even more toward them. We often forget that small rivals do exist, and our regulation should think about how to include them. This means fostering a market that has room for ponies and stable horses as well as unicorns.
  • Vertical thinking has started to spread in Washington, DC. In mid-January, the antitrust subcommittee in Congress held a hearing with four smaller tech firms. All of them asked for regulatory intervention. The CEO of phone accessory maker PopSockets called Amazon’s behaviour “bullying with a smile.” Amazon purportedly ignored the sale of counterfeit PopSockets products on its platform and punished PopSockets for wanting to end its relationship with Amazon. Both Republicans and Democrats seemed sympathetic to smaller firms’ travails. The question is how to adequately address vertical concerns.
  • Without Improved Governance, Big Firms Will Weaponize Regulation
  • One is the question of intellectual property.
  • Big companies can marshal an army of lawyers, which even medium-sized firms could never afford to do.
  • A second aspect to consider is sliding scales of regulation.
  • A third aspect is burden of proof. One option is to flip the present default and make big companies prove that they are not engaging in harmful behaviour
  • The EU head of antitrust, Margrethe Vestager, is considering whether to turn this on its head: in cases where the European Union suspects monopolistic behaviour, major digital platforms would have to prove that users benefit from their services.
  • Companies would have to prove gains, rather than Brussels having to prove damages. This change would relieve pressure on smaller companies to show harms. It would put obligations on companies such as Google, which Vestager sees as so dominant that she has called them “de facto regulators” in their markets. 
  • A final aspect to consider is possibly mandating larger firms to open up.
Carsten Ullrich

Is the Era of "Permissionless Innovation" and Avoidance of Regulation on the Internet F...

  • avoidance of regulation that the Silicon Valley platforms
  • It hasn’t been a great couple of weeks for the “Don’t Be Evil” company.
  • The Supreme Court had upheld a lower court ruling requiring Google to delist from its global search results references to a rogue Canadian company that is the subject of an injunction in British Columbia (B.C.)
  • intellectual property infringement.
  • The Google/Equustek case is not one of permissionless innovation, but is still an example of a large internet intermediary taking the position that it can do as it damned well pleases because, after all, it operates in multiple jurisdictions—in fact it operates in cyberspace, where, according to some, normal regulatory practices and laws shouldn’t apply or we will “stifle innovation”.
  • One innovation that Google has instituted is to tweak its geolocation system
  • The excuse of “it’s not my fault; blame the algorithm”, also won’t fly anymore. Google’s algorithms are the “secret sauce” that differentiates it from its competitors, and the dominance of Google is proof of the effectiveness of its search formulae.
    • Carsten Ullrich: courts have become streetwise on the "algorithm"
  • But scooping up every bit of information and interpreting what people want (or what Google thinks they want) through an algorithm has its downsides. A German court has found that Google cannot hide behind its algorithms when it comes to producing perverse search results
  • AI is great, until it isn’t, and there is no doubt that regulators will start to look at legal issues surrounding AI.
  • Companies like Google and Facebook will not be able to duck their responsibility just because results that are potentially illegal are produced by algorithms or AI
  • One area where human judgement is very much involved is in the placing of ads, although YouTube and others are quick to blame automated programs when legitimate ads appear alongside questionable or illegal content. Platforms have no obligation to accept ads as long as they don’t engage in anti-competitive trade practices.
  • Google has already learned its lesson on pharmaceutical products the hard way, having been fined $500 million in 2011 for running ads on its AdWords service from unlicensed Canadian online pharmacies that were illegally (according to US law) selling prescription drugs to US consumers.
  • Google is a deep-pocketed corporation, but it seems to have got the message when it comes to pharmaceuticals. What galls me is that if Google can remove AdWords placements promoting illegal drug products, why, when I google “watch pirated movies”, do I get an AdWords listing on page 1 of search that says “Watch HD Free Full Movies Online”.
  • At the end of the day whether it is Google, Facebook, Amazon, or any other major internet intermediary, the old wheeze that respect for privacy, respect for copyright and just plain old respect for the law in general gets in the way of innovation is being increasingly shown to be a threadbare argument.
  • What is interesting is that many cyber-libertarians who oppose any attempt to impose copyright obligations and publishing liability on internet platforms are suddenly starting to get nervous about misuse of data by these same platforms when it comes to privacy.
  • This is a remarkable revelation for someone who has not only advocated that Canada adopt in NAFTA the overly broad US safe harbour provision found in the Communications Decency Act, a provision that has been widely abused in the US by internet intermediaries as a way of ducking any responsibility for the content they make available, but who has also consistently crusaded against any strengthening of copyright laws that might impose greater obligations on internet platforms.
  • proponents of reasonable internet regulation
Carsten Ullrich

The Next Wave of Platform Governance - Centre for International Governance Innovation

  • The shift from product- and service-based to platform-based business creates a new set of platform governance implications — especially when these businesses rely upon shared infrastructure from a small, powerful group of technology providers (Figure 1).
  • The industries in which AI is deployed, and the primary use cases it serves, will naturally determine the types and degrees of risk, from health and physical safety to discrimination and human-rights violations. Just as disinformation and hate speech are known risks of social media platforms, fatal accidents are a known risk of automobiles and heavy machinery, whether they are operated by people or by machines. Bias and discrimination are potential risks of any automated system, but they are amplified and pronounced in technologies that learn, whether autonomously or by training, from existing data.
  • Business Model-Specific Implications
  • The implications of cloud platforms such as Salesforce, Microsoft, Apple, Amazon and others differ again. A business built on a technology platform with a track record of well-developed data and model governance, audit capability, responsible product development practices and a culture and track record of transparency will likely reduce some risks related to biased data and model transparency, while encouraging (and even enforcing) adoption of those same practices and norms throughout its ecosystem.
  • policies that govern their internal practices for responsible technology development; guidance, tools and educational resources for their customers’ responsible use of their technologies; and policies (enforced in terms of service) that govern the acceptable use of not only their platforms but also specific technologies, such as face recognition or gait detection.
  • At the same time, overreliance on a small, well-funded, global group of technology vendors to set the agenda for responsible and ethical use of AI may create a novel set of risks.
  • Audit is another area that, while promising, is also fraught with potential conflict. Companies such as O’Neil Risk Consulting and Algorithmic Auditing, founded by the author of Weapons of Math Destruction, Cathy O’Neil, provide algorithmic audit and other services intended to help companies better understand and remediate data and model issues related to discriminatory outcomes. Unlike, for example, audits of financial statements, algorithmic audit services are as yet entirely voluntary, lack oversight by any type of governing board, and do not carry disclosure requirements or penalties. As a result, no matter how thorough the analysis or comprehensive the results, these types of services are vulnerable to manipulation or exploitation by their customers for “ethics-washing” purposes.
  • We must broaden our understanding of platforms beyond social media sites to other types of business platforms, examine those risks in context, and approach governance in a way that accounts not only for the technologies themselves, but also for the disparate impacts among industries and business models.
  • This is a time-sensitive issue
  • Large technology companies — for a range of reasons — are trying to fill the policy void, creating the potential for a kind of demilitarized zone for AI, one in which neither established laws nor corporate policy hold sway.