Algorithm Transparency: How to Eat the Cake and Have It Too - European Law Blog

While AI tools still exist in a relative legal vacuum, this blog post explores: 1) the extent of protection granted to algorithms as trade secrets, subject to exceptions of overriding public interest; 2) how the new generation of regulations at the EU and national levels attempts to provide algorithmic transparency while preserving trade secrecy; and 3) why the latter development is not a futile endeavour.

- The most complex algorithms dominating our lives (including those developed by Google and Facebook) are proprietary, i.e. shielded as trade secrets, while only a negligible minority of algorithms are open source.
- Article 2 of the EU Trade Secrets Directive
Article

- Self-assessment reports submitted by Facebook, Google, Microsoft, Mozilla and Twitter
- Observed that “[a]ll platform signatories deployed policies and systems to ensure transparency around political advertising, including a requirement that all political ads be clearly labelled as sponsored content and include a ‘paid for by’ disclaimer.”
- While some of the platforms have gone so far as to ban political ads, the transparency of issue-based advertising is still significantly neglected.
The Next Wave of Platform Governance - Centre for International Governance Innovation

- The shift from product- and service-based to platform-based business creates a new set of platform governance implications — especially when these businesses rely upon shared infrastructure from a small, powerful group of technology providers (Figure 1).
- The industries in which AI is deployed, and the primary use cases it serves, will naturally determine the types and degrees of risk, from health and physical safety to discrimination and human-rights violations. Just as disinformation and hate speech are known risks of social media platforms, fatal accidents are a known risk of automobiles and heavy machinery, whether they are operated by people or by machines. Bias and discrimination are potential risks of any automated system, but they are amplified and pronounced in technologies that learn, whether autonomously or by training, from existing data.
- Business Model-Specific Implications