Opinion | Artificial Intelligence Requires Specific Safety Rules - The New York Times
- For about five years, OpenAI used a system of nondisclosure agreements to stifle public criticism from outgoing employees. Current and former OpenAI staffers were paranoid about talking to the press. In May, one departing employee refused to sign and went public in The Times. The company apologized and scrapped the agreements. Then the floodgates opened: exiting employees began criticizing OpenAI's safety practices, and a wave of articles emerged about its broken promises.
- These stories came from people willing to risk their careers to inform the public. How many more are silenced because they're too scared to speak out? Existing whistle-blower protections are inadequate here because they typically cover only the reporting of illegal conduct, and artificial intelligence can be dangerous without being illegal.
- A.I. needs stronger protections, like those in place in parts of the public sector, finance and publicly traded companies, that prohibit retaliation and establish anonymous reporting channels.