""The ethics commission has done pioneering work and has developed the world's first guidelines for automated driving. We are now implementing these guidelines."
The ethics rules address a classic thought experiment: the "trolley problem.""
"Nemitz identifies four bases of digital power which create and then reinforce its unhealthy concentration in too few hands: lots of money, which means influence; control of "infrastructures of public discourse"; collection of personal data and profiling of people; and domination of investment in AI, most of it a "black box" not open to public scrutiny.
The key question is which of the challenges of AI "can be safely and with good conscience left to ethics" and which need law. Nemitz sees much that needs law."
"In many ways, we've barely even scratched the surface of AI and what it can do. In recent times, we have seen both positive and negative implications. However, like many great innovations, it's important to remember that just because we can, doesn't mean that we ought to do it. Here are some examples of how AI can bring out ethical dilemmas."
"Can AI and analytics be used in a way that improves operational efficiency without jeopardizing our ethical principles? The answer is "yes" - if moral objectives and constraints, now often treated as an afterthought, are considered from the outset when designing models. We will discuss a recent attempt to combine ethics, analytics, and operational efficiency in the world of organ allocation and examine the lessons it holds for other areas of health care and beyond."
"The founders of Predictim want to be clear with me: Their product-an algorithm that scans the online footprint of a prospective babysitter to determine their "risk" levels for parents-is not racist. It is not biased.
"We take ethics and bias extremely seriously," Sal Parsa, Predictim's CEO, tells me warily over the phone. "In fact, in the last 18 months we trained our product, our machine, our algorithm to make sure it was ethical and not biased. We took sensitive attributes, protected classes, sex, gender, race, away from our training set. We continuously audit our model. And on top of that we added a human review process.""
"The possibility of digitally interacting with someone from beyond the grave is no longer the stuff of science fiction. The technology to create convincing digital surrogates of the dead is here, and it's rapidly evolving, with researchers predicting its mainstream viability within a decade. But what about the ethics of bereavement-and the privacy of the deceased? Speaking with a loved one evokes a powerful emotional response. The ability to do so in the wake of their death will inevitably affect the human process of grieving in ways we're only beginning to explore."
""No-limit Texas Hold'em is a game of incomplete information where the AI must infer a human player's intentions and then act in ways that incorporate both the direct odds of winning and bluffing behaviour to try to fool the other player." The designers said their computer didn't "bluff" the human players. But by learning from its mistakes and practising its moves at night between games, the AI was working out how to defeat its human opponents."
"O'Neill recounts an exercise to improve service to homeless families in New York City, in which data-analysis was used to identify risk-factors for long-term homelessness. The problem, O'Neill describes, was that many of the factors in the existing data on homelessness were entangled with things like race (and its proxies, like ZIP codes, which map extensively to race in heavily segregated cities like New York). Using data that reflects racism in the system to train a machine-learning algorithm whose conclusions can't be readily understood runs the risk of embedding that racism in a new set of policies, these ones scrubbed clean of the appearance of bias with the application of objective-seeming mathematics. "
"The president of Baidu, Ya-Qin Zhang, said in a statement: "As AI technology keeps advancing and the application of AI expands, we recognise the importance of joining the global discussion around the future of AI. Ensuring AI's safety, fairness and transparency should not be an afterthought but rather highly considered at the onset of every project or system we build.""
"Amid mounting financial pressure, at least a dozen police forces are using or considering predictive analytics, despite warnings from campaigners that use of algorithms and "predictive policing" models risks locking discrimination into the criminal justice system."
"When Fry returned to London, she realised how mathematicians, computer engineers and physicists are so used to working on abstract problems that they rarely stop to consider the ethics of how their work might be used"
"But maybe I'm wrong. Because, if we believe tech gurus at least, the Trolley Problem is about to become of huge real-world importance. Human beings might not find themselves in all that many Trolley Problem-style scenarios over the course of their lives, but soon we're going to start seeing self-driving cars on our streets, and they're going to have to make these judgments all the time."
"My personal coding projects have presented similarly thorny ethical questions. Should I write a computer program that will download the communications of thousands of teenagers suffering from eating disorders posted on an anorexia advice website? Write a program to post anonymous, suicidal messages on hundreds of college forums to see which colleges offer the most support? My answer to these questions, incidentally, was "no". But I considered it. And the glory and peril of computers is that they magnify the impact of your whims: an impulse becomes a program that can hurt thousands of people."
"Marx had a point. Especially when it comes to ethics, philosophy is often better at finding complications and problems than proposing changes. Silicon Valley has been better at changing the world (even if through breaking things) than taking pause to think through the consequences."
"A flagship artificial intelligence system designed to predict gun and knife violence before it happens had serious flaws that made it unusable, police have admitted. The error led to large drops in accuracy and the system was ultimately rejected by all of the experts reviewing it for ethical problems."
"Looking back on my experience of videoconferencing, I still get an odd emotional pain. The feeling is a kind of shame. Not so much for my own wooden performance and the failure of the technology. But rather a feeling that we have all lost a bit of our humanity through it. My interest in these technologies is ethically motivated. I am not at all happy with the banal dehumanisation that results from bad videoconferencing experiences. If, for example, students and teachers can't express their humanity in education, through its technologies, then we're just not doing it right."
""Often the problem is that the topic itself is unethical," said Gemma Galdon Clavell, an independent tech ethicist who has evaluated many Horizon 2020 security research projects and worked as a partner on more than a dozen. "Some topics encourage partners to develop biometric tech that can work from afar, and so consent is not possible - this is what concerns me." One project aiming to develop such technology refers to it as "unobtrusive person identification" that can be used on people as they cross borders. ¨If we're talking about developing technology that people don't know is being used," said Galdon Clavell, "how can you make that ethical?"
"
"The data wars to come?
Worryingly, there was one question where the AI simply couldn't come up with a counterargument. When arguing for the motion that "Data will become the most fought-over resource of the 21st century", the Megatron said:
The ability to provide information, rather than the ability to provide goods and services, will be the defining feature of the economy of the 21st century.
But when we asked it to oppose the motion - in other words, to argue that data wasn't going to be the most vital of resources, worth fighting a war over - it simply couldn't, or wouldn't, make the case. In fact, it undermined its own position:
We will be able to see everything about a person, everywhere they go, and it will be stored and used in ways that we cannot even imagine."