
GAVNet Collaborative Curation / Group items tagged "machine ethics"


Bill Fulkerson

Anatomy of an AI System - 1 views

shared by Bill Fulkerson on 14 Sep 18
  • "With each interaction, Alexa is training to hear better, to interpret more precisely, to trigger actions that map to the user's commands more accurately, and to build a more complete model of their preferences, habits and desires. What is required to make this possible? Put simply: each small moment of convenience - be it answering a question, turning on a light, or playing a song - requires a vast planetary network, fueled by the extraction of non-renewable materials, labor, and data. The scale of resources required is many magnitudes greater than the energy and labor it would take a human to operate a household appliance or flick a switch. A full accounting for these costs is almost impossible, but it is increasingly important that we grasp the scale and scope if we are to understand and govern the technical infrastructures that thread through our lives.

    III. The Salar, the world's largest flat surface, is located in southwest Bolivia at an altitude of 3,656 meters above sea level. It is a high plateau, covered by a few meters of salt crust which are exceptionally rich in lithium, containing 50% to 70% of the world's lithium reserves.[4] The Salar, alongside the neighboring Atacama regions in Chile and Argentina, are major sites for lithium extraction. This soft, silvery metal is currently used to power mobile connected devices, as a crucial material for the production of lithium-ion batteries. It is known as 'grey gold.' Smartphone batteries, for example, usually contain less than eight grams of this material.[5] Each Tesla car needs approximately seven kilograms of lithium for its battery pack.[6] All these batteries have a limited lifespan, and once consumed they are thrown away as waste. Amazon reminds users that they cannot open up and repair their Echo, because this will void the warranty. The Amazon Echo is wall-powered, and also has a mobile battery base. This also has a limited lifespan and then must be thrown away as waste. According to the Ay
Steve Bosserman

Opinion | It's Westworld. What's Wrong With Cruelty to Robots? - 1 views

  • The biggest concern is that we might one day create conscious machines: sentient beings with beliefs, desires and, most morally pressing, the capacity to suffer. Nothing seems to be stopping us from doing this. Philosophers and scientists remain uncertain about how consciousness emerges from the material world, but few doubt that it does. This suggests that the creation of conscious machines is possible.
  • If we did create conscious beings, conventional morality tells us that it would be wrong to harm them — precisely to the degree that they are conscious, and can suffer or be deprived of happiness. Just as it would be wrong to breed animals for the sake of torturing them, or to have children only to enslave them, it would be wrong to mistreat the conscious machines of the future.
  • Anything that looks and acts like the hosts on “Westworld” will appear conscious to us, whether or not we understand how consciousness emerges in physical systems. Indeed, experiments with AI and robotics have already shown how quick we are to attribute feelings to machines that look and behave like independent agents.
  • This is where actually watching “Westworld” matters. The pleasure of entertainment aside, the makers of the series have produced a powerful work of philosophy. It’s one thing to sit in a seminar and argue about what it would mean, morally, if robots were conscious. It’s quite another to witness the torments of such creatures, as portrayed by actors such as Evan Rachel Wood and Thandie Newton. You may still raise the question intellectually, but in your heart and your gut, you already know the answer.
  • But the prospect of building a place like “Westworld” is much more troubling, because the experience of harming a host isn’t merely similar to that of harming a person; it’s identical. We have no idea what repeatedly indulging such fantasies would do to us, ethically or psychologically — but there seems little reason to think that it would be good.
  • For the first time in our history, then, we run the risk of building machines that only monsters would use as they please.
Steve Bosserman

Teaching an Algorithm to Understand Right and Wrong - 0 views

  • The rise of artificial intelligence is forcing us to take abstract ethical dilemmas much more seriously because we need to code in moral principles concretely. Should a self-driving car risk killing its passenger to save a pedestrian? To what extent should a drone take into account the risk of collateral damage when killing a terrorist? Should robots make life-or-death decisions about humans at all? We will have to make concrete decisions about what we will leave up to humans and what we will encode into software.
Steve Bosserman

Creating robots capable of moral reasoning is like parenting | Aeon Essays - 0 views

  • Intelligent machines will be our intellectual children, our progeny. They will start off inheriting many of our moral norms, because we will not allow anything else. But they will come to reflect on their nature, including their relationships with us and with each other. If we are wise and benevolent, we will have prepared the way for them to make their own choices – just as we do with our adolescent children. What does this mean in practice? It means being ready to accept that machines might eventually make moral decisions that none of us find acceptable. The only condition is that they must be able to give intelligible reasons for what they’re doing. An intelligible reason is one you can at least see why someone might find morally motivating, even if you don’t necessarily agree.
1 - 9 of 9