
GAVNet Collaborative Curation: Group items tagged "answer"


Steve Bosserman

How teaching AI to be curious helps machines learn for themselves - The Verge - 0 views

  • The problem with Montezuma’s Revenge is that it doesn’t provide regular rewards for the AI agent. It’s a puzzle-platformer where players have to explore an underground pyramid, dodging traps and enemies while collecting keys that unlock doors and special items. If you were training an AI agent to beat the game, you could reward it for staying alive and collecting keys, but how do you teach it to save certain keys for certain items, and use those items to overcome traps and complete the level? The answer: curiosity.
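The "curiosity" the article refers to is commonly implemented as an intrinsic reward layered on top of the game's sparse score: the agent gets paid for reaching situations its own model predicts poorly, so exploration continues even when the game gives nothing back. The sketch below illustrates that general prediction-error formulation with a toy forward model; the dimensions, the tanh network, and the beta weighting are illustrative assumptions, not the specific method the article describes.

# Illustrative sketch of curiosity as an intrinsic reward: the agent receives a
# bonus proportional to how badly a small learned forward model predicts the
# next state, so unfamiliar situations are rewarded even with no game score.
# All names, shapes, and constants here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, ACTION_DIM, HIDDEN = 8, 4, 32

# Tiny forward model: predicts next-state features from (state, action).
W1 = rng.normal(0, 0.1, (STATE_DIM + ACTION_DIM, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, STATE_DIM))

def predict_next_state(state, action_onehot):
    x = np.concatenate([state, action_onehot])
    return np.tanh(x @ W1) @ W2

def curiosity_bonus(state, action_onehot, next_state, scale=1.0):
    """Intrinsic reward: scaled squared prediction error of the forward model."""
    error = next_state - predict_next_state(state, action_onehot)
    return scale * float(np.mean(error ** 2))

def total_reward(extrinsic, state, action_onehot, next_state, beta=0.1):
    """Reward the agent trains on: sparse game reward plus curiosity bonus."""
    return extrinsic + beta * curiosity_bonus(state, action_onehot, next_state)

# Example transition with zero game reward: curiosity alone drives exploration.
s, s_next = rng.normal(size=STATE_DIM), rng.normal(size=STATE_DIM)
a = np.eye(ACTION_DIM)[1]
print(total_reward(extrinsic=0.0, state=s, action_onehot=a, next_state=s_next))

In practice the forward model is trained alongside the agent, so the bonus shrinks for states the agent has already mastered and exploration naturally moves on.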
Steve Bosserman

The Boundary Between Our Bodies and Our Tech - Pacific Standard - 0 views

  • At the beginning of his recent book, The Internet of Us, Lynch uses a thought experiment to illustrate how thin this boundary is. Imagine a device that could implant the functions of a smartphone directly into your brain so that your thoughts could control these functions. It would be a remarkable extension of the brain's abilities, but also, in a sense, it wouldn't be all that different from our current lives, in which the varied and almost limitless connective powers of the smartphone are with us nearly 100 percent of the time, even if they aren't—yet—a physiological part of us.
  • The debate over what it means for us to be so connected all the time is still in its infancy, and there are wildly differing perspectives on what it could mean for us as a species. One result of these collapsing borders, however, is less ambiguous, and it's becoming a common subject of activism and advocacy among the technologically minded. While many of us think of the smartphone as a portal for accessing the outside world, the reciprocity of the device, as well as the larger pattern of our behavior online, means the portal goes the other way as well: It's a means for others to access us.
  • "This is where the fundamental democracy deficit comes from: You have this incredibly concentrated private power with zero transparency or democratic oversight or accountability, and then they have this unprecedented wealth of data about their users to work with," Weigel says. "We've allowed these private companies to take over a lot of functions that we have historically thought of as public functions or social goods, like letting Google be the world's library. Democracy and the very concept of social goods—that tradition is so eroded in the United States that people were ready to let these private companies assume control."
  • Lynch, the University of Connecticut philosophy professor, also believes that one of our best hopes comes from the bottom up, in the form of actually educating people about the products that they spend so much time using. We should know and be aware of how these companies work, how they track our behavior, and how they make recommendations to us based on our behavior and that of others. Essentially, we need to understand the fundamental difference between our behavior IRL and in the digital sphere—a difference that, despite the erosion of boundaries, still stands. "Whether we know it or not, the connections that we make on the Internet are being used to cultivate an identity for us—an identity that is then sold to us afterward," Lynch says. "Google tells you what questions to ask, and then it gives you the answers to those questions."
Steve Bosserman

Opinion | It's Westworld. What's Wrong With Cruelty to Robots? - 1 views

  • The biggest concern is that we might one day create conscious machines: sentient beings with beliefs, desires and, most morally pressing, the capacity to suffer. Nothing seems to be stopping us from doing this. Philosophers and scientists remain uncertain about how consciousness emerges from the material world, but few doubt that it does. This suggests that the creation of conscious machines is possible.
  • If we did create conscious beings, conventional morality tells us that it would be wrong to harm them — precisely to the degree that they are conscious, and can suffer or be deprived of happiness. Just as it would be wrong to breed animals for the sake of torturing them, or to have children only to enslave them, it would be wrong to mistreat the conscious machines of the future.
  • Anything that looks and acts like the hosts on “Westworld” will appear conscious to us, whether or not we understand how consciousness emerges in physical systems. Indeed, experiments with AI and robotics have already shown how quick we are to attribute feelings to machines that look and behave like independent agents.
  • This is where actually watching “Westworld” matters. The pleasure of entertainment aside, the makers of the series have produced a powerful work of philosophy. It’s one thing to sit in a seminar and argue about what it would mean, morally, if robots were conscious. It’s quite another to witness the torments of such creatures, as portrayed by actors such as Evan Rachel Wood and Thandie Newton. You may still raise the question intellectually, but in your heart and your gut, you already know the answer.
  • But the prospect of building a place like “Westworld” is much more troubling, because the experience of harming a host isn’t merely similar to that of harming a person; it’s identical. We have no idea what repeatedly indulging such fantasies would do to us, ethically or psychologically — but there seems little reason to think that it would be good.
  • For the first time in our history, then, we run the risk of building machines that only monsters would use as they please.
Bill Fulkerson

Our Machines Now Have Knowledge We'll Never Understand - 1 views

  • "Since we first started carving notches in sticks, we have used things in the world to help us to know that world. But never before have we relied on things that did not mirror human patterns of reasoning - we knew what each notch represented - and that we could not later check to see how our non-sentient partners in knowing came up with those answers. If knowing has always entailed being able to explain and justify our true beliefs - Plato's notion, which has persisted for over two thousand years - what are we to make of a new type of knowledge, in which that task of justification is not just difficult or daunting but impossible?"