
TOK Friends: Group items tagged robot


Javier E

Why these friendly robots can't be good friends to our kids - The Washington Post - 0 views

  • before adding a sociable robot to the holiday gift list, parents may want to pause to consider what they would be inviting into their homes. These machines are seductive and offer the wrong payoff: the illusion of companionship without the demands of friendship, the illusion of connection without the reciprocity of a mutual relationship. And interacting with these empathy machines may get in the way of children’s ability to develop a capacity for empathy themselves.
  • In our study, the children were so invested in their relationships with Kismet and Cog that they insisted on understanding the robots as living beings, even when the roboticists explained how the machines worked or when the robots were temporarily broken.
  • The children took the robots’ behavior to signify feelings. When the robots interacted with them, the children interpreted this as evidence that the robots liked them. And when the robots didn’t work on cue, the children likewise took it personally. Their relationships with the robots affected their state of mind and self-esteem.
  • ...14 more annotations...
  • We were led to wonder whether a broken robot can break a child.
  • Kids are central to the sociable-robot project, because its agenda is to make people more comfortable with robots in roles normally reserved for humans, and robotics companies know that children are vulnerable consumers who can bring the whole family along.
  • In October, Mattel scrapped plans for Aristotle — a kind of Alexa for the nursery, designed to accompany children as they progress from lullabies and bedtime stories through high school homework — after lawmakers and child advocacy groups argued that the data the device collected about children could be misused by Mattel, marketers, hackers and other third parties. I was part of that campaign: There is something deeply unsettling about encouraging children to confide in machines that are in turn sharing their conversations with countless others.
  • Recently, I opened my MIT mail and found a “call for subjects” for a study involving sociable robots that will engage children in conversation to “elicit empathy.” What will these children be empathizing with, exactly? Empathy is a capacity that allows us to put ourselves in the place of others, to know what they are feeling. Robots, however, have no emotions to share
  • What they can do is push our buttons. When they make eye contact and gesture toward us, they predispose us to view them as thinking and caring. They are designed to be cute, to provoke a nurturing response. And when it comes to sociable AI, nurturance is the killer app: We nurture what we love, and we love what we nurture. If a computational object or robot asks for our help, asks us to teach it or tend to it, we attach. That is our human vulnerability.
  • digital companions don’t understand our emotional lives. They present themselves as empathy machines, but they are missing the essential equipment: They have not known the arc of a life. They have not been born; they don’t know pain, or mortality, or fear. Simulated thinking may be thinking, but simulated feeling is never feeling, and simulated love is never love.
  • Breazeal’s position is this: People have relationships with many classes of things. They have relationships with children and with adults, with animals and with machines. People, even very little people, are good at this. Now, we are going to add robots to the list of things with which we can have relationships. More powerful than with pets. Less powerful than with people. We’ll figure it out.
  • The nature of the attachments to dolls and sociable machines is different. When children play with dolls, they project thoughts and emotions onto them. A girl who has broken her mother’s crystal will put her Barbies into detention and use them to work on her feelings of guilt. The dolls take the role she needs them to take.
  • Sociable machines, by contrast, have their own agenda. Playing with robots is not about the psychology of projection but the psychology of engagement. Children try to meet the robot’s needs, to understand the robot’s unique nature and wants. There is an attempt to build a mutual relationship.
  • Some people might consider that a good thing: encouraging children to think beyond their own needs and goals. Except the whole commercial program is an exercise in emotional deception.
  • when we offer these robots as pretend friends to our children, it’s not so clear they can wink with us. We embark on an experiment in which our children are the human subjects.
  • it is hard to imagine what those “right types” of ties might be. These robots can’t be in a two-way relationship with a child. They are machines whose art is to put children in a position of pretend empathy. And if we put our children in that position, we shouldn’t expect them to understand what empathy is. If we give them pretend relationships, we shouldn’t expect them to learn how real relationships — messy relationships — work. On the contrary. They will learn something superficial and inauthentic, but mistake it for real connection.
  • In the process, we can forget what is most central to our humanity: truly understanding each other.
  • For so long, we dreamed of artificial intelligence offering us not only instrumental help but the simple salvations of conversation and care. But now that our fantasy is becoming reality, it is time to confront the emotional downside of living with the robots of our dreams.
Javier E

Welcome, Robot Overlords. Please Don't Fire Us? | Mother Jones - 0 views

  • There will be no place to go but the unemployment line.
  • at this point our tale takes a darker turn. What do we do over the next few decades as robots become steadily more capable and steadily begin taking away all our jobs?
  • ...34 more annotations...
  • The economics community just hasn't spent much time over the past couple of decades focusing on the effect that machine intelligence is likely to have on the labor market.
  • The Digital Revolution is different because computers can perform cognitive tasks too, and that means machines will eventually be able to run themselves. When that happens, they won't just put individuals out of work temporarily. Entire classes of workers will be out of work permanently. In other words, the Luddites weren't wrong. They were just 200 years too early.
  • Slowly but steadily, labor's share of total national income has gone down, while the share going to capital owners has gone up. The most obvious effect of this is the skyrocketing wealth of the top 1 percent, due mostly to huge increases in capital gains and investment income.
  • Robotic pets are growing so popular that Sherry Turkle, an MIT professor who studies the way we interact with technology, is uneasy about it: "The idea of some kind of artificial companionship," she says, "is already becoming the new normal."
  • robots will take over more and more jobs. And guess who will own all these robots? People with money, of course. As this happens, capital will become ever more powerful and labor will become ever more worthless. Those without money—most of us—will live on whatever crumbs the owners of capital allow us.
  • Economist Paul Krugman recently remarked that our long-standing belief in skills and education as the keys to financial success may well be outdated. In a blog post titled "Rise of the Robots," he reviewed some recent economic data and predicted that we're entering an era where the prime cause of income inequality will be something else entirely: capital vs. labor.
  • while it's easy to believe that some jobs can never be done by machines—do the elderly really want to be tended by robots?—that may not be true.
  • Third, as more people compete for fewer jobs, we'd expect to see middle-class incomes flatten in a race to the bottom.
  • The question we want to answer is simple: If CBTC (capital-biased technological change) is already happening—not a lot, but just a little bit—what trends would we expect to see? What are the signs of a computer-driven economy?
  • if automation were displacing labor, we'd expect to see a steady decline in the share of the population that's employed.
  • Second, we'd expect to see fewer job openings than in the past.
  • In the economics literature, the increase in the share of income going to capital owners is known as capital-biased technological change
  • Fourth, with consumption stagnant, we'd expect to see corporations stockpile more cash and, fearing weaker sales, invest less in new products and new factories
  • Fifth, as a result of all this, we'd expect to see labor's share of national income decline and capital's share rise.
  • We're already seeing them, and not just because of the crash of 2008. They started showing up in the statistics more than a decade ago. For a while, though, they were masked by the dot-com and housing bubbles, so when the financial crisis hit, years' worth of decline was compressed into 24 months. The trend lines dropped off the cliff.
  • Corporate executives should worry too. For a while, everything will seem great for them: Falling labor costs will produce heftier profits and bigger bonuses. But then it will all come crashing down. After all, robots might be able to produce goods and services, but they can't consume them
  • in another sense, we should be very alarmed. It's one thing to suggest that robots are going to cause mass unemployment starting in 2030 or so. We'd have some time to come to grips with that. But the evidence suggests that—slowly, haltingly—it's happening already, and we're simply not prepared for it.
  • the first jobs to go will be middle-skill jobs. Despite impressive advances, robots still don't have the dexterity to perform many common kinds of manual labor that are simple for humans—digging ditches, changing bedpans. Nor are they any good at jobs that require a lot of cognitive skill—teaching classes, writing magazine articles
  • in the middle you have jobs that are both fairly routine and require no manual dexterity. So that may be where the hollowing out starts: with desk jobs in places like accounting or customer support.
  • In fact, there's even a digital sports writer. It's true that a human being wrote this story—ask my mother if you're not sure—but in a decade or two I might be out of a job too
  • Doctors should probably be worried as well. Remember Watson, the Jeopardy!-playing computer? It's now being fed millions of pages of medical information so that it can help physicians do a better job of diagnosing diseases. In another decade, there's a good chance that Watson will be able to do this without any human help at all.
  • Take driverless cars.
  • The next step might be passenger vehicles on fixed routes, like airport shuttles. Then long-haul trucks. Then buses and taxis. There are 2.5 million workers who drive trucks, buses, and taxis for a living, and there's a good chance that, one by one, all of them will be displaced.
  • There will be no place to go but the unemployment line.
  • we'll need to let go of some familiar convictions. Left-leaning observers may continue to think that stagnating incomes can be improved with better education and equality of opportunity. Conservatives will continue to insist that people without jobs are lazy bums who shouldn't be coddled. They'll both be wrong.
  • The modern economy is complex, and most of these trends have multiple causes.
  • we'll probably have only a few options open to us. The simplest, because it's relatively familiar, is to tax capital at high rates and use the money to support displaced workers. In other words, as The Economist's Ryan Avent puts it, "redistribution, and a lot of it."
  • would we be happy in a society that offers real work to a dwindling few and bread and circuses for the rest?
  • Most likely, owners of capital would strongly resist higher taxes, as they always have, while workers would be unhappy with their enforced idleness. Still, the ancient Romans managed to get used to it—with slave labor playing the role of robots—and we might have to, as well.
  •  economist Noah Smith suggests that we might have to fundamentally change the way we think about how we share economic growth. Right now, he points out, everyone is born with an endowment of labor by virtue of having a body and a brain that can be traded for income. But what to do when that endowment is worth a fraction of what it is today? Smith's suggestion: "Why not also an endowment of capital? What if, when each citizen turns 18, the government bought him or her a diversified portfolio of equity?"
  • In simple terms, if owners of capital are capturing an increasing fraction of national income, then that capital needs to be shared more widely if we want to maintain a middle-class society.
  • it's time to start thinking about our automated future in earnest. The history of mass economic displacement isn't encouraging—fascists in the '20s, Nazis in the '30s—and recent high levels of unemployment in Greece and Italy have already produced rioting in the streets and larger followings for right-wing populist parties. And that's after only a few years of misery.
  • When the robot revolution finally starts to happen, it's going to happen fast, and it's going to turn our world upside down. It's easy to joke about our future robot overlords—R2-D2 or the Terminator?—but the challenge that machine intelligence presents really isn't science fiction anymore. Like Lake Michigan with an inch of water in it, it's happening around us right now even if it's hard to see
  • A robotic paradise of leisure and contemplation eventually awaits us, but we have a long and dimly lit tunnel to navigate before we get there.
grayton downing

Send in the Bots | The Scientist Magazine® - 0 views

  • Like any hypothesis, his idea needed to be tested. But measuring brain activity in a moving ant—the most direct way to determine cognitive processing during animal decision making—was not possible. So Garnier didn’t study ants; he studied robots.
  • The robots then navigated the environment by sensing light intensity through two sensors on their “heads.”
  • Several groups have used autonomous robots that sense and react to their environments to “debunk the idea that you need higher cognitive processing to do what look like cognitive things.”
  • ...10 more annotations...
  • a growing number of scientists are using autonomous robots to interrogate animal behavior and cognition. Researchers have designed robots to behave like ants, cockroaches, rodents, chickens, and more, then deployed their bots in the lab or in the environment to see how similarly they behave to their flesh-and-blood counterparts.
  • robots give behavioral biologists the freedom to explore the mind of an animal in ways that would not be possible with living subjects, says University of Sheffield researcher James Marshall, who in March helped launch a 3-year collaborative project to build a flying robot controlled by a computer-run simulation of the entire honeybee brain.
  • “I really think there is a lot to be discovered by doing the engineering side along with the science.”
  • Not only did the bots move around the space like the rat pups did, they aggregated in remarkably similar ways to the real animals. Then Schank realized that there was a bug in his program. The robots weren’t following his predetermined rules; they were moving randomly.
  • Animal experiments are still needed to advance neuroscience.” But, he adds, robots may prove to be an indispensable new ethological tool for focusing the scope of research. “If you can have good physical models,” Prescott says, “then you can reduce the number of experiments and only do the ones that answer really important questions.”
  • Building animal-mimicking robots is not easy, however, particularly when knowledge of the system’s biology is lacking.
  • However, when the researchers also gave the robots a sense of flow, and programmed them to assume that odors come from upstream, the bots much more closely mimicked real lobster behavior. “That was a demonstration that the animals’ brains were multimodal—that they were using chemical information and flow information,” says Grasso, who has since worked on robotic models of octopus arms and crayfish.
  • In some sense, the use of robotics in animal-behavior research is not that new. Since the inception of the field of ethology, researchers have been using simple physical models of animals—“dummies”—to examine the social behavior of real animals, and biologists began animating their dummies as soon as technology would allow. “The fundamental problem when you’re studying an interaction between two individuals is that it’s a two-way interaction—you’ve got two players whose behaviors are both variable.”
  • building a robot that animals will accept as one of their own is complicated, to say the least.
  • A handful of other researchers have also successfully integrated robots with live animals—including fish, ducks, and chickens. There are several notable benefits to intermixing robots and animals; first and foremost, control. “One of the problems when studying behavior is that, of course, it’s very difficult to have control of animals, and so it’s hard for us to interpret fully how they interact with each other.”
sissij

There's a Major Problem with AI's Decision Making | Big Think - 0 views

  • For eons, God has served as a standby for “things we don’t understand.” Once an innovative researcher or tinkering alchemist figures out the science behind the miracle, humans harness the power of chemistry, biology, or computer science.
  • The process of ‘deep learning’—in which a machine extracts information, often in an unsupervised manner, to teach and transform itself—exploits a longstanding human paradox: we believe ourselves to have free will, but really we’re a habit-making and -performing animal repeatedly playing out its own patterns.
  • When we place our faith in an algorithm we don’t understand—autonomous cars, stock trades, educational policies, cancer screenings—we’re risking autonomy, as well as the higher cognitive and emotional qualities that make us human, such as compassion, empathy, and altruism.
  • ...2 more annotations...
  • Of course, defining terms is of primary importance, a task that has proven impossible when discussing the nuances of consciousness, which is effectively the power we’re attempting to imbue our machines with.
  • What type of machines are we creating if we only recognize a “sort of” intelligence under the hood of our robots? For over a century, dystopian novelists have envisioned an automated future in which our machines best us. This is no longer a future scenario.
  •  
    In fiction, we often see scenes where AI robots start to take over the world. We humans are afraid of AI robots having emotions. As we discussed in TOK, there is a phenomenon in which the more human-like robots become, the more people are repulsed by them. I think that's because if robots start to have emotions, they could easily slip out of our control. We still see AI robots as lifeless gears and machines, but what if they are more than that? --Sissi (4/23/2017)
sissij

iPhone manufacturer Foxconn plans to replace almost every human worker with robots - Th... - 0 views

  • The first phase of Foxconn’s automation plans involves replacing the work that is either dangerous or involves repetitious labor humans are unwilling to do.
  • In the long term, robots are cheaper than human labor. However, the initial investment can be costly.
  • There is, however, a central side effect to automation that would specifically benefit a company like Foxconn.
  • ...2 more annotations...
  • So much so in fact that Foxconn had to install suicide netting at factories throughout China and take measures to protect itself against employee litigation.
  • But in doing so, it will ultimately end up putting hundreds of thousands, if not millions, of people out of work.
  •  
    It has always been debatable to what extent robots can replace humans. Foxconn has long been criticized for how it treats its workers. By replacing humans with robots, the company can save a lot of money and avoid a lot of condemnation and lawsuits. I think robots are definitely going to replace humans in dangerous and tiring work, but it is very important that society is prepared for that change. The government should improve education so that people can explore other possibilities for what they can do. --Sissi (12/31/2016)
tongoscar

Can Robots Reduce Racism And Sexism? - 0 views

  • Robots are becoming a regular part of our workplaces, serving as supermarket cashiers and building our cars.
  • Apparently, just thinking about robot workers leads people to think they have more in common with other human groups, according to research published in American Psychologist.
  • Basically, the robots reduced prejudice by highlighting the existence of a group that is not human. Study authors Joshua Conrad Jackson, Noah Castelo, and Kurt Gray summarized: “The large differences between humans and robots may make the differences between humans seem smaller than they normally appear. Christians and Muslims have different beliefs, but at least both are made from flesh and blood; Latinos and Asians may eat different foods, but at least they eat.”
  • ...3 more annotations...
  • Most importantly, the awareness of robots didn’t just change people’s attitudes, but also changed their behavior.
  • Although the previous study illustrated how robots can help eliminate the racial pay gap, it’s not clear what robots’ impact would be on the gender pay gap.
  • Computers are given gendered voices and human appearances so they seem more like us.
Emily Horwitz

Paralyzed Mom Controls Robotic Arm Using Her Thoughts - Yahoo! News - 0 views

  • After years of paralysis, the one thing Jan Scheuermann wanted was to feed herself. Now, thanks to a mind-controlled robotic arm, Scheuermann has done just that.
  • By implanting two quarter-inch-by-quarter-inch electrodes in her brain and connecting them to a sophisticated robotic arm, researchers at the University of Pittsburgh School of Medicine and University of Pittsburgh Medical Center have allowed the mother of two to manipulate objects by using only her thoughts through a brain-computer interface, or BCI.
  • "They asked me if there was something special I wanted to do," Scheuermann said. "And I said my goal is to feed myself a bar of chocolate. And I did that today."
  • ...6 more annotations...
  • Quadriplegics like Scheuermann have manipulated robotic arms using BCI before.
  • "With three degrees of control, you can do things like manipulate a computer screen, and that gentleman was able to reach out and touch his daughter."
  • But to actually manipulate objects, to feed yourself for example, you need more than those three dimensions of control. That's what makes Jan so remarkable.
  • "The biggest change," Boninger said, "is the sophistication with which we've learned to interpret electrical activity in the brain."
  • "I wouldn't say we have decoded the brain," Boninger said. "But we are getting closer. We can't read emotions but we can interpret motions the brain wants the body to make."
  • "For me, it's been one of the most exciting endeavors I have ever undertaken," Scheuermann wrote on the University of Pittsburgh Medical Center blog. "Being with a team of scientists and using cutting-edge technology that makes me the only person in the world who can scratch her nose with a robotic arm, well, that's thrilling."
Ryan Beneck

Soft Touch: Squishy Robots Could Lead to Cheaper, Safer Medical Devices: Scientific Ame... - 0 views

  • Hard robots require a sophisticated feedback mechanism to help them determine how much force to apply during surgery so they do not damage our delicate tissues and organs. Soft robots could take advantage of their rubbery appendages to reduce the likelihood of surgical damage
  • Soft robots can be 3-D printed in a day or two from silicone and other materials that cost about $20.
Javier E

Welcome, Robot Overlords. Please Don't Fire Us? | Mother Jones - 0 views

  • This is the happy version. It's the one where computers keep getting smarter and smarter, and clever engineers keep building better and better robots. By 2040, computers the size of a softball are as smart as human beings. Smarter, in fact. Plus they're computers: They never get tired, they're never ill-tempered, they never make mistakes, and they have instant access to all of human knowledge.
  • Just as it took us until 2025 to fill up Lake Michigan, the simple exponential curve of Moore's Law suggests it's going to take us until 2025 to build a computer with the processing power of the human brain. And it's going to happen the same way: For the first 70 years, it will seem as if nothing is happening, even though we're doubling our progress every 18 months. Then, in the final 15 years, seemingly out of nowhere, we'll finish the job.
  • And that's exactly where we are. We've moved from computers with a trillionth of the power of a human brain to computers with a billionth of the power. Then a millionth. And now a thousandth. Along the way, computers progressed from ballistics to accounting to word processing to speech recognition, and none of that really seemed like progress toward artificial intelligence. That's because even a thousandth of the power of a human brain is—let's be honest—a bit of a joke.
  • ...4 more annotations...
  • But there's another reason as well: Every time computers break some new barrier, we decide—or maybe just finally get it through our thick skulls—that we set the bar too low.
  • the best estimates of the human brain suggest that our own processing power is about equivalent to 10 petaflops. ("Peta" comes after giga and tera.) That's a lot of flops, but last year an IBM Blue Gene/Q supercomputer at Lawrence Livermore National Laboratory was clocked at 16.3 petaflops.
  • in Lake Michigan terms, we finally have a few inches of water in the lake bed, and we can see it rising. All those milestones along the way—playing chess, translating web pages, winning at Jeopardy!, driving a car—aren't just stunts. They're precisely the kinds of things you'd expect as we struggle along with platforms that aren't quite powerful enough—yet. True artificial intelligence will very likely be here within a couple of decades. Making it small, cheap, and ubiquitous might take a decade more.
  • In other words, by about 2040 our robot paradise awaits.
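The doubling arithmetic in these excerpts can be sketched in a few lines. This is an illustrative calculation only, using the article's own assumptions (processing power doubling every 18 months, and a human brain of roughly 10 petaflops); the function name and the thousandfold steps are mine, chosen to mirror the trillionth-to-thousandth progression the article describes.

```python
import math

# Sketch of the article's Moore's-law arithmetic (illustrative, not a forecast).
# Assumed from the article: power doubles every 18 months; a human brain is
# roughly 10 petaflops.
BRAIN_PFLOPS = 10.0
DOUBLING_PERIOD_YEARS = 1.5


def years_to_reach(target_pflops: float, current_pflops: float) -> float:
    """Years needed to grow from `current` to `target` at one doubling
    every 18 months."""
    doublings = math.log2(target_pflops / current_pflops)
    return doublings * DOUBLING_PERIOD_YEARS


# Each factor-of-1,000 step (trillionth -> billionth -> millionth -> thousandth
# of a brain) is about 10 doublings, i.e. roughly 15 years:
step = years_to_reach(BRAIN_PFLOPS, BRAIN_PFLOPS / 1000)
print(f"one 1,000x step takes about {step:.0f} years")
```

Four such steps span roughly 60 years in which "nothing seems to be happening," which is why the last thousandfold step, the one now underway, feels like it arrives out of nowhere.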
Javier E

Look At Me by Patricia Snow | Articles | First Things - 0 views

  • Maurice stumbles upon what is still the gold standard for the treatment of infantile autism: an intensive course of behavioral therapy called applied behavioral analysis that was developed by psychologist O. Ivar Lovaas at UCLA in the 1970s
  • in a little over a year’s time she recovers her daughter to the point that she is indistinguishable from her peers.
  • Let Me Hear Your Voice is not a particularly religious or pious work. It is not the story of a miracle or a faith healing
  • ...54 more annotations...
  • Maurice discloses her Catholicism, and the reader is aware that prayer undergirds the therapy, but the book is about the therapy, not the prayer. Specifically, it is about the importance of choosing methods of treatment that are supported by scientific data. Applied behavioral analysis is all about data: its daily collection and interpretation. The method is empirical, hard-headed, and results-oriented.
  • on a deeper level, the book is profoundly religious, more religious perhaps than its author intended. In this reading of the book, autism is not only a developmental disorder afflicting particular individuals, but a metaphor for the spiritual condition of fallen man.
  • Maurice’s autistic daughter is indifferent to her mother
  • In this reading of the book, the mother is God, watching a child of his wander away from him into darkness: a heartbroken but also a determined God, determined at any cost to bring the child back
  • the mother doesn’t turn back, concedes nothing to the condition that has overtaken her daughter. There is no political correctness in Maurice’s attitude to autism; no nod to “neurodiversity.” Like the God in Donne’s sonnet, “Batter my heart, three-personed God,” she storms the walls of her daughter’s condition
  • Like God, she sets her sights high, commits both herself and her child to a demanding, sometimes painful therapy (life!), and receives back in the end a fully alive, loving, talking, and laughing child
  • the reader realizes that for God, the harrowing drama of recovery is never a singular, or even a twice-told tale, but a perennial one. Every child of his, every child of Adam and Eve, wanders away from him into darkness
  • we have an epidemic of autism, or “autism spectrum disorder,” which includes classic autism (Maurice’s children’s diagnosis); atypical autism, which exhibits some but not all of the defects of autism; and Asperger’s syndrome, which is much more common in boys than in girls and is characterized by average or above average language skills but impaired social skills.
  • At the same time, all around us, we have an epidemic of something else. On the street and in the office, at the dinner table and on a remote hiking trail, in line at the deli and pushing a stroller through the park, people go about their business bent over a small glowing screen, as if praying.
  • This latter epidemic, or experiment, has been going on long enough that people are beginning to worry about its effects.
  • for a comprehensive survey of the emerging situation on the ground, the interested reader might look at Sherry Turkle’s recent book, Reclaiming Conversation: The Power of Talk in a Digital Age.
  • she also describes in exhaustive, chilling detail the mostly horrifying effects recent technology has had on families and workplaces, educational institutions, friendships and romance.
  • many of the promises of technology have not only not been realized, they have backfired. If technology promised greater connection, it has delivered greater alienation. If it promised greater cohesion, it has led to greater fragmentation, both on a communal and individual level.
  • If thinking that the grass is always greener somewhere else used to be a marker of human foolishness and a temptation to be resisted, today it is simply a possibility to be checked out. The new phones, especially, turn out to be portable Pied Pipers, irresistibly pulling people away from the people in front of them and the tasks at hand.
  • all it takes is a single phone on a table, even if that phone is turned off, for the conversations in the room to fade in number, duration, and emotional depth.
  • an infinitely malleable screen isn’t an invitation to stability, but to restlessness
  • Current media, and the fear of missing out that they foster (a motivator now so common it has its own acronym, FOMO), drive lives of continual interruption and distraction, of virtual rather than real relationships, and of “little” rather than “big” talk
  • if you may be interrupted at any time, it makes sense, as a student explains to Turkle, to “keep things light.”
  • we are reaping deficits in emotional intelligence and empathy; loneliness, but also fears of unrehearsed conversations and intimacy; difficulties forming attachments but also difficulties tolerating solitude and boredom
  • consider the testimony of the faculty at a reputable middle school where Turkle is called in as a consultant
  • The teachers tell Turkle that their students don’t make eye contact or read body language, have trouble listening, and don’t seem interested in each other, all markers of autism spectrum disorder
  • Like much younger children, they engage in parallel play, usually on their phones. Like autistic savants, they can call up endless information on their phones, but have no larger context or overarching narrative in which to situate it
  • Students are so caught up in their phones, one teacher says, “they don’t know how to pay attention to class or to themselves or to another person or to look in each other’s eyes and see what is going on.
  • “It is as though they all have some signs of being on an Asperger’s spectrum. But that’s impossible. We are talking about a schoolwide problem.”
  • Can technology cause Asperger’
  • “It is not necessary to settle this debate to state the obvious. If we don’t look at our children and engage them in conversation, it is not surprising if they grow up awkward and withdrawn.”
  • In the protocols developed by Ivar Lovaas for treating autism spectrum disorder, every discrete trial in the therapy, every drill, every interaction with the child, however seemingly innocuous, is prefaced by this clear command: “Look at me!”
  • If absence of relationship is a defining feature of autism, connecting with the child is both the means and the whole goal of the therapy. Applied behavioral analysis does not concern itself with when exactly, how, or why a child becomes autistic, but tries instead to correct, do over, and even perhaps actually rewire what went wrong, by going back to the beginning
  • Eye contact—which we know is essential for brain development, emotional stability, and social fluency—is the indispensable prerequisite of the therapy, the sine qua non of everything that happens.
  • There are no shortcuts to this method; no medications or apps to speed things up; no machines that can do the work for us. This is work that only human beings can do
  • it must not only be started early and be sufficiently intensive, but it must also be carried out in large part by parents themselves. Parents must be trained and involved, so that the treatment carries over into the home and continues for most of the child’s waking hours.
  • there are foundational relationships that are templates for all other relationships, and for learning itself.
  • Maurice’s book, in other words, is not fundamentally the story of a child acquiring skills, though she acquires them perforce. It is the story of the restoration of a child’s relationship with her parents
  • it is also impossible to overstate the time and commitment that were required to bring it about, especially today, when we have so little time, and such a faltering, diminished capacity for sustained engagement with small children
  • The very qualities that such engagement requires, whether our children are sick or well, are the same qualities being bred out of us by technologies that condition us to crave stimulation and distraction, and by a culture that, through a perverse alchemy, has changed what was supposed to be the freedom to work anywhere into an obligation to work everywhere.
  • In this world of total work (the phrase is Josef Pieper’s), the work of helping another person become fully human may be work that is passing beyond our reach, as our priorities, and the technologies that enable and reinforce them, steadily unfit us for the work of raising our own young.
  • in Turkle’s book, as often as not, it is young people who are distressed because their parents are unreachable. Some of the most painful testimony in Reclaiming Conversation is the testimony of teenagers who hope to do things differently when they have children, who hope someday to learn to have a real conversation, and so on
  • it was an older generation that first fell under technology’s spell. At the middle school Turkle visits, as at many other schools across the country, it is the grown-ups who decide to give every child a computer and deliver all course content electronically, meaning that they require their students to work from the very medium that distracts them, a decision the grown-ups are unwilling to reverse, even as they lament its consequences.
  • we have approached what Turkle calls the robotic moment, when we will have made ourselves into the kind of people who are ready for what robots have to offer. When people give each other less, machines seem less inhuman.
  • robot babysitters may not seem so bad. The robots, at least, will be reliable!
  • If human conversations are endangered, what of prayer, a conversation like no other? All of the qualities that human conversation requires—patience and commitment, an ability to listen and a tolerance for aridity—prayer requires in greater measure.
  • this conversation—the Church exists to restore. Everything in the traditional Church is there to facilitate and nourish this relationship. Everything breathes, “Look at me!”
  • there is a second path to God, equally enjoined by the Church, and that is the way of charity to the neighbor, but not the neighbor in the abstract.
  • “Who is my neighbor?” a lawyer asks Jesus in the Gospel of Luke. Jesus’s answer is, the one you encounter on the way.
  • Virtue is either concrete or it is nothing. Man’s path to God, like Jesus’s path on the earth, always passes through what the Jesuit Jean Pierre de Caussade called “the sacrament of the present moment,” which we could equally call “the sacrament of the present person,” the way of the Incarnation, the way of humility, or the Way of the Cross.
  • The tradition of Zen Buddhism expresses the same idea in positive terms: Be here now.
  • Both of these privileged paths to God, equally dependent on a quality of undivided attention and real presence, are vulnerable to the distracting eye-candy of our technologies
  • Turkle is at pains to show that multitasking is a myth, that anyone trying to do more than one thing at a time is doing nothing well. We could also call what she was doing multi-relating, another temptation or illusion widespread in the digital age. Turkle’s book is full of people who are online at the same time that they are with friends, who are texting other potential partners while they are on dates, and so on.
  • This is the situation in which many people find themselves today: thinking that they are special to someone because of something that transpired, only to discover that the other person is spread so thin, the interaction was meaningless. There is a new kind of promiscuity in the world, in other words, that turns out to be as hurtful as the old kind.
  • Who can actually multitask and multi-relate? Who can love everyone without diluting or cheapening the quality of love given to each individual? Who can love everyone without fomenting insecurity and jealousy? Only God can do this.
  • When an individual needs to be healed of the effects of screens and machines, it is real presence that he needs: real people in a real world, ideally a world of God’s own making
  • Nature is restorative, but it is conversation itself, unfolding in real time, that strikes these boys with the force of revelation. More even than the physical vistas surrounding them on a wilderness hike, unrehearsed conversation opens up for them new territory, open-ended adventures. “It was like a stream,” one boy says, “very ongoing. It wouldn’t break apart.”
  • in the waters of baptism, the new man is born, restored to his true parent, and a conversation begins that over the course of his whole life reminds man of who he is, that he is loved, and that someone watches over him always.
  • Even if the Church could keep screens out of her sanctuaries, people strongly attached to them would still be people poorly positioned to take advantage of what the Church has to offer. Anxious people, unable to sit alone with their thoughts. Compulsive people, accustomed to checking their phones, on average, every five and a half minutes. As these behaviors increase in the Church, what is at stake is man’s relationship with truth itself.
cvanderloo

Sewage-testing robots process wastewater faster to predict COVID-19 outbreaks sooner - 0 views

  • By using a sewage-handling robot, our laboratory has been able to detect coronavirus in wastewater 30 times faster than nonautomated large-scale systems.
  • When clinical studies emerged showing that people who test positive for SARS-CoV-2 shed the virus in their stool, the sewer seemed like an obvious place to look for it.
  • Surveillance depends on concentrating the viral particles from the wastewater to detect these low levels.
  • ...5 more annotations...
  • Wastewater surveillance is especially useful as an early-alert system for high-risk areas, such as communities where undocumented residents may be cautious about individual testing.
  • Our new protocol concentrates 24 samples in a single 40-minute run.
  • The sewage-handling robot is equipped with a specialized magnetic head that snags the magnetic beads, with viruses attached.
  • Overall, our system can process 96 samples in 4.5 hours, dramatically reducing the time from specimen to result.
  • We’re now using the viral genome sequencing part of our system to track the emergence of new SARS-CoV-2 variants.
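The throughput figures quoted above (24 samples per 40-minute concentration run, and 96 samples start-to-finish in 4.5 hours) can be turned into hourly rates with a quick sketch; only those two figures come from the excerpt, the arithmetic framing is an illustration:

```python
# Rates implied by the figures in the article: 24 samples per 40-minute
# concentration run, and 96 samples start-to-finish in 4.5 hours.

samples_per_run = 24
run_minutes = 40
concentration_rate = samples_per_run / (run_minutes / 60)  # samples per hour

total_samples = 96
total_hours = 4.5
end_to_end_rate = total_samples / total_hours  # samples per hour, full pipeline

print(f"concentration step: {concentration_rate:.0f} samples/hour")
print(f"full pipeline:      {end_to_end_rate:.1f} samples/hour")
```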
sissij

Prejudice AI? Machine Learning Can Pick up Society's Biases | Big Think - 1 views

  • We think of computers as emotionless automatons and artificial intelligence as stoic, zen-like programs, mirroring Mr. Spock, devoid of prejudice and unable to be swayed by emotion.
  • They say that AI picks up our innate biases about sex and race, even when we ourselves may be unaware of them. The results of this study were published in the journal Science.
  • After interacting with certain users, she began spouting racist remarks.
  • ...2 more annotations...
  • It just learns everything from us and as our echo, picks up the prejudices we’ve become deaf to.
  • AI will have to be programmed to embrace equality.
  •  
    I just feel like this is so ironic. As the parents of the AI, humans themselves can't even be equal, so how can we expect the robots we made to perform perfect humanity and embrace flawless equality? I think equality itself is flawed. How can we define equality? Just like we cannot define fairness, we cannot define equality. I think this robot picking up racist remarks shows how children become racist. It also reflects how powerful cultural context and social norms are. They can shape us subconsciously. --Sissi (4/20/2017)
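The Science study referenced above measured bias as differences in similarity within a word-embedding space (the WEAT method). A minimal sketch with invented toy vectors (the vector values are illustrative assumptions, not real embedding data):

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-d "embeddings" (hypothetical values, purely illustrative).
vectors = {
    "man":      np.array([0.9, 0.1, 0.0]),
    "woman":    np.array([0.1, 0.9, 0.0]),
    "engineer": np.array([0.8, 0.2, 0.1]),
    "nurse":    np.array([0.2, 0.8, 0.1]),
}

def association(word):
    # WEAT-style score: positive means the word sits closer to "man"
    # than to "woman" in this embedding space.
    return cosine(vectors[word], vectors["man"]) - cosine(vectors[word], vectors["woman"])

print(f"engineer: {association('engineer'):+.3f}")
print(f"nurse:    {association('nurse'):+.3f}")
```

Real embeddings trained on web text show exactly this asymmetry, which is how a model can "pick up" biases no one explicitly programmed.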
julia rhodes

Opinion: Is Google redefining 'don't be evil'? - CNN.com - 0 views

  • Well, some of Google's recent forays are waking people up to the fact that evil is in the eyes of the beholder. The company just acquired military robot maker Boston Dynamics, leading to great consternation in the Twitterverse. As @BrentButt put it this week in a tweet that caught fire:
  • What we have to ask, and keep asking at every turn, is: To what end? What real purpose are we serving?
  • Not doing evil is actually a pretty low bar to begin with. Is this really a high aspiration? To avoid embodying Satan in silicon?
  • ...5 more annotations...
  • We can't employ an entirely programmatic approach to human affairs. However well we think we might be embedding our technologies with the values we hope to express, more often than not we also get unexpected consequences.
  • Still, we can't help but do a bit of evil when we build technology upon technology, without taking a pause to ask what it's all for. New technologies give us the opportunity to reevaluate the systems we have been using up until now, and consider doing things differently.
  • our best Stanford computer science graduates end up writing algorithms that better extract money from the stock market, rather than exploring whether capital is even serving its original purpose of getting funds to new businesses.
  • When we develop technology in a vacuum, disconnected from the reality in which people really live, we are too likely to spend our energy designing some abstract vision of a future life rather than addressing the pains and injustices around us right now. Technology becomes a way of escaping the world's problems, whether through virtual reality or massive Silicon Valley stock options packages, rather than engaging with them.
  • . It's not enough to computerize and digitize the society we have, and exacerbate its problems by new means. We must transcend the mere avoidance of the patently evil and instead seek to do good. That may involve actually overturning and remaking some institutions and processes from the ground up. That's the real potential of digital technology. To retrieve the values and ideas that may have seemed impossible before and see whether we can realize them today in this very new world.
pier-paolo

Computers Already Learn From Us. But Can They Teach Themselves? - The New York Times - 0 views

  • We teach computers to see patterns, much as we teach children to read. But the future of A.I. depends on computer systems that learn on their own, without supervision, researchers say.
  • When a mother points to a dog and tells her baby, “Look at the doggy,” the child learns what to call the furry four-legged friends. That is supervised learning. But when that baby stands and stumbles, again and again, until she can walk, that is something else. Computers are the same.
  • Even if a supervised learning system read all the books in the world, he noted, it would still lack human-level intelligence because so much of our knowledge is never written down.
  • ...9 more annotations...
  • Supervised learning depends on annotated data: images, audio or text that is painstakingly labeled by hordes of workers. They circle people or outline bicycles on pictures of street traffic. The labeled data is fed to computer algorithms, teaching the algorithms what to look for. After ingesting millions of labeled images, the algorithms become expert at recognizing what they have been taught to see.
  • There is also reinforcement learning, with very limited supervision, that does not rely on training data. Reinforcement learning in computer science is modeled after reward-driven learning in the brain: Think of a rat learning to push a lever to receive a pellet of food. The strategy has been developed to teach computer systems to take actions.
  • “My money is on self-supervised learning,” he said, referring to computer systems that ingest huge amounts of unlabeled data and make sense of it all without supervision or reward. He is working on models that learn by observation, accumulating enough background knowledge that some sort of common sense can emerge.
  • A more inclusive term for the future of A.I., he said, is “predictive learning,” meaning systems that not only recognize patterns but also predict outcomes and choose a course of action. “Everybody agrees we need predictive learning, but we disagree about how to get there,”
  • “A huge fraction of what we do in our day-to-day jobs is constantly refining our mental models of the world and then using those mental models to solve problems,” he said. “That encapsulates an awful lot of what we’d like A.I. to do.”
  • Currently, robots can operate only in well-defined environments with little variation.
  • “Our working assumption is that if we build sufficiently general algorithms, then all we really have to do, once that’s done, is to put them in robots that are out there in the real world doing real things,”
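The contrast the article draws above, a label for every example versus a bare reward signal after acting, can be sketched in a few lines; the two-lever task and all constants are toy assumptions, not from the article:

```python
import random

random.seed(0)  # deterministic for illustration

# Supervised learning: every example arrives with its correct label.
labeled_data = [("woof", "dog"), ("meow", "cat")]
classifier = {example: label for example, label in labeled_data}  # memorize the labels

# Reinforcement learning: no labels, only a reward after acting.
# Toy task: learn which of two levers yields food (lever 1 does).
values = [0.0, 0.0]  # running value estimate for each lever
for _ in range(500):
    if random.random() < 0.1:                       # explore occasionally
        lever = random.randrange(2)
    else:                                           # otherwise act greedily
        lever = max((0, 1), key=lambda a: values[a])
    reward = 1.0 if lever == 1 else 0.0
    values[lever] += 0.1 * (reward - values[lever])  # nudge estimate toward reward

print(classifier["woof"], values)
```

After a few hundred trials the estimate for the rewarding lever approaches 1.0, the rat-and-pellet dynamic the article describes, with no labeled data anywhere.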
knudsenlu

You Are Already Living Inside a Computer - The Atlantic - 1 views

  • Nobody really needs smartphone-operated bike locks or propane tanks. And they certainly don’t need gadgets that are less trustworthy than the “dumb” ones they replace, a sin many smart devices commit. But people do seem to want them—and in increasing numbers.
  • Why? One answer is that consumers buy what is on offer, and manufacturers are eager to turn their dumb devices smart. Doing so allows them more revenue, more control, and more opportunity for planned obsolescence. It also creates a secondary market for data collected by means of these devices. Roomba, for example, hopes to deduce floor plans from the movement of its robotic home vacuums so that it can sell them as business intelligence.
  • And the more people love using computers for everything, the more life feels incomplete unless it takes place inside them.
  • ...15 more annotations...
  • Computers already are predominant, human life already takes place mostly within them, and people are satisfied with the results.
  • These devices pose numerous problems. Cost is one. Like a cheap propane gauge, a traditional bike lock is a commodity. It can be had for $10 to $15, a tenth of the price of Nokē’s connected version. Security and privacy are others. The CIA was rumored to have a back door into Samsung TVs for spying. Disturbed people have been caught speaking to children over hacked baby monitors. A botnet commandeered thousands of poorly secured internet-of-things devices to launch a massive distributed denial-of-service attack against the domain-name system.
  • Reliability plagues internet-connected gadgets, too. When the network is down, or the app’s service isn’t reachable, or some other software behavior gets in the way, the products often cease to function properly—or at all.
  • Turing guessed that machines would become most compelling when they became convincing companions, which is essentially what today’s smartphones (and smart toasters) do.
  • But Turing never claimed that machines could think, let alone that they might equal the human mind. Rather, he surmised that machines might be able to exhibit convincing behavior.
  • People choose computers as intermediaries for the sensual delight of using computers
  • One such affection is the pleasure of connectivity. You don’t want to be offline. Why would you want your toaster or doorbell to suffer the same fate? Today, computational absorption is an ideal. The ultimate dream is to be online all the time, or at least connected to a computational machine of some kind.
  • “Being a computer” means something different today than in 1950, when Turing proposed the imitation game. Contra the technical prerequisites of artificial intelligence, acting like a computer often involves little more than moving bits of data around, or acting as a controller or actuator. Grill as computer, bike lock as computer, television as computer. An intermediary
  • Or consider doorbells once more. Forget Ring, the doorbell has already retired in favor of the computer. When my kids’ friends visit, they just text a request to come open the door. The doorbell has become computerized without even being connected to an app or to the internet. Call it “disruption” if you must, but doorbells and cars and taxis hardly vanish in the process. Instead, they just get moved inside of computers, where they can produce new affections.
  • The present status of intelligent machines is more powerful than any future robot apocalypse.
  • Why would anyone ever choose a solution that doesn’t involve computers, when computers are available? Propane tanks and bike locks are still edge cases, but ordinary digital services work similarly: The services people seek out are the ones that allow them to use computers to do things—from finding information to hailing a cab to ordering takeout. This is a feat of aesthetics as much as it is one of business. People choose computers as intermediaries for the sensual delight of using computers, not just as practical, efficient means for solving problems.
  • This is not where anyone thought computing would end up. Early dystopic scenarios cautioned that the computer could become a bureaucrat or a fascist, reducing human behavior to the predetermined capacities of a dumb machine. Or else, that obsessive computer use would be deadening, sucking humans into narcotic detachment. Those fears persist to some extent, partly because they have been somewhat realized. But they have also been inverted. Being away from them now feels deadening, rather than being attached to them without end. And thus, the actions computers take become self-referential: to turn more and more things into computers to prolong that connection.
  • But the real present status of intelligent machines is both humdrum and more powerful than any future robot apocalypse. Turing is often called the father of AI, but he only implied that machines might become compelling enough to inspire interaction. That hardly counts as intelligence, artificial or real. It’s also far easier to achieve. Computers already have persuaded people to move their lives inside of them. The machines didn’t need to make people immortal, or promise to serve their every whim, or to threaten to destroy them absent assent. They just needed to become a sufficient part of everything human beings do such that they can’t—or won’t—imagine doing those things without them.
  • The real threat of computers isn’t that they might overtake and destroy humanity with their future power and intelligence. It’s that they might remain just as ordinary and impotent as they are today, and yet overtake us anyway.
katherineharron

CES 2020: Toyota is building a 'smart' city to test AI, robots and self-driving cars - ... - 0 views

  • Carmaker Toyota has unveiled plans for a 2,000-person "city of the future," where it will test autonomous vehicles, smart technology and robot-assisted living.
  • "With people, buildings and vehicles all connected and communicating with each other through data and sensors, we will be able to test AI technology, in both the virtual and the physical world, maximizing its potential," he said on stage during Tuesday's unveiling. "We want to turn artificial intelligence into intelligence amplified."
  • The project is a collaboration between the Japanese carmaker and Danish architecture firm Bjarke Ingels Group (BIG), which designed the city's master plan. Buildings on the site will be made primarily from wood, and partly constructed using robotics. But the designs also look to Japan's past for inspiration, incorporating traditional joinery techniques and the sweeping roofs characteristic of the country's architecture.
  • ...2 more annotations...
  • Smart technology will extend inside residents' homes, according to Ingels, whose firm also designed the 2 World Trade Center in New York, and Google's headquarters in both London and Silicon Valley.
  • "In an age when technology, social media and online retail is replacing and eliminating our natural meeting places, the Woven City will explore ways to stimulate human interaction in the urban space," he said. "After all, human connectivity is the kind of connectivity that triggers wellbeing and happiness, productivity and innovation."
Javier E

Scientists See Advances in Deep Learning, a Part of Artificial Intelligence - NYTimes.com - 1 views

  • Using an artificial intelligence technique inspired by theories about how the brain recognizes patterns, technology companies are reporting startling gains in fields as diverse as computer vision, speech recognition and the identification of promising new molecules for designing drugs.
  • They offer the promise of machines that converse with humans and perform tasks like driving cars and working in factories, raising the specter of automated robots that could replace human workers.
  • what is new in recent months is the growing speed and accuracy of deep-learning programs, often called artificial neural networks or just “neural nets” for their resemblance to the neural connections in the brain.
  • ...3 more annotations...
  • With greater accuracy, for example, marketers can comb large databases of consumer behavior to get more precise information on buying habits. And improvements in facial recognition are likely to make surveillance technology cheaper and more commonplace.
  • Modern artificial neural networks are composed of an array of software components, divided into inputs, hidden layers and outputs. The arrays can be “trained” by repeated exposures to recognize patterns like images or sounds.
  • “The point about this approach is that it scales beautifully. Basically you just need to keep making it bigger and faster, and it will get better. There’s no looking back now.”
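The inputs / hidden-layer / outputs structure and "training by repeated exposure" described above can be sketched as a tiny network learning XOR; every detail below (layer sizes, learning rate, targets) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Inputs, a hidden layer, and outputs -- the array structure described above.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Training by repeated exposure": gradient descent on squared error.
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)            # hidden activations
    out = sigmoid(h @ W2 + b2)          # network output
    grad_out = (out - y) * out * (1 - out)
    grad_h = grad_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out; b2 -= 0.5 * grad_out.sum(0)
    W1 -= 0.5 * X.T @ grad_h;   b1 -= 0.5 * grad_h.sum(0)

print(out.round().ravel())  # rounded predictions for the four XOR inputs
```

The quoted point about scaling is visible even here: more hidden units and more exposures make the fit better, with no change to the algorithm.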
Javier E

Moral code | Rough Type - 0 views

  • So you’re happily tweeting away as your Google self-driving car crosses a bridge, its speed precisely synced to the 50 m.p.h. limit. A group of frisky schoolchildren is also heading across the bridge, on the pedestrian walkway. Suddenly, there’s a tussle, and three of the kids are pushed into the road, right in your vehicle’s path. Your self-driving car has a fraction of a second to make a choice: Either it swerves off the bridge, possibly killing you, or it runs over the children. What does the Google algorithm tell it to do?
  • As we begin to have computer-controlled cars, robots, and other machines operating autonomously out in the chaotic human world, situations will inevitably arise in which the software has to choose between a set of bad, even horrible, alternatives. How do you program a computer to choose the lesser of two evils? What are the criteria, and how do you weigh them?
  • Since we humans aren’t very good at codifying responses to moral dilemmas ourselves, particularly when the precise contours of a dilemma can’t be predicted ahead of its occurrence, programmers will find themselves in an extraordinarily difficult situation. And one assumes that they will carry a moral, not to mention a legal, burden for the code they write.
  • ...1 more annotation...
  • We don’t even really know what a conscience is, but somebody’s going to have to program one nonetheless.
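The passage's question, what are the criteria and how do you weigh them, is exactly what any implementation would force into code. A deliberately crude sketch; every probability and weight below is an arbitrary placeholder, not a proposal:

```python
# Any algorithmic "choice between evils" forces someone to write down
# explicit criteria and weights. Every number here is an arbitrary placeholder.

def expected_harm(option):
    # expected harm = sum over affected people of (probability of death * weight)
    return sum(p * w for p, w in option["risks"])

swerve = {"name": "swerve off the bridge", "risks": [(0.5, 1.0)]}      # one occupant
stay   = {"name": "stay on course",        "risks": [(0.9, 1.0)] * 3}  # three children

choice = min([swerve, stay], key=expected_harm)
print(choice["name"], expected_harm(choice))
```

The point is not that this is the right function, but that shipping such code means someone chose those probabilities and weights, which is precisely the moral and legal burden the passage describes.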
Emily Horwitz

With Limited Budgets, Pursuing Science Smartly - NYTimes.com - 0 views

  • With the first presidential debate coming up on Wednesday, it is striking — if not surprising — how bland and predictable the candidates have been in discussing America’s role in space.
  • neither candidate, and neither party, has addressed the scientific question of why we want to bother with exploring space.
  • Maybe scientists should simply face reality and accept that science doesn’t play a central role in the government’s equation.
  • ...3 more annotations...
  • unmanned space exploration seems to me much more exciting and scientifically worthwhile than human spaceflight, especially at a time of restricted budgets and nascent technology.
  • A human geologist could do in a few days what Curiosity may do in a year or two. But we aren’t likely to send geologists there anytime soon — and if we did, they wouldn’t be able to stay much longer than a few days, while Curiosity can silently and gently move about the planet for a decade or more, powered by its plutonium generator. Moreover, by the time we might get around to sending humans, in two to three decades at best, robots will have advanced to the point where they might easily compete in real time.
  • Over the coming decades we may send more robotic explorers to even harsher climes, maybe to explore deep oceans on Jupiter’s moon Europa, or to search on comets for telltale signs of life’s origins.