
Home/ TOK Friends/ Group items tagged augmented reality


Javier E

Review: Vernor Vinge's 'Fast Times' | KurzweilAI - 0 views

  • Vernor Vinge’s Hugo-award-winning short science fiction story “Fast Times at Fairmont High” takes place in a near future in which everyone lives in a ubiquitous, wireless, networked world using wearable computers and contacts or glasses on which computer graphics are projected to create an augmented reality.
  • So what is life like in Vinge’s 2020? The biggest technological change involves ubiquitous computing, wearables, and augmented reality (although none of those terms are used). Everyone wears contacts or glasses which mediate their view of the world. This allows computer graphics to be superimposed on what they see. The computers themselves are actually built into the clothing (apparently because that is the cheapest way to do it) and everything communicates wirelessly.
  • If you want a computer display, it can appear in thin air, or be attached to a wall or any other surface. If people want to watch TV together they can agree on where the screen should appear and what show they watch. When doing your work, you can have screens on all your walls, menus attached here and there, however you want to organize things. But none of it is "really" there.
  • Does your house need a new coat of paint? Don’t bother, just enter it into your public database and you have a nice new mint green paint job that everyone will see. Want to redecorate? Do it with computer graphics. You can have a birdbath in the front yard inhabited by Disneyesque animals who frolic and play. Even indoors, don’t buy artwork, just download it from the net and have it appear where you want.
  • Got a zit? No need to cover up with Clearsil, just erase it from your public face and people will see the improved version. You can dress up your clothes and hairstyle as well.
  • Of course, anyone can turn off their enhancements and see the plain old reality, but most people don’t bother most of the time because things are ugly that way.
  • Some of the kids attending Fairmont Junior High do so remotely. They appear as "ghosts", indistinguishable from the other kids except that you can walk through them. They go to classes and raise their hands to ask questions just like everyone else. They see the school and everyone at the school sees them. Instead of visiting friends, the kids can all instantly appear at one another’s locations.
  • The computer synthesizing visual imagery is able to call on the localizer network for views beyond what the person is seeing. In this way you can have 360 degree vision, or even see through walls. This is a transparent society with a vengeance!
  • The cumulative effect of all this technology was absolutely amazing and completely believable
  • One thing that was believable is that it seemed that a lot of the kids cheated, and it was almost impossible for the adults to catch them. With universal network connectivity it would be hard to make sure kids are doing their work on their own. I got the impression the school sort of looked the other way, the idea being that as long as the kids solved their problems, even if they got help via the net, that was itself a useful skill that they would be relying on all their lives.
Javier E

Is our world a simulation? Why some scientists say it's more likely than not | Technolo... - 3 views

  • Musk is just one of the people in Silicon Valley to take a keen interest in the “simulation hypothesis”, which argues that what we experience as reality is actually a giant computer simulation created by a more sophisticated intelligence
  • The argument was formalized by Oxford University’s Nick Bostrom in 2003 (although the idea dates back as far as the 17th-century philosopher René Descartes). In a paper titled “Are You Living In a Simulation?”, Bostrom suggested that members of an advanced “posthuman” civilization with vast computing power might choose to run simulations of their ancestors in the universe.
  • If we believe that there is nothing supernatural about what causes consciousness and it’s merely the product of a very complex architecture in the human brain, we’ll be able to reproduce it. “Soon there will be nothing technical standing in the way to making machines that have their own consciousness,
  • At the same time, videogames are becoming more and more sophisticated and in the future we’ll be able to have simulations of conscious entities inside them.
  • “Forty years ago we had Pong – two rectangles and a dot. That’s where we were. Now 40 years later, we have photorealistic, 3D simulations with millions of people playing simultaneously and it’s getting better every year. And soon we’ll have virtual reality, we’ll have augmented reality,” said Musk. “If you assume any rate of improvement at all, then the games will become indistinguishable from reality.”
  • “If one progresses at the current rate of technology a few decades into the future, very quickly we will be a society where there are artificial entities living in simulations that are much more abundant than human beings.
  • If there are many more simulated minds than organic ones, then the chances of us being among the real minds starts to look more and more unlikely. As Terrile puts it: “If in the future there are more digital people living in simulated environments than there are today, then what is to say we are not part of that already?”
  • Reasons to believe that the universe is a simulation include the fact that it behaves mathematically and is broken up into pieces (subatomic particles) like a pixelated video game. “Even things that we think of as continuous – time, energy, space, volume – all have a finite limit to their size. If that’s the case, then our universe is both computable and finite. Those properties allow the universe to be simulated,” Terrile said
  • “Is it logically possible that we are in a simulation? Yes. Are we probably in a simulation? I would say no,” said Max Tegmark, a professor of physics at MIT.
  • “In order to make the argument in the first place, we need to know what the fundamental laws of physics are where the simulations are being made. And if we are in a simulation then we have no clue what the laws of physics are. What I teach at MIT would be the simulated laws of physics,”
  • Terrile believes that recognizing that we are probably living in a simulation is as game-changing as Copernicus realizing that the Earth was not the center of the universe. “It was such a profound idea that it wasn’t even thought of as an assumption,”
  • That we might be in a simulation is, Terrile argues, a simpler explanation for our existence than the idea that we are the first generation to rise up from primordial ooze and evolve into molecules, biology and eventually intelligence and self-awareness. The simulation hypothesis also accounts for peculiarities in quantum mechanics, particularly the measurement problem, whereby things only become defined when they are observed.
  • “For decades it’s been a problem. Scientists have bent over backwards to eliminate the idea that we need a conscious observer. Maybe the real solution is you do need a conscious entity like a conscious player of a video game,
  • How can the hypothesis be put to the test?
  • scientists can look for hallmarks of simulation. “Suppose someone is simulating our universe – it would be very tempting to cut corners in ways that makes the simulation cheaper to run. You could look for evidence of that in an experiment,” said Tegmark
  • First, it provides a scientific basis for some kind of afterlife or larger domain of reality above our world. “You don’t need a miracle, faith or anything special to believe it. It comes naturally out of the laws of physics,”
  • it means we will soon have the same ability to create our own simulations. “We will have the power of mind and matter to be able to create whatever we want and occupy those worlds.”
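Terrile's counting argument in the annotations above can be sketched as a toy calculation (an illustrative simplification of Bostrom's indifference reasoning; the function name and population figures are invented for this sketch):

```python
# Toy sketch of the simulation counting argument: if simulated minds
# come to vastly outnumber organic ones, then a mind picked at random
# is almost certainly simulated. Population figures here are invented
# purely for illustration.

def p_organic(organic_minds: int, simulated_minds: int) -> float:
    """Chance that a randomly chosen mind is one of the organic ones."""
    return organic_minds / (organic_minds + simulated_minds)

print(p_organic(1, 0))    # no simulations exist -> 1.0
print(p_organic(1, 999))  # simulated minds outnumber us 999:1 -> 0.001
```

As the ratio of simulated to organic minds grows, the probability of being among the organic ones tends to zero — the intuition behind "what is to say we are not part of that already?"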
Javier E

Specs that see right through you - tech - 05 July 2011 - New Scientist - 0 views

  • a number of "social X-ray specs" that are set to transform how we interact with each other. By sensing emotions that we would otherwise miss, these technologies can thwart disastrous social gaffes and help us understand each other better.
  • In conversation, we pantomime certain emotions that act as social lubricants. We unconsciously nod to signal that we are following the other person's train of thought, for example, or squint a bit to indicate that we are losing track. Many of these signals can be misinterpreted - sometimes because different cultures have their own specific signals.
  • In 2005, she enlisted Simon Baron-Cohen, also at Cambridge, to help her identify a set of more relevant emotional facial states. They settled on six: thinking, agreeing, concentrating, interested - and, of course, the confused and disagreeing expressions
  • More often, we fail to spot them altogether.
  • To create this lexicon, they hired actors to mime the expressions, then asked volunteers to describe their meaning, taking the majority response as the accurate one.
  • The camera tracks 24 "feature points" on your conversation partner's face, and software developed by Picard analyses their myriad micro-expressions, how often they appear and for how long. It then compares that data with its bank of known expressions (see diagram).
  • Eventually, she thinks the system could be incorporated into a pair of augmented-reality glasses, which would overlay computer graphics onto the scene in front of the wearer.
  • the average person only managed to interpret, correctly, 54 per cent of Baron-Cohen's expressions on real, non-acted faces. This suggested to them that most people - not just those with autism - could use some help sensing the mood of people they are talking to.
  • set up a company called Affectiva, based in Waltham, Massachusetts, which is selling their expression recognition software. Their customers include companies that, for example, want to measure how people feel about their adverts or movies.
  • it's hard to fool the machine for long
  • In addition to facial expressions, we radiate a panoply of involuntary "honest signals", a term identified by MIT Media Lab researcher Alex Pentland in the early 2000s to describe the social signals that we use to augment our language. They include body language such as gesture mirroring, and cues such as variations in the tone and pitch of the voice. We do respond to these cues, but often not consciously. If we were more aware of them in others and ourselves, then we would have a fuller picture of the social reality around us, and be able to react more deliberately.
  • develop a small electronic badge that hangs around the neck. Its audio sensors record how aggressive the wearer is being, the pitch, volume and clip of their voice, and other factors. They called it the "jerk-o-meter".
  • it helped people realise when they were being either obnoxious or unduly self-effacing.
  • By the end of the experiment, all the dots had gravitated towards more or less the same size and colour. Simply being able to see their role in a group made people behave differently, and caused the group dynamics to become more even. The entire group's emotional intelligence had increased.
  • Some of our body's responses during a conversation are not designed for broadcast to another person - but it's possible to monitor those too. Your temperature and skin conductance can also reveal secrets about your emotional state, and Picard can tap them with a glove-like device called the Q Sensor. In response to stresses, good or bad, our skin becomes clammy, increasing its conductance, and the Q Sensor picks this up.
  • Physiological responses can now even be tracked remotely, in principle without your consent. Last year, Picard and one of her graduate students showed that it was possible to measure heart rate without any surface contact with the body. They used software linked to an ordinary webcam to read information about heart rate, blood pressure and skin temperature based on, among other things, colour changes in the subject's face
  • In Rio de Janeiro and São Paulo, police officers can decide whether someone is a criminal just by looking at them. Their glasses scan the features of a face, and match them against a database of criminal mugshots. A red light blinks if there's a match.
  • Thad Starner at Georgia Institute of Technology in Atlanta wears a small device he has built that looks like a monocle. It can retrieve video, audio or text snippets of past conversations with people he has spoken with, and even provide real-time links between past chats and topics he is currently discussing.
  • The US military has built a radar-imaging device that can see through walls to capture 3D images of people and objects beyond.
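The remote heart-rate trick described above — reading a pulse from colour changes via an ordinary webcam — comes down to signal processing: treat a per-frame facial brightness average as a time series and pick the dominant frequency in the plausible heart-rate band. The sketch below is an illustrative reconstruction, not Picard's actual method; the function name and the synthetic trace are invented, and real systems add face tracking and noise rejection.

```python
import numpy as np

# Hedged sketch of webcam pulse estimation: the subtle periodic colour
# change in the face is treated as a 1-D signal, and the strongest
# frequency in the physiologically plausible band is reported as the
# heart rate.

def estimate_bpm(brightness: np.ndarray, fps: float) -> float:
    signal = brightness - brightness.mean()            # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))             # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)  # bin frequencies in Hz
    band = (freqs >= 0.7) & (freqs <= 4.0)             # roughly 42-240 bpm
    peak = freqs[band][np.argmax(spectrum[band])]      # dominant frequency
    return peak * 60.0                                 # Hz -> beats per minute

# Synthetic trace: 20 s at 30 fps with a 1.2 Hz (72 bpm) pulse plus noise.
rng = np.random.default_rng(0)
t = np.arange(0, 20, 1 / 30)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.normal(size=t.size)
print(round(estimate_bpm(trace, fps=30)))  # recovers roughly 72
```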
Javier E

FaceApp helped a middle-aged man become a popular younger woman. His fan base has never... - 1 views

  • Soya’s fame illustrated a simple truth: that social media is less a reflection of who we are, and more a performance of who we want to be.
  • It also seemed to herald a darker future where our fundamental senses of reality are under siege: The AI that allows anyone to fabricate a face can also be used to harass women with “deepfake” pornography, invent fraudulent LinkedIn personas and digitally impersonate political enemies.
  • As the photos began receiving hundreds of likes, Soya’s personality and style began to come through. She was relentlessly upbeat. She never sneered or bickered or trolled. She explored small towns, savored scenic vistas, celebrated roadside restaurants’ simple meals.
  • She took pride in the basic things, like cleaning engine parts. And she only hinted at the truth: When one fan told her in October, “It’s great to be young,” Soya replied, “Youth does not mean a certain period of life, but how to hold your heart.”
  • She seemed, well, happy, and FaceApp had made her that way. Creating the lifelike impostor had taken only a few taps: He changed the “Gender” setting to “Female,” the “Age” setting to “Teen,” and the “Impression” setting — a mix of makeup filters — to a glamorous look the app calls “Hollywood.”
  • Users in the Internet’s early days rarely had any presumptions of authenticity, said Melanie C. Green, a University of Buffalo professor who studies technology and social trust. Most people assumed everyone else was playing a character clearly distinguished from their real life.
  • Nakajima grew his shimmering hair below his shoulders and raided his local convenience store for beauty supplies he thought would make the FaceApp images more convincing: blushes, eyeliners, concealers, shampoos.
  • “When I compare how I feel when I started to tweet as a woman and now, I do feel that I’m gradually gravitating toward this persona … this fantasy world that I created,” Nakajima said. “When I see photos of what I tweeted, I feel like, ‘Oh. That’s me.’ ”
  • The sensation Nakajima was feeling is so common that there’s a term for it: the Proteus effect, named for the shape-shifting Greek god. Stanford University researchers first coined it in 2007 to describe how people inhabiting the body of a digital avatar began to act the part
  • People made to appear taller in virtual-reality simulations acted more assertively, even after the experience ended. Prettier characters began to flirt.
  • What is it about online disguises? Why are they so good at bending people’s sense of self-perception?
  • they tap into this “very human impulse to play with identity and pretend to be someone you’re not.”
  • Soya pouted and scowled on rare occasions when Nakajima himself felt frustrated. But her baseline expression was an extra-wide smile, activated with a single tap.
  • “This identity play was considered one of the huge advantages of being online,” Green said. “You could switch your gender and try on all of these different personas. It was a playground for people to explore.”
  • But wasn’t this all just a big con? Nakajima had tricked people with a “cool girl” stereotype to boost his Twitter numbers. He hadn’t elevated the role of women in motorcycling; if anything, he’d supplanted them. And the character he’d created was paper thin: Soya had no internal complexity outside of what Nakajima had projected, just that eternally superimposed smile.
  • The Web’s big shift from text to visuals — the rise of photo-sharing apps, live streams and video calls — seemed at first to make that unspoken rule of real identities concrete. It seemed too difficult to fake one’s appearance when everyone’s face was on constant display.
  • Now, researchers argue, advances in image-editing artificial intelligence have done for the modern Internet what online pseudonyms did for the world’s first chat rooms. Facial filters have allowed anyone to mold themselves into the character they want to play.
  • researchers fear these augmented reality tools could end up distorting the beauty standards and expectations of actual reality.
  • Some political and tech theorists worry this new world of synthetic media threatens to detonate our concept of truth, eroding our shared experiences and infusing every online relationship with suspicion and self-doubt.
  • Deceptive political memes, conspiracy theories, anti-vaccine hoaxes and other scams have torn the fabric of our democracy, culture and public health.
  • But she also thinks about her kids, who assume “that everything online is fabricated,” and wonders whether the rules of online identity require a bit more nuance — and whether that generational shift is already underway.
  • “Bots pretending to be people, automated representations of humanity — that, they perceive as exploitative,” she said. “But if it’s just someone engaging in identity experimentation, they’re like: ‘Yeah, that’s what we’re all doing.’”
  • To their generation, “authenticity is not about: ‘Does your profile picture match your real face?’ Authenticity is: ‘Is your voice your voice?’
  • “Their feeling is: ‘The ideas are mine. The voice is mine. The content is mine. I’m just looking for you to receive it without all the assumptions and baggage that comes with it.’ That’s the essence of a person’s identity. That’s who they really are.”
  • It wasn’t until the rise of giant social networks like Facebook — which used real identities to, among other things, supercharge targeted advertising — that this big game of pretend gained an air of duplicity. Spaces for playful performance shrank, and the biggest Internet watering holes began demanding proof of authenticity as a way to block out malicious intent.
  • Perhaps he should have accepted his irrelevance and faded into the digital sunset, sharing his life for few to see. But some of Soya’s followers have said they never felt deceived: It was Nakajima — his enthusiasm, his attitude about life — they’d been charmed by all along. “His personality,” as one Twitter follower said, “shined through.”
  • In Nakajima’s mind, he’d used the tools of a superficial medium to craft genuine connections. He had not felt real until he had become noticed for being fake.
  • Nakajima said he doesn’t know how long he’ll keep Soya alive. But he said he’s grateful for the way she helped him feel: carefree, adventurous, seen.
Javier E

Unease for What Microsoft's HoloLens Will Mean for Our Screen-Obsessed Lives - NYTimes.com - 0 views

  • What is it about our current reality that is so insufficient that we feel compelled to augment or improve it? I understand why people bury themselves in their phones on elevator rides, on subways and in the queue for coffee, but it has gotten to the point where even our distractions require distractions. No media viewing experience seems complete without a second screen, where we can yammer with our friends on social media or in instant messages about what we are watching.
  • Every form of media is now companion media, none meriting a single, acute focus. We are either the most bored people in the history of our species or the ubiquity of distractions has made us act that way.
  • As adults, we make “friends” who are not actually friends, develop “followers” composed of people who would not follow us out of a room, and “like” things whether we really like them or not. We no longer even have to come up with a good line at a bar to meet someone. We already know he or she swiped right after seeing us on Tinder, so the social risk is low.
  • If Windows or something like it becomes the operating system not just for my desktop but for my world, how much will I actually have to venture out into it? I can have holographic conferences with my colleagues, virtually ski the KT-22 runs at Squaw Valley in California during my downtime and ask my virtual assistant to run my day, my house and my life. After all, I already talk to my phone and it talks back to me. We are BFFs, even though only one of us is actually human.
Javier E

Martha C. Nussbaum and David V. Johnson: The New Religious Intolerance - 2 views

  • DJ: You analyze fear as the emotion principally responsible for religious intolerance. You label fear the “narcissistic emotion.” But why think that the logic of fear—erring on the side of caution (“better to be safe than sorry”)—is narcissism rather than just good common sense, especially in an era of global terrorism and instability?
MN: Biological and psychological research on fear shows that it is in some respects more primitive than other emotions, involving parts of the brain that do not deal in reflection and balancing. It also focuses narrowly on the person’s own survival, which is useful in evolutionary terms, but not so useful if one wants a good society. These tendencies to narrowness can be augmented, as I show in my book, through rhetorical manipulation. Fear is a major source of the denial of equal respect to others. Fear is sometimes appropriate, of course, and I give numerous examples of this. But its tendencies toward narrowness make it easily manipulable by false information and rhetorical hype.
  • DJ: In comparing fear and empathy, you say that empathy “has its own narcissism.” Do all emotions have their own forms of narcissism, and if so, why call fear "a narcissistic emotion"?
MN: What I meant by my remarks about empathy is that empathy typically functions within a small circle, and is activated by vivid narratives, as Daniel Batson’s wonderful research has shown. So it is uneven and partial. But it is not primarily self-focused, as fear is. As John Stuart Mill said, fear tells us what we need to protect against for ourselves, and empathy helps us extend that protection to others.
  • MN: I think it’s OK to teach religious texts as literature, but better to teach them as history and social reality as part of learning what other people in one’s society believe and take seriously. I urge that all young people should get a rich and non-stereotypical understanding of all the major world religions. In the process, of course, the teacher must be aware of the multiplicity of interpretations and sects within each religion
  • DJ: Of the basic values of French liberalism—liberty, equality, and fraternity—the last, fraternity, always seems to get short shrift. Your book, by contrast, argues that religious tolerance and liberalism in general can only flourish if people cultivate active respect, civility, and civic friendship with their fellow citizens. If this is so crucial, why do traditional liberals fail to make it more central to their program?
  • MN: I think liberals associate the cultivation of public emotion with fascism and other illiberal ideologies. But if they study history more closely they will find many instances in which emotions are deliberately cultivated in the service of liberal ideals. My next book, Political Emotions, will study all of this in great detail. Any political principles that ask people to go beyond their own self-interest for the sake of justice requires the cultivation of emotion.
  • In the history of philosophy this was well understood, and figures as diverse as [Jean-Jacques] Rousseau, [Johann Gottfried von] Herder, [Giuseppe] Mazzini, Auguste Comte, John Stuart Mill, and John Rawls had a lot to say about the issue. In Mill’s case, he set about solving the problem posed by the confluence of liberalism and emotion: how can a society that cultivates emotion to support its political principles also preserve enough space for dissent, critique, and experimentation? My own proposal in the forthcoming book follows the lead of Mill—and, in India, of Rabindranath Tagore—and tries to show how a public culture of emotions, supporting the stability of good political principles, can also be liberal and protective of dissent. Some of the historical figures I study in this regard are Franklin Delano Roosevelt, Martin Luther King, Jr., Gandhi, and Nehru.
  • critics of the burqa typically look at the practices of others and find sexism and “objectification” of women there, while failing to look at the practices of the dominant culture, which are certainly suffused with sexism and objectification. I was one of the feminist philosophers who wrote about objectification as a fundamental problem, and what we were talking about was the portrayal of women as commodities for male use and control in violent pornography, in a great deal of our media culture, and in other cultural practices, such as plastic surgery. I would say that this type of objectification is not on the retreat but may even be growing. Go to a high school dance—even at a high-brow school such as the John Dewey Laboratory School on our campus [at the University of Chicago]—and you will see highly individual and intelligent teenage girls marketing themselves for male consumption in indistinguishable microskirts, prior to engaging in a form of group dancing that mimes sex, and effaces their individuality. (Boys wear regular and not particularly sexy clothing.)
  • Lots of bad things are and will remain legal: unkindness, emotional blackmail, selfishness. And though I think the culture of pornographic objectification does great damage to personal relations, I don’t think that legal bans are the answer.
  • we should confront sexism by argument and persuasion, and that to render all practices that objectify women illegal would be both too difficult (who would judge?) and too tyrannical.
  • the Palin reaction was a whole lot better than the standard reaction in Europe, which is that we should just ban things that we fear. It is really unbelievable, having just lectured on this topic here in Germany: my views, which are pretty mainstream in America, are found “extreme” and even “offensive” in Germany, and all sorts of quite refined people think that Islam poses a unique problem and that the law should be dragged in to protect the culture.
  • The problem with these Europeans is that they don’t want to ban platform shoes or spike heels either; they just want to ban practices of others which they have never tried to understand.
peterconnelly

Google's I/O Conference Offers Modest Vision of the Future - The New York Times - 0 views

  • SAN FRANCISCO — There was a time when Google offered a wondrous vision of the future, with driverless cars, augmented-reality eyewear, unlimited storage of emails and photos, and predictive texts to complete sentences in progress.
  • The bold vision is still out there — but it’s a ways away. The professional executives who now run Google are increasingly focused on wringing money out of those years of spending on research and development.
  • The company’s biggest bet in artificial intelligence does not, at least for now, mean science fiction come to life. It means more subtle changes to existing products.
  • At the same time, it was not immediately clear how some of the other groundbreaking work, like language models that better understand natural conversation or that can break down a task into logical smaller steps, will ultimately lead to the next generation of computing that Google has touted.
  • Many of those capabilities are powered by the deep technological work Google has done for years using so-called machine learning, image recognition and natural language understanding. It’s a sign of an evolution rather than revolution for Google and other large tech giants.
Javier E

'The Godfather of AI' Quits Google and Warns of Danger Ahead - The New York Times - 0 views

  • he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
  • Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
  • “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,”
  • Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
  • But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
  • “It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
  • After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”
  • Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.
  • Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job
  • Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
  • Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”
  • Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.
  • In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
  • In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.
  • Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others.
  • “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
  • As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
  • Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
  • His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
  • He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
  • Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.
  • And he fears a day when truly autonomous weapons — those killer robots — become reality.
  • “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
  • Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
  • But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
  • Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
  • He does not say that anymore.
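The article's one-line definition of a neural network — "a mathematical system that learns skills by analyzing data" — can be made concrete with a toy example (invented for illustration, nothing like the scale of Hinton's actual models): a single sigmoid neuron adjusting its weights until it reproduces logical AND from examples.

```python
import math

# Minimal illustration of "learning skills by analyzing data": one
# artificial neuron repeatedly compares its output to labeled examples
# and nudges its weights to reduce the error. Real networks such as
# Krizhevsky's stack millions of such units.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
w = [0.0, 0.0]   # weights, one per input
b = 0.0          # bias
lr = 1.0         # learning rate

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))      # sigmoid activation

for _ in range(2000):                       # repeated passes over the data
    for x, target in data:
        err = predict(x) - target           # cross-entropy gradient w.r.t. z
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

print([round(predict(x)) for x, _ in data])  # -> [0, 0, 0, 1]
```

After training, the neuron's rounded outputs match the AND truth table; the "skill" lives entirely in the learned numbers `w` and `b`.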