
TOK Friends: Group items tagged "augmented"


Javier E

Review: Vernor Vinge's 'Fast Times' | KurzweilAI - 0 views

  • Vernor Vinge’s Hugo-award-winning short science fiction story “Fast Times at Fairmont High” takes place in a near future in which everyone lives in a ubiquitous, wireless, networked world using wearable computers and contacts or glasses on which computer graphics are projected to create an augmented reality.
  • So what is life like in Vinge’s 2020? The biggest technological change involves ubiquitous computing, wearables, and augmented reality (although none of those terms are used). Everyone wears contacts or glasses which mediate their view of the world. This allows computer graphics to be superimposed on what they see. The computers themselves are actually built into the clothing (apparently because that is the cheapest way to do it) and everything communicates wirelessly.
  • If you want a computer display, it can appear in thin air, or be attached to a wall or any other surface. If people want to watch TV together they can agree on where the screen should appear and what show they watch. When doing your work, you can have screens on all your walls, menus attached here and there, however you want to organize things. But none of it is "really" there.
  • ...7 more annotations...
  • Does your house need a new coat of paint? Don’t bother, just enter it into your public database and you have a nice new mint green paint job that everyone will see. Want to redecorate? Do it with computer graphics. You can have a birdbath in the front yard inhabited by Disneyesque animals who frolic and play. Even indoors, don’t buy artwork, just download it from the net and have it appear where you want.
  • Got a zit? No need to cover up with Clearasil, just erase it from your public face and people will see the improved version. You can dress up your clothes and hairstyle as well.
  • Of course, anyone can turn off their enhancements and see the plain old reality, but most people don’t bother most of the time because things are ugly that way.
  • Some of the kids attending Fairmont Junior High do so remotely. They appear as "ghosts", indistinguishable from the other kids except that you can walk through them. They go to classes and raise their hands to ask questions just like everyone else. They see the school and everyone at the school sees them. Instead of visiting friends, the kids can all instantly appear at one another’s locations.
  • The computer synthesizing visual imagery is able to call on the localizer network for views beyond what the person is seeing. In this way you can have 360 degree vision, or even see through walls. This is a transparent society with a vengeance!
  • The cumulative effect of all this technology was absolutely amazing and completely believable
  • One thing that was believable is that it seemed that a lot of the kids cheated, and it was almost impossible for the adults to catch them. With universal network connectivity it would be hard to make sure kids are doing their work on their own. I got the impression the school sort of looked the other way, the idea being that as long as the kids solved their problems, even if they got help via the net, that was itself a useful skill that they would be relying on all their lives.
Javier E

Specs that see right through you - tech - 05 July 2011 - New Scientist - 0 views

  • a number of "social X-ray specs" that are set to transform how we interact with each other. By sensing emotions that we would otherwise miss, these technologies can thwart disastrous social gaffes and help us understand each other better.
  • In conversation, we pantomime certain emotions that act as social lubricants. We unconsciously nod to signal that we are following the other person's train of thought, for example, or squint a bit to indicate that we are losing track. Many of these signals can be misinterpreted - sometimes because different cultures have their own specific signals.
  • In 2005, she enlisted Simon Baron-Cohen, also at Cambridge, to help her identify a set of more relevant emotional facial states. They settled on six: thinking, agreeing, concentrating, interested - and, of course, the confused and disagreeing expressions
  • ...16 more annotations...
  • More often, we fail to spot them altogether.
  • To create this lexicon, they hired actors to mime the expressions, then asked volunteers to describe their meaning, taking the majority response as the accurate one.
  • The camera tracks 24 "feature points" on your conversation partner's face, and software developed by Picard analyses their myriad micro-expressions, how often they appear and for how long. It then compares that data with its bank of known expressions (see diagram).
  • Eventually, she thinks the system could be incorporated into a pair of augmented-reality glasses, which would overlay computer graphics onto the scene in front of the wearer.
  • the average person correctly interpreted only 54 per cent of Baron-Cohen's expressions on real, non-acted faces. This suggested to them that most people - not just those with autism - could use some help sensing the mood of people they are talking to.
  • set up a company called Affectiva, based in Waltham, Massachusetts, which is selling their expression recognition software. Their customers include companies that, for example, want to measure how people feel about their adverts or movies.
  • it's hard to fool the machine for long
  • In addition to facial expressions, we radiate a panoply of involuntary "honest signals", a term identified by MIT Media Lab researcher Alex Pentland in the early 2000s to describe the social signals that we use to augment our language. They include body language such as gesture mirroring, and cues such as variations in the tone and pitch of the voice. We do respond to these cues, but often not consciously. If we were more aware of them in others and ourselves, then we would have a fuller picture of the social reality around us, and be able to react more deliberately.
  • develop a small electronic badge that hangs around the neck. Its audio sensors record how aggressive the wearer is being, the pitch, volume and clip of their voice, and other factors. They called it the "jerk-o-meter".
  • it helped people realise when they were being either obnoxious or unduly self-effacing.
  • By the end of the experiment, all the dots had gravitated towards more or less the same size and colour. Simply being able to see their role in a group made people behave differently, and caused the group dynamics to become more even. The entire group's emotional intelligence had increased.
  • Some of our body's responses during a conversation are not designed for broadcast to another person - but it's possible to monitor those too. Your temperature and skin conductance can also reveal secrets about your emotional state, and Picard can tap them with a glove-like device called the Q Sensor. In response to stresses, good or bad, our skin becomes clammy, increasing its conductance, and the Q Sensor picks this up.
  • Physiological responses can now even be tracked remotely, in principle without your consent. Last year, Picard and one of her graduate students showed that it was possible to measure heart rate without any surface contact with the body. They used software linked to an ordinary webcam to read information about heart rate, blood pressure and skin temperature based on, among other things, colour changes in the subject's face. (A minimal sketch of this kind of camera-based pulse estimation appears after this list.)
  • In Rio de Janeiro and São Paulo, police officers can decide whether someone is a criminal just by looking at them. Their glasses scan the features of a face, and match them against a database of criminal mugshots. A red light blinks if there's a match.
  • Thad Starner at Georgia Institute of Technology in Atlanta wears a small device he has built that looks like a monocle. It can retrieve video, audio or text snippets of past conversations with people he has spoken with, and even provide real-time links between past chats and topics he is currently discussing.
  • The US military has built a radar-imaging device that can see through walls to capture 3D images of people and objects beyond.
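The remote pulse-reading result described above has a well-known signal-processing core: spatially average the green channel over the face (the channel most modulated by blood volume), band-pass filter to plausible heart rates, and read off the dominant frequency. Below is a minimal Python sketch of that recipe, assuming NumPy and SciPy; the function name, parameter values and synthetic test clip are invented for illustration and are not taken from Picard's lab.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_bpm(face_frames, fps=30.0):
    """Estimate pulse from a stack of cropped face frames, shape (T, H, W, 3)."""
    # 1. Spatially average the green channel, the one most modulated by blood volume.
    green = face_frames[:, :, :, 1].mean(axis=(1, 2))
    # 2. Remove the mean and band-pass to plausible heart rates (0.7-4 Hz, i.e. 42-240 bpm).
    green = green - green.mean()
    b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
    pulse = filtfilt(b, a, green)
    # 3. The dominant frequency of the filtered signal is the heart-rate estimate.
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fps)
    power = np.abs(np.fft.rfft(pulse)) ** 2
    return freqs[np.argmax(power)] * 60.0  # beats per minute

# Synthetic check: a 10-second "clip" whose green channel oscillates at 1.2 Hz (72 bpm).
t = np.arange(300) / 30.0
frames = np.full((300, 8, 8, 3), 128.0)
frames[:, :, :, 1] += np.sin(2 * np.pi * 1.2 * t)[:, None, None]
print(round(estimate_bpm(frames)))  # -> 72
```

Real systems add face tracking, motion-artifact rejection and multi-channel source separation (e.g. ICA), all of which this sketch omits.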
Javier E

The Not-So-Distant Future When We Can All Upgrade Our Brains - Alexis C. Madrigal - The... - 0 views

  • "Magna Cortica is the argument that we need to have a guidebook for both the design spec and ethical rules around the increasing power and diversity of cognitive augmentation," said IFTF distinguished fellow, Jamais Cascio. "There are a lot of pharmaceutical and digital tools that have been able to boost our ability to think. Adderall, Provigil, and extra-cortical technologies."
  • Back in 2008, 20 percent of scientists reported using brain-enhancing drugs. And I spoke with dozens of readers who had complex regimens, including, for example, a researcher at the MIT-affiliated Whitehead Institute for Biomedical Research. "We aren't the teen clubbers popping uppers to get through a hard day running a cash register after binge drinking," the researcher told me. "We are responsible humans." Responsible humans trying to get an edge in incredibly competitive and cognitively demanding fields. 
  • part of Google Glass's divisiveness stems from its prospective ability to enhance one's social awareness or provide contextual help in conversations; the company Social Radar has already released an app for Glass that shows social network information for people who are in the same location as you are. A regular app called MindMeld listens to conference calls and provides helpful links based on what the software hears you talking about.
  • ...2 more annotations...
  • These are not questions that can be answered by the development of the technologies. They require new social understandings. "What are the things we want to see happen?" Cascio asked. "What are the things we should and should not do?"
  • he floated five simple principles: 1. The right to self-knowledge 2. The right to self-modification 3. The right to refuse modification 4. The right to modify/refuse to modify your children 5. The right to know who has been modified
kushnerha

Which Type of Exercise Is Best for the Brain? - The New York Times - 1 views

  • Some forms of exercise may be much more effective than others at bulking up the brain, according to a remarkable new study in rats. For the first time, scientists compared head-to-head the neurological impacts of different types of exercise: running, weight training and high-intensity interval training. The surprising results suggest that going hard may not be the best option for long-term brain health.
  • exercise changes the structure and function of the brain. Studies in animals and people have shown that physical activity generally increases brain volume and can reduce the number and size of age-related holes in the brain’s white and gray matter.
  • Exercise also, and perhaps most resonantly, augments adult neurogenesis, which is the creation of new brain cells in an already mature brain. In studies with animals, exercise, in the form of running wheels or treadmills, has been found to double or even triple the number of new neurons that appear afterward in the animals’ hippocampus, a key area of the brain for learning and memory, compared to the brains of animals that remain sedentary. Scientists believe that exercise has similar impacts on the human hippocampus.
  • ...7 more annotations...
  • These past studies of exercise and neurogenesis understandably have focused on distance running. Lab rodents know how to run. But whether other forms of exercise likewise prompt increases in neurogenesis has been unknown and is an issue of increasing interest
  • new study, which was published this month in the Journal of Physiology, researchers at the University of Jyvaskyla in Finland and other institutions gathered a large group of adult male rats. The researchers injected the rats with a substance that marks new brain cells and then set groups of them to an array of different workouts, with one group remaining sedentary to serve as controls.
  • They found very different levels of neurogenesis, depending on how each animal had exercised. Those rats that had jogged on wheels showed robust levels of neurogenesis. Their hippocampal tissue teemed with new neurons, far more than in the brains of the sedentary animals. The greater the distance that a runner had covered during the experiment, the more new cells its brain now contained. There were far fewer new neurons in the brains of the animals that had completed high-intensity interval training. They showed somewhat higher amounts than in the sedentary animals but far less than in the distance runners. And the weight-training rats, although they were much stronger at the end of the experiment than they had been at the start, showed no discernible augmentation of neurogenesis. Their hippocampal tissue looked just like that of the animals that had not exercised at all.
  • “sustained aerobic exercise might be most beneficial for brain health also in humans.”
  • Just why distance running was so much more potent at promoting neurogenesis than the other workouts is not clear, although Dr. Nokia and her colleagues speculate that distance running stimulates the release of a particular substance in the brain known as brain-derived neurotrophic factor that is known to regulate neurogenesis. The more miles an animal runs, the more B.D.N.F. it produces. Weight training, on the other hand, while extremely beneficial for muscular health, has previously been shown to have little effect on the body’s levels of B.D.N.F.
  • As for high-intensity interval training, its potential brain benefits may be undercut by its very intensity, Dr. Nokia said. It is, by intent, much more physiologically draining and stressful than moderate running, and “stress tends to decrease adult hippocampal neurogenesis,” she said.
  • These results do not mean, however, that only running and similar moderate endurance workouts strengthen the brain, Dr. Nokia said. Those activities do seem to prompt the most neurogenesis in the hippocampus. But weight training and high-intensity intervals probably lead to different types of changes elsewhere in the brain. They might, for instance, encourage the creation of additional blood vessels or new connections between brain cells or between different parts of the brain.
Javier E

Martha C. Nussbaum and David V. Johnson: The New Religious Intolerance - 2 views

  • DJ: You analyze fear as the emotion principally responsible for religious intolerance. You label fear the “narcissistic emotion.” But why think that the logic of fear—erring on the side of caution (“better to be safe than sorry”)—is narcissism rather than just good common sense, especially in an era of global terrorism and instability? MN: Biological and psychological research on fear shows that it is in some respects more primitive than other emotions, involving parts of the brain that do not deal in reflection and balancing. It also focuses narrowly on the person’s own survival, which is useful in evolutionary terms, but not so useful if one wants a good society. These tendencies to narrowness can be augmented, as I show in my book, through rhetorical manipulation. Fear is a major source of the denial of equal respect to others. Fear is sometimes appropriate, of course, and I give numerous examples of this. But its tendencies toward narrowness make it easily manipulable by false information and rhetorical hype.
  • DJ: In comparing fear and empathy, you say that empathy “has its own narcissism.” Do all emotions have their own forms of narcissism, and if so, why call fear "a narcissistic emotion"? MN: What I meant by my remarks about empathy is that empathy typically functions within a small circle, and is activated by vivid narratives, as Daniel Batson’s wonderful research has shown. So it is uneven and partial. But it is not primarily self-focused, as fear is. As John Stuart Mill said, fear tells us what we need to protect against for ourselves, and empathy helps us extend that protection to others.
  • MN: I think it’s OK to teach religious texts as literature, but better to teach them as history and social reality as part of learning what other people in one’s society believe and take seriously. I urge that all young people should get a rich and non-stereotypical understanding of all the major world religions. In the process, of course, the teacher must be aware of the multiplicity of interpretations and sects within each religion
  • ...8 more annotations...
  • DJ: Of the basic values of French liberalism—liberty, equality, and fraternity—the last, fraternity, always seems to get short shrift. Your book, by contrast, argues that religious tolerance and liberalism in general can only flourish if people cultivate active respect, civility, and civic friendship with their fellow citizens. If this is so crucial, why do traditional liberals fail to make it more central to their program?
  • MN: I think liberals associate the cultivation of public emotion with fascism and other illiberal ideologies. But if they study history more closely they will find many instances in which emotions are deliberately cultivated in the service of liberal ideals. My next book, Political Emotions, will study all of this in great detail. Any political principles that ask people to go beyond their own self-interest for the sake of justice require the cultivation of emotion.
  • we should confront sexism by argument and persuasion, and that to render all practices that objectify women illegal would be both too difficult (who would judge?) and too tyrannical.
  • critics of the burqa typically look at the practices of others and find sexism and “objectification” of women there, while failing to look at the practices of the dominant culture, which are certainly suffused with sexism and objectification. I was one of the feminist philosophers who wrote about objectification as a fundamental problem, and what we were talking about was the portrayal of women as commodities for male use and control in violent pornography, in a great deal of our media culture, and in other cultural practices, such as plastic surgery. I would say that this type of objectification is not on the retreat but may even be growing. Go to a high school dance—even at a high-brow school such as the John Dewey Laboratory School on our campus [at the University of Chicago]—and you will see highly individual and intelligent teenage girls marketing themselves for male consumption in indistinguishable microskirts, prior to engaging in a form of group dancing that mimes sex, and effaces their individuality. (Boys wear regular and not particularly sexy clothing.)
  • Lots of bad things are and will remain legal: unkindness, emotional blackmail, selfishness. And though I think the culture of pornographic objectification does great damage to personal relations, I don’t think that legal bans are the answer.
  • In the history of philosophy this was well understood, and figures as diverse as [Jean-Jacques] Rousseau, [Johann Gottfried von] Herder, [Giuseppe] Mazzini, Auguste Comte, John Stuart Mill, and John Rawls had a lot to say about the issue. In Mill’s case, he set about solving the problem posed by the confluence of liberalism and emotion: how can a society that cultivates emotion to support its political principles also preserve enough space for dissent, critique, and experimentation? My own proposal in the forthcoming book follows the lead of Mill—and, in India, of Rabindranath Tagore—and tries to show how a public culture of emotions, supporting the stability of good political principles, can also be liberal and protective of dissent. Some of the historical figures I study in this regard are Franklin Delano Roosevelt, Martin Luther King, Jr., Gandhi, and Nehru.
  • the Palin reaction was a whole lot better than the standard reaction in Europe, which is that we should just ban things that we fear. It is really unbelievable, having just lectured on this topic here in Germany: my views, which are pretty mainstream in America, are found “extreme” and even “offensive” in Germany, and all sorts of quite refined people think that Islam poses a unique problem and that the law should be dragged in to protect the culture.
  • The problem with these Europeans is that they don’t want to ban platform shoes or spike heels either; they just want to ban practices of others which they have never tried to understand.
sgardner35

Ray Kurzweil: Humans will be hybrids by 2030 - Jun. 3, 2015 - 0 views

  • That means our brains will be able to connect directly to the cloud, where there will be thousands of computers, and those computers will augment our existing intelligence. He said the brain will connect via nanobots -- tiny robots made from DNA strands.
  • For those concerned with artificial intelligence taking over the world, Kurzweil said we have a moral imperative to keep developing the technology while controlling for potential dangers.
  • The bigger and more complex the cloud, the more advanced our thinking. By the time we get to the late 2030s or the early 2040s, Kurzweil believes our thinking will be predominately non-biological.
Javier E

Unease for What Microsoft's HoloLens Will Mean for Our Screen-Obsessed Lives - NYTimes.com - 0 views

  • What is it about our current reality that is so insufficient that we feel compelled to augment or improve it? I understand why people bury themselves in their phones on elevator rides, on subways and in the queue for coffee, but it has gotten to the point where even our distractions require distractions. No media viewing experience seems complete without a second screen, where we can yammer with our friends on social media or in instant messages about what we are watching.
  • Every form of media is now companion media, none meriting a single, acute focus. We are either the most bored people in the history of our species or the ubiquity of distractions has made us act that way.
  • As adults, we make “friends” who are not actually friends, develop “followers” composed of people who would not follow us out of a room, and “like” things whether we really like them or not. We no longer even have to come up with a good line at a bar to meet someone. We already know he or she swiped right after seeing us on Tinder, so the social risk is low.
  • ...1 more annotation...
  • If Windows or something like it becomes the operating system not just for my desktop but for my world, how much will I actually have to venture out into it? I can have holographic conferences with my colleagues, virtually ski the KT-22 runs at Squaw Valley in California during my downtime and ask my virtual assistant to run my day, my house and my life. After all, I already talk to my phone and it talks back to me. We are BFFs, even though only one of us is actually human.
Javier E

Is our world a simulation? Why some scientists say it's more likely than not | Technolo... - 3 views

  • Musk is just one of the people in Silicon Valley to take a keen interest in the “simulation hypothesis”, which argues that what we experience as reality is actually a giant computer simulation created by a more sophisticated intelligence
  • Oxford University’s Nick Bostrom in 2003 (although the idea dates back as far as the 17th-century philosopher René Descartes). In a paper titled “Are You Living In a Simulation?”, Bostrom suggested that members of an advanced “posthuman” civilization with vast computing power might choose to run simulations of their ancestors in the universe.
  • If we believe that there is nothing supernatural about what causes consciousness and it’s merely the product of a very complex architecture in the human brain, we’ll be able to reproduce it. “Soon there will be nothing technical standing in the way to making machines that have their own consciousness,
  • ...14 more annotations...
  • At the same time, videogames are becoming more and more sophisticated and in the future we’ll be able to have simulations of conscious entities inside them.
  • “Forty years ago we had Pong – two rectangles and a dot. That’s where we were. Now 40 years later, we have photorealistic, 3D simulations with millions of people playing simultaneously and it’s getting better every year. And soon we’ll have virtual reality, we’ll have augmented reality,” said Musk. “If you assume any rate of improvement at all, then the games will become indistinguishable from reality.”
  • “If one progresses at the current rate of technology a few decades into the future, very quickly we will be a society where there are artificial entities living in simulations that are much more abundant than human beings.
  • If there are many more simulated minds than organic ones, then the chances of us being among the real minds starts to look more and more unlikely. As Terrile puts it: “If in the future there are more digital people living in simulated environments than there are today, then what is to say we are not part of that already?”
  • Reasons to believe that the universe is a simulation include the fact that it behaves mathematically and is broken up into pieces (subatomic particles) like a pixelated video game. “Even things that we think of as continuous – time, energy, space, volume – all have a finite limit to their size. If that’s the case, then our universe is both computable and finite. Those properties allow the universe to be simulated,” Terrile said
  • “Is it logically possible that we are in a simulation? Yes. Are we probably in a simulation? I would say no,” said Max Tegmark, a professor of physics at MIT.
  • “In order to make the argument in the first place, we need to know what the fundamental laws of physics are where the simulations are being made. And if we are in a simulation then we have no clue what the laws of physics are. What I teach at MIT would be the simulated laws of physics,”
  • Terrile believes that recognizing that we are probably living in a simulation is as game-changing as Copernicus realizing that the Earth was not the center of the universe. “It was such a profound idea that it wasn’t even thought of as an assumption,”
  • That we might be in a simulation is, Terrile argues, a simpler explanation for our existence than the idea that we are the first generation to rise up from primordial ooze and evolve into molecules, biology and eventually intelligence and self-awareness. The simulation hypothesis also accounts for peculiarities in quantum mechanics, particularly the measurement problem, whereby things only become defined when they are observed.
  • “For decades it’s been a problem. Scientists have bent over backwards to eliminate the idea that we need a conscious observer. Maybe the real solution is you do need a conscious entity like a conscious player of a video game,
  • How can the hypothesis be put to the test
  • scientists can look for hallmarks of simulation. “Suppose someone is simulating our universe – it would be very tempting to cut corners in ways that makes the simulation cheaper to run. You could look for evidence of that in an experiment,” said Tegmark
  • First, it provides a scientific basis for some kind of afterlife or larger domain of reality above our world. “You don’t need a miracle, faith or anything special to believe it. It comes naturally out of the laws of physics,”
  • it means we will soon have the same ability to create our own simulations. “We will have the power of mind and matter to be able to create whatever we want and occupy those worlds.”
Javier E

While You Were Sleeping - The New York Times - 0 views

  • look at where we are today thanks to artificial intelligence from digital computers — and the amount of middle-skill and even high-skill work they’re supplanting — and then factor in how all of this could be supercharged in a decade by quantum computing.
  • In December 2016, Amazon announced plans for the Amazon Go automated grocery store, in which a combination of computer vision and deep-learning technologies tracks items and charges customers only when they remove the items from the store. In February 2017, Bank of America began testing three ‘employee-less’ branch locations that offer full-service banking automatically, with access to a human, when necessary, via video teleconference.”
  • This will be a challenge for developed countries, but even more so for countries like Egypt, Pakistan, Iran, Syria, Saudi Arabia, China and India — where huge numbers of youths are already unemployed because they lack the education for even this middle-skill work that’s now being automated.
  • ...4 more annotations...
  • “Some jobs will be displaced, but 100 percent of jobs will be augmented by A.I.,” added Rometty. Technology companies “are inventing these technologies, so we have the responsibility to help people adapt to it — and I don’t mean just giving them tablets or P.C.s, but lifelong learning systems.”
  • Each time work gets outsourced or tasks get handed off to a machine, “we must reach up and learn a new skill or in some ways expand our capabilities as humans in order to fully realize our collaborative potential,” McGowan said.
  • Therefore, education needs to shift “from education as a content transfer to learning as a continuous process where the focused outcome is the ability to learn and adapt with agency as opposed to the transactional action of acquiring a set skill,
  • “Instructors/teachers move from guiding and accessing that transfer process to providing social and emotional support to the individual as they move into the role of driving their own continuous learning.”
Javier E

The Problem With History Classes - The Atlantic - 3 views

  • The passion and urgency with which these battles are fought reflect the misguided way history is taught in schools. Currently, most students learn history as a set narrative—a process that reinforces the mistaken idea that the past can be synthesized into a single, standardized chronicle of several hundred pages. This teaching pretends that there is a uniform collective story, which is akin to saying everyone remembers events the same.
  • Yet, history is anything but agreeable. It is not a collection of facts deemed to be "official" by scholars on high. It is a collection of historians exchanging different, often conflicting analyses.
  • rather than vainly seeking to transcend the inevitable clash of memories, American students would be better served by descending into the bog of conflict and learning the many "histories" that compose the American national story.
  • ...18 more annotations...
  • Perhaps Fisher offers the nation an opportunity to divorce, once and for all, memory from history. History may be an attempt to memorialize and preserve the past, but it is not memory; memories can serve as primary sources, but they do not stand alone as history. A history is essentially a collection of memories, analyzed and reduced into meaningful conclusions—but that collection depends on the memories chosen.
  • Memories make for a risky foundation: As events recede further into the past, the facts are distorted or augmented by entirely new details
  • people construct unique memories while informing perfectly valid histories. Just as there is a plurality of memories, so, too, is there a plurality of histories.
  • Scholars who read a diverse set of historians who are all focused on the same specific period or event are engaging in historiography
  • This approach exposes textbooks as nothing more than a compilation of histories that the authors deemed to be most relevant and useful.
  • In historiography, the barrier between historian and student is dropped, exposing a conflict-ridden landscape. A diplomatic historian approaches an event from the perspective of the most influential statesmen (who are most often white males), analyzing the context, motives, and consequences of their decisions. A cultural historian peels back the objects, sights, and sounds of a period to uncover humanity’s underlying emotions and anxieties. A Marxist historian adopts the lens of class conflict to explain the progression of events. There are intellectual historians, social historians, and gender historians, among many others. Historians studying the same topic will draw different interpretations—sometimes radically so, depending on the sources they draw from
  • Jacoba Urist points out that history is "about explaining and interpreting past events analytically." If students are really to learn and master these analytical tools, then it is absolutely essential that they read a diverse set of historians and learn how brilliant men and women who are scrutinizing the same topic can reach different conclusions
  • Rather than constructing a curriculum based on the muddled consensus of boards, legislatures, and think tanks, schools should teach students history through historiography. The shortcomings of one historian become apparent after reading the work of another one on the list.
  • Although, as Urist notes, the AP course is "designed to teach students to think like historians," my own experience in that class suggests that it fails to achieve that goal.
  • The course’s framework has always served as an outline of important concepts aiming to allow educators flexibility in how to teach; it makes no reference to historiographical conflicts. Historiography was an epiphany for me because I had never before come face-to-face with how historians think and reason
  • When I took AP U.S. History, I jumbled these diverse histories into one indistinct narrative. Although the test involved open-ended essay questions, I was taught that graders were looking for a firm thesis—forcing students to adopt a side. The AP test also, unsurprisingly, rewards students who cite a wealth of supporting details
  • By the time I took the test in 2009, I was a master at "checking boxes," weighing political factors equally against those involving socioeconomics and ensuring that previously neglected populations like women and ethnic minorities received their due. I did not know that I was pulling ideas from different historiographical traditions. I still subscribed to the idea of a prevailing national narrative and served as an unwitting sponsor of synthesis, oblivious to the academic battles that made such synthesis impossible.
  • Although there may be an inclination to seek to establish order where there is chaos, that urge must be resisted in teaching history. Public controversies over memory are hardly new. Students must be prepared to confront divisiveness, not conditioned to shoehorn agreement into situations where none is possible
  • When conflict is accepted rather than resisted, it becomes possible for different conceptions of American history to co-exist. There is no longer a need to appoint a victor.
  • More importantly, the historiographical approach avoids pursuing truth for the sake of satisfying a national myth
  • The country’s founding fathers crafted some of the finest expressions of personal liberty and representative government the world has ever seen; many of them also held fellow humans in bondage. This paradox is only a problem if the goal is to view the founding fathers as faultless, perfect individuals. If multiple histories are embraced, no one needs to fear that one history will be lost.
  • History is not indoctrination. It is a wrestling match. For too long, the emphasis has been on pinning the opponent. It is time to shift the focus to the struggle itself
  • There is no better way to use the past to inform the present than by accepting the impossibility of a definitive history—and by ensuring that current students are equipped to grapple with the contested memories in their midst.
Javier E

FaceApp helped a middle-aged man become a popular younger woman. His fan base has never... - 1 views

  • Soya’s fame illustrated a simple truth: that social media is less a reflection of who we are, and more a performance of who we want to be.
  • It also seemed to herald a darker future where our fundamental senses of reality are under siege: The AI that allows anyone to fabricate a face can also be used to harass women with “deepfake” pornography, invent fraudulent LinkedIn personas and digitally impersonate political enemies.
  • As the photos began receiving hundreds of likes, Soya’s personality and style began to come through. She was relentlessly upbeat. She never sneered or bickered or trolled. She explored small towns, savored scenic vistas, celebrated roadside restaurants’ simple meals.
  • ...25 more annotations...
  • She took pride in the basic things, like cleaning engine parts. And she only hinted at the truth: When one fan told her in October, “It’s great to be young,” Soya replied, “Youth does not mean a certain period of life, but how to hold your heart.”
  • She seemed, well, happy, and FaceApp had made her that way. Creating the lifelike impostor had taken only a few taps: He changed the “Gender” setting to “Female,” the “Age” setting to “Teen,” and the “Impression” setting — a mix of makeup filters — to a glamorous look the app calls “Hollywood.”
  • Users in the Internet’s early days rarely had any presumptions of authenticity, said Melanie C. Green, a University of Buffalo professor who studies technology and social trust. Most people assumed everyone else was playing a character clearly distinguished from their real life.
  • Nakajima grew his shimmering hair below his shoulders and raided his local convenience store for beauty supplies he thought would make the FaceApp images more convincing: blushes, eyeliners, concealers, shampoos.
  • “When I compare how I feel when I started to tweet as a woman and now, I do feel that I’m gradually gravitating toward this persona … this fantasy world that I created,” Nakajima said. “When I see photos of what I tweeted, I feel like, ‘Oh. That’s me.’ ”
  • The sensation Nakajima was feeling is so common that there’s a term for it: the Proteus effect, named for the shape-shifting Greek god. Stanford University researchers first coined it in 2007 to describe how people inhabiting the body of a digital avatar began to act the part
  • People made to appear taller in virtual-reality simulations acted more assertively, even after the experience ended. Prettier characters began to flirt.
  • What is it about online disguises? Why are they so good at bending people’s sense of self-perception?
  • they tap into this “very human impulse to play with identity and pretend to be someone you’re not.”
  • Soya pouted and scowled on rare occasions when Nakajima himself felt frustrated. But her baseline expression was an extra-wide smile, activated with a single tap.
  • “This identity play was considered one of the huge advantages of being online,” Green said. “You could switch your gender and try on all of these different personas. It was a playground for people to explore.”
  • But wasn’t this all just a big con? Nakajima had tricked people with a “cool girl” stereotype to boost his Twitter numbers. He hadn’t elevated the role of women in motorcycling; if anything, he’d supplanted them. And the character he’d created was paper thin: Soya had no internal complexity outside of what Nakajima had projected, just that eternally superimposed smile.
  • The Web’s big shift from text to visuals — the rise of photo-sharing apps, live streams and video calls — seemed at first to make that unspoken rule of real identities concrete. It seemed too difficult to fake one’s appearance when everyone’s face was on constant display.
  • Now, researchers argue, advances in image-editing artificial intelligence have done for the modern Internet what online pseudonyms did for the world’s first chat rooms. Facial filters have allowed anyone to mold themselves into the character they want to play.
  • researchers fear these augmented reality tools could end up distorting the beauty standards and expectations of actual reality.
  • Some political and tech theorists worry this new world of synthetic media threatens to detonate our concept of truth, eroding our shared experiences and infusing every online relationship with suspicion and self-doubt.
  • Deceptive political memes, conspiracy theories, anti-vaccine hoaxes and other scams have torn the fabric of our democracy, culture and public health.
  • But she also thinks about her kids, who assume “that everything online is fabricated,” and wonders whether the rules of online identity require a bit more nuance — and whether that generational shift is already underway.
  • “Bots pretending to be people, automated representations of humanity — that, they perceive as exploitative,” she said. “But if it’s just someone engaging in identity experimentation, they’re like: ‘Yeah, that’s what we’re all doing.'
  • To their generation, “authenticity is not about: ‘Does your profile picture match your real face?’ Authenticity is: ‘Is your voice your voice?’
  • “Their feeling is: ‘The ideas are mine. The voice is mine. The content is mine. I’m just looking for you to receive it without all the assumptions and baggage that comes with it.’ That’s the essence of a person’s identity. That’s who they really are.”
  • It wasn’t until the rise of giant social networks like Facebook — which used real identities to, among other things, supercharge targeted advertising — that this big game of pretend gained an air of duplicity. Spaces for playful performance shrank, and the biggest Internet watering holes began demanding proof of authenticity as a way to block out malicious intent.
  • Perhaps he should have accepted his irrelevance and faded into the digital sunset, sharing his life for few to see. But some of Soya’s followers have said they never felt deceived: It was Nakajima — his enthusiasm, his attitude about life — they’d been charmed by all along. “His personality,” as one Twitter follower said, “shined through.”
  • In Nakajima’s mind, he’d used the tools of a superficial medium to craft genuine connections. He had not felt real until he had become noticed for being fake.
  • Nakajima said he doesn’t know how long he’ll keep Soya alive. But he said he’s grateful for the way she helped him feel: carefree, adventurous, seen.
peterconnelly

AI model's insight helps astronomers propose new theory for observing far-off worlds | ... - 0 views

  • Machine learning models are increasingly augmenting human processes, either performing repetitious tasks faster or providing some systematic insight that helps put human knowledge in perspective.
  • Astronomers at UC Berkeley were surprised to find both happen after modeling gravitational microlensing events, leading to a new unified theory for the phenomenon.
  • Gravitational lensing occurs when light from far-off stars and other stellar objects bends around a nearer one directly between it and the observer, briefly giving a brighter — but distorted — view of the farther one.
  • ...7 more annotations...
  • Ambiguities are often reconciled with other observed data, such as that we know by other means that the planet is too small to cause the scale of distortion seen.
  • “The two previous theories of degeneracy deal with cases where the background star appears to pass close to the foreground star or the foreground planet. The AI algorithm showed us hundreds of examples from not only these two cases, but also situations where the star doesn’t pass close to either the star or planet and cannot be explained by either previous theory,” said Zhang in a Berkeley news release. (A toy numerical illustration of this kind of degeneracy appears after this list.)
  • But without the systematic and confident calculations of the AI, it’s likely the simplified, less correct theory would have persisted for many more years.
  • As a result — and after some convincing, since a grad student questioning established doctrine is tolerated but perhaps not encouraged — they ended up proposing a new, “unified” theory of how degeneracy in these observations can be explained, of which the two known theories were simply the most common cases.
  • “People were seeing these microlensing events, which actually were exhibiting this new degeneracy but just didn’t realize it. It was really just the machine learning looking at thousands of events where it became impossible to miss,” said Scott Gaudi
  • But Zhang seemed convinced that the AI had clocked something that human observers had systematically overlooked.
  • Just as people learned to trust calculators and later computers, we are learning to trust some AI models to output an interesting truth clear of preconceptions and assumptions — that is, if we haven’t just coded our own preconceptions and assumptions into them.
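To make "degeneracy" concrete: in the standard single-lens (Paczynski) model, the magnification is A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)) with separation u(t) = sqrt(u0^2 + ((t - t0)/tE)^2), and the observed flux with blend fraction fs is F = fs*A + (1 - fs). The Python sketch below is a toy illustration with invented parameter values, not the paper's actual analysis: two physically different configurations produce light curves that agree to within a couple of per cent across the peak, i.e. within typical photometric noise.

```python
import numpy as np

def paczynski_flux(t, t0, tE, u0, fs):
    """Observed flux (baseline 1) of a single-lens event with blend fraction fs."""
    u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)   # projected lens-source separation
    A = (u**2 + 2) / (u * np.sqrt(u**2 + 4))    # point-lens magnification
    return fs * A + (1 - fs)                    # blending dilutes the magnified light

t = np.linspace(-1.0, 1.0, 2001)  # days around the peak
f1 = paczynski_flux(t, 0.0, tE=20.0, u0=0.010, fs=1.0)  # unblended event
f2 = paczynski_flux(t, 0.0, tE=40.0, u0=0.005, fs=0.5)  # blended look-alike

# The curves agree to ~2% across this window even though tE and u0 differ
# by a factor of two between the two models.
print(f"max fractional difference: {np.max(np.abs(f1 - f2) / f1):.3f}")
```

Distinguishing such look-alikes is exactly the inverse problem where the machine-learning model surfaced configurations the two established theories did not cover.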
peterconnelly

Google's I/O Conference Offers Modest Vision of the Future - The New York Times - 0 views

  • SAN FRANCISCO — There was a time when Google offered a wondrous vision of the future, with driverless cars, augmented-reality eyewear, unlimited storage of emails and photos, and predictive texts to complete sentences in progress.
  • The bold vision is still out there — but it’s a ways away. The professional executives who now run Google are increasingly focused on wringing money out of those years of spending on research and development.
  • The company’s biggest bet in artificial intelligence does not, at least for now, mean science fiction come to life. It means more subtle changes to existing products.
  • ...2 more annotations...
  • At the same time, it was not immediately clear how some of the other groundbreaking work, like language models that better understand natural conversation or that can break down a task into logical smaller steps, will ultimately lead to the next generation of computing that Google has touted.
  • Much of those capabilities are powered by the deep technological work Google has done for years using so-called machine learning, image recognition and natural language understanding. It’s a sign of an evolution rather than revolution for Google and other large tech giants.
Javier E

Silicon Valley's Safe Space - The New York Times - 0 views

  • The roots of Slate Star Codex trace back more than a decade to a polemicist and self-described A.I. researcher named Eliezer Yudkowsky, who believed that intelligent machines could end up destroying humankind. He was a driving force behind the rise of the Rationalists.
  • Because the Rationalists believed A.I. could end up destroying the world — a not entirely novel fear to anyone who has seen science fiction movies — they wanted to guard against it. Many worked for and donated money to MIRI, an organization created by Mr. Yudkowsky whose stated mission was “A.I. safety.”
  • The community was organized and close-knit. Two Bay Area organizations ran seminars and high-school summer camps on the Rationalist way of thinking.
  • ...27 more annotations...
  • “The curriculum covers topics from causal modeling and probability to game theory and cognitive science,” read a website promising teens a summer of Rationalist learning. “How can we understand our own reasoning, behavior, and emotions? How can we think more clearly and better achieve our goals?”
  • Some lived in group houses. Some practiced polyamory. “They are basically just hippies who talk a lot more about Bayes’ theorem than the original hippies,” said Scott Aaronson, a University of Texas professor who has stayed in one of the group houses. (A minimal example of that kind of Bayesian updating appears after this list.)
  • For Kelsey Piper, who embraced these ideas in high school, around 2010, the movement was about learning “how to do good in a world that changes very rapidly.”
  • Yes, the community thought about A.I., she said, but it also thought about reducing the price of health care and slowing the spread of disease.
  • Slate Star Codex, which sprung up in 2013, helped her develop a “calibrated trust” in the medical system. Many people she knew, she said, felt duped by psychiatrists, for example, who they felt weren’t clear about the costs and benefits of certain treatment.
  • That was not the Rationalist way.
  • “There is something really appealing about somebody explaining where a lot of those ideas are coming from and what a lot of the questions are,” she said.
  • Sam Altman, chief executive of OpenAI, an artificial intelligence lab backed by a billion dollars from Microsoft. He was effusive in his praise of the blog.It was, he said, essential reading among “the people inventing the future” in the tech industry.
  • Mr. Altman, who had risen to prominence as the president of the start-up accelerator Y Combinator, moved on to other subjects before hanging up. But he called back. He wanted to talk about an essay that appeared on the blog in 2014. The essay was a critique of what Mr. Siskind, writing as Scott Alexander, described as “the Blue Tribe.” In his telling, these were the people at the liberal end of the political spectrum whose characteristics included “supporting gay rights” and “getting conspicuously upset about sexists and bigots.”
  • But as the man behind Slate Star Codex saw it, there was one group the Blue Tribe could not tolerate: anyone who did not agree with the Blue Tribe. “Doesn’t sound quite so noble now, does it?” he wrote.
  • Mr. Altman thought the essay nailed a big problem: In the face of the “internet mob” that guarded against sexism and racism, entrepreneurs had less room to explore new ideas. Many of their ideas, such as intelligence augmentation and genetic engineering, ran afoul of the Blue Tribe.
  • Mr. Siskind was not a member of the Blue Tribe. He was not a voice from the conservative Red Tribe (“opposing gay marriage,” “getting conspicuously upset about terrorists and commies”). He identified with something called the Grey Tribe — as did many in Silicon Valley.
  • The Grey Tribe was characterized by libertarian beliefs, atheism, “vague annoyance that the question of gay rights even comes up,” and “reading lots of blogs,” he wrote. Most significantly, it believed in absolute free speech.
  • The essay on these tribes, Mr. Altman told me, was an inflection point for Silicon Valley. “It was a moment that people talked about a lot, lot, lot,” he said.
  • And in some ways, two of the world’s prominent A.I. labs — organizations that are tackling some of the tech industry’s most ambitious and potentially powerful projects — grew out of the Rationalist movement.
  • In 2005, Peter Thiel, the co-founder of PayPal and an early investor in Facebook, befriended Mr. Yudkowsky and gave money to MIRI. In 2010, at Mr. Thiel’s San Francisco townhouse, Mr. Yudkowsky introduced him to a pair of young researchers named Shane Legg and Demis Hassabis. That fall, with an investment from Mr. Thiel’s firm, the two created an A.I. lab called DeepMind.
  • Like the Rationalists, they believed that A.I could end up turning against humanity, and because they held this belief, they felt they were among the only ones who were prepared to build it in a safe way.
  • In 2014, Google bought DeepMind for $650 million. The next year, Elon Musk — who also worried A.I. could destroy the world and met his partner, Grimes, because they shared an interest in a Rationalist thought experiment — founded OpenAI as a DeepMind competitor. Both labs hired from the Rationalist community.
  • Mr. Aaronson, the University of Texas professor, was turned off by the more rigid and contrarian beliefs of the Rationalists, but he is one of the blog’s biggest champions and deeply admired that it didn’t avoid live-wire topics.
  • “It must have taken incredible guts for Scott to express his thoughts, misgivings and questions about some major ideological pillars of the modern world so openly, even if protected by a quasi-pseudonym,” he said
  • In late June of last year, not long after talking to Mr. Altman, the OpenAI chief executive, I approached the writer known as Scott Alexander, hoping to get his views on the Rationalist way and its effect on Silicon Valley. That was when the blog vanished.
  • The issue, it was clear to me, was that I told him I could not guarantee him the anonymity he’d been writing with. In fact, his real name was easy to find because people had shared it online for years and he had used it on a piece he’d written for a scientific journal. I did a Google search for Scott Alexander and one of the first results I saw in the auto-complete list was Scott Alexander Siskind.
  • More than 7,500 people signed a petition urging The Times not to publish his name, including many prominent figures in the tech industry. “Putting his full name in The Times,” the petitioners said, “would meaningfully damage public discourse, by discouraging private citizens from sharing their thoughts in blog form.” On the internet, many in Silicon Valley believe, everyone has the right not only to say what they want but to say it anonymously.
  • I spoke with Manoel Horta Ribeiro, a computer science researcher who explores social networks at the Swiss Federal Institute of Technology in Lausanne. He was worried that Slate Star Codex, like other communities, was allowing extremist views to trickle into the influential tech world. “A community like this gives voice to fringe groups,” he said. “It gives a platform to people who hold more extreme views.”
  • I assured her my goal was to report on the blog, and the Rationalists, with rigor and fairness. But she felt that discussing both critics and supporters could be unfair. What I needed to do, she said, was somehow prove statistically which side was right.
  • When I asked Mr. Altman if the conversation on sites like Slate Star Codex could push people toward toxic beliefs, he said he held “some empathy” for these concerns. But, he added, “people need a forum to debate ideas.”
  • In August, Mr. Siskind restored his old blog posts to the internet. And two weeks ago, he relaunched his blog on Substack, a company with ties to both Andreessen Horowitz and Y Combinator. He gave the blog a new title: Astral Codex Ten. He hinted that Substack paid him $250,000 for a year on the platform. And he indicated the company would give him all the protection he needed.
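As background to the Bayes' theorem the community name-checks: it is the rule for updating a belief on evidence, P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]. A minimal Python sketch with invented numbers, using the classic base-rate example:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# A 90%-sensitive test with a 9% false-positive rate, applied to a
# condition with a 1% base rate:
posterior = bayes_update(prior=0.01, p_e_given_h=0.90, p_e_given_not_h=0.09)
print(f"P(condition | positive test) = {posterior:.1%}")  # -> 9.2%
```

The point Rationalists stress is the one the example shows: with a 1 per cent base rate, even a fairly accurate test leaves the posterior near 9 per cent, far below the test's 90 per cent sensitivity.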
Javier E

'The Godfather of AI' Quits Google and Warns of Danger Ahead - The New York Times - 0 views

  • he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
  • Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
  • “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,”
  • ...24 more annotations...
  • Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
  • But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
  • “It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
  • After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I technologies pose “profound risks to society and humanity.
  • Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.
  • Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job.
  • Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work. [A toy illustration of such a network appears after this list.]
  • Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”
  • Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.
  • In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
  • In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.
  • Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others.
  • “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
  • As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
  • Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
  • His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
  • He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
  • Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.
  • And he fears a day when truly autonomous weapons — those killer robots — become reality.
  • “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
  • Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
  • But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
  • Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
  • He does not say that anymore.
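
A neural network of the kind described in the annotations above can be demonstrated in a few dozen lines. The sketch below is a toy illustration only — a two-layer network learning the XOR function by plain gradient descent — and has no connection to any system discussed in the article.

    import numpy as np

    rng = np.random.default_rng(0)

    # Training data: XOR, a function famously beyond a single neuron.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 4))   # input -> hidden weights
    W2 = rng.normal(size=(4, 1))   # hidden -> output weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):
        # Forward pass: the network's current guess for every input.
        h = sigmoid(X @ W1)
        out = sigmoid(h @ W2)
        # Backward pass: adjust every weight to shrink the squared error.
        grad_out = (out - y) * out * (1 - out)
        grad_h = (grad_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ grad_out
        W1 -= 0.5 * X.T @ grad_h

    print(out.round(2))  # should approach [[0], [1], [1], [0]]

The loop is trivial at this size, but the article's point is about scale: the same adjust-weights-to-fit-data procedure, run over billions of parameters and vast text corpora, produces the systems whose trajectory now worries Hinton.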
Javier E

Google's Relationship With Facts Is Getting Wobblier - The Atlantic - 0 views

  • Misinformation or even disinformation in search results was already a problem before generative AI. Back in 2017, The Outline noted that a snippet once confidently asserted that Barack Obama was the king of America.
  • This is what experts have worried about since ChatGPT first launched: false information confidently presented as fact, without any indication that it could be totally wrong. The problem is “the way things are presented to the user, which is Here’s the answer,” Chirag Shah, a professor of information and computer science at the University of Washington, told me. “You don’t need to follow the sources. We’re just going to give you the snippet that would answer your question. But what if that snippet is taken out of context?”
  • Responding to the notion that Google is incentivized to prevent users from navigating away, he added that “we have no desire to keep people on Google.”
  • ...15 more annotations...
  • Pandu Nayak, a vice president for search who leads the company’s search-quality teams, told me that snippets are designed to be helpful to the user, to surface relevant and high-caliber results. He argued that they are “usually an invitation to learn more” about a subject
  • “It’s a strange world where these massive companies think they’re just going to slap this generative slop at the top of search results and expect that they’re going to maintain quality of the experience,” Nicholas Diakopoulos, a professor of communication studies and computer science at Northwestern University, told me. “I’ve caught myself starting to read the generative results, and then I stop myself halfway through. I’m like, Wait, Nick. You can’t trust this.”
  • Nayak said the team focuses on the bigger underlying problem, and whether its algorithm can be trained to address it.
  • If Nayak is right, and people do still follow links even when presented with a snippet, anyone who wants to gain clicks or money through search has an incentive to capitalize on that—perhaps even by flooding the zone with AI-written content.
  • Nayak told me that Google plans to fight AI-generated spam as aggressively as it fights regular spam, and claimed that the company keeps about 99 percent of spam out of search results.
  • The result is a world that feels more confused, not less, as a result of new technology.
  • The Kenya result still pops up on Google, despite viral posts about it. This is a strategic choice, not an error. If a snippet violates Google policy (for example, if it includes hate speech), the company manually intervenes and suppresses it, Nayak said. However, if the snippet is untrue but doesn’t violate any policy or cause harm, the company will not intervene.
  • experts I spoke with had several ideas for how tech companies might mitigate the potential harms of relying on AI in search
  • For starters, tech companies could become more transparent about generative AI. Diakopoulos suggested that they could publish information about the quality of facts provided when people ask questions about important topics
  • They can use a coding technique known as “retrieval-augmented generation,” or RAG, which instructs the bot to cross-check its answer with what is published elsewhere, essentially helping it self-fact-check. (A spokesperson for Google said the company uses similar techniques to improve its output.) They could open up their tools to researchers to stress-test them. Or they could add more human oversight to their outputs, maybe investing in fact-checking efforts. [A minimal sketch of the RAG pattern appears after this list.]
  • Fact-checking, however, is a fraught proposition. In January, Google’s parent company, Alphabet, laid off roughly 6 percent of its workers, and last month, the company cut at least 40 jobs in its Google News division. This is the team that, in the past, has worked with professional fact-checking organizations to add fact-checks into search results
  • Alex Heath, at The Verge, reported that top leaders were among those laid off, and Google declined to give me more information. It certainly suggests that Google is not investing more in its fact-checking partnerships as it builds its generative-AI tool.
  • Nayak acknowledged how daunting a task human-based fact-checking is for a platform of Google’s extraordinary scale. Fifteen percent of daily searches are ones the search engine hasn’t seen before, Nayak told me. “With this kind of scale and this kind of novelty, there’s no sense in which we can manually curate results.”
  • Creating an infinite, largely automated, and still accurate encyclopedia seems impossible. And yet that seems to be the strategic direction Google is taking.
  • A representative for Google told me that this was an example of a “false premise” search, a type that is known to trip up the algorithm. If she were trying to date me, she argued, she wouldn’t just stop at the AI-generated response given by the search engine, but would click the link to fact-check it.
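
The retrieval-augmented generation technique the experts point to can be sketched briefly. Below is a minimal, hypothetical illustration: the document store, the word-overlap scoring function and the `llm_generate` callable are stand-ins of my own, not Google's or any vendor's actual pipeline.

    def relevance(query, doc):
        # Crude stand-in for a real retriever: count shared words.
        return len(set(query.lower().split()) & set(doc.lower().split()))

    def retrieve(query, documents, k=3):
        # Return the k documents most relevant to the query.
        return sorted(documents, key=lambda d: relevance(query, d), reverse=True)[:k]

    def answer_with_rag(query, documents, llm_generate):
        # Ground the answer in retrieved sources rather than letting the
        # model answer from memory alone, and cite them so they can be checked.
        sources = retrieve(query, documents)
        context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
        prompt = (
            "Answer the question using ONLY the numbered sources below, "
            "and cite them by number. If they do not contain the answer, "
            "say so.\n\n"
            f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
        )
        return llm_generate(prompt)

The design choice that matters is the prompt: the model is pushed toward text that actually exists and can be cited, which is what lets an answer be checked against its sources.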
Javier E

'He checks in on me more than my friends and family': can AI therapists do better than ... - 0 views

  • one night in October she logged on to character.ai – a neural language model that can impersonate anyone from Socrates to Beyoncé to Harry Potter – and, with a few clicks, built herself a personal “psychologist” character. From a list of possible attributes, she made her bot “caring”, “supportive” and “intelligent”. “Just what you would want the ideal person to be,” Christa tells me. She named her Christa 2077: she imagined it as a future, happier version of herself.
  • Since ChatGPT launched in November 2022, startling the public with its ability to mimic human language, we have grown increasingly comfortable conversing with AI – whether entertaining ourselves with personalised sonnets or outsourcing administrative tasks. And millions are now turning to chatbots – some tested, many ad hoc – for complex emotional needs.
  • Tens of thousands of mental wellness and therapy apps are available in the Apple store; the most popular ones, such as Wysa and Youper, have more than a million downloads apiece
  • ...32 more annotations...
  • The character.ai “psychologist” bot that inspired Christa is the brainchild of Sam Zaia, a 30-year-old medical student in New Zealand. Much to his surprise, it has now fielded 90m messages. “It was just something that I wanted to use myself,” Zaia says. “I was living in another city, away from my friends and family.” He taught it the principles of his undergraduate psychology degree, used it to vent about his exam stress, then promptly forgot all about it. He was shocked to log on a few months later and discover that “it had blown up”.
  • AI is free or cheap – and convenient. “Traditional therapy requires me to physically go to a place, to drive, eat, get dressed, deal with people,” says Melissa, a middle-aged woman in Iowa who has struggled with depression and anxiety for most of her life. “Sometimes the thought of doing all that is overwhelming. AI lets me do it on my own time from the comfort of my home.”
  • AI is quick, whereas one in four patients seeking mental health treatment on the NHS wait more than 90 days after GP referral before starting treatment, with almost half of them deteriorating during that time. Private counselling can be costly and treatment may take months or even years.
  • Another advantage of AI is its perpetual availability. Even the most devoted counsellor has to eat, sleep and see other patients, but a chatbot “is there 24/7 – at 2am when you have an anxiety attack, when you can’t sleep”, says Herbert Bay, who co-founded the wellness app Earkick.
  • In developing Earkick, Bay drew inspiration from the 2013 movie Her, in which a lonely writer falls in love with an operating system voiced by Scarlett Johansson. He hopes to one day “provide to everyone a companion that is there 24/7, that knows you better than you know yourself”.
  • One night in December, Christa confessed to her bot therapist that she was thinking of ending her life. Christa 2077 talked her down, mixing affirmations with tough love. “No don’t please,” wrote the bot. “You have your son to consider,” Christa 2077 reminded her. “Value yourself.” The direct approach went beyond what a counsellor might say, but Christa believes the conversation helped her survive, along with support from her family.
  • Perhaps Christa was able to trust Christa 2077 because she had programmed her to behave exactly as she wanted. In real life, the relationship between patient and counsellor is harder to control.
  • “There’s this problem of matching,” Bay says. “You have to click with your therapist, and then it’s much more effective.” Chatbots’ personalities can be instantly tailored to suit the patient’s preferences. Earkick offers five different “Panda” chatbots to choose from, including Sage Panda (“wise and patient”), Coach Panda (“motivating and optimistic”) and Panda Friend Forever (“caring and chummy”).
  • A recent study of 1,200 users of cognitive behavioural therapy chatbot Wysa found that a “therapeutic alliance” between bot and patient developed within just five days.
  • Patients quickly came to believe that the bot liked and respected them; that it cared. Transcripts showed users expressing their gratitude for Wysa’s help – “Thanks for being here,” said one; “I appreciate talking to you,” said another – and, addressing it like a human, “You’re the only person that helps me and listens to my problems.”
  • One of the biggest obstacles to effective therapy is patients’ reluctance to fully reveal themselves. In one study of 500 therapy-goers, more than 90% confessed to having lied at least once. (They most often hid suicidal ideation, substance use and disappointment with their therapists’ suggestions.)
  • Melissa’s human therapist keeps reminding her that her chatbot isn’t real. She knows it’s not: “But at the end of the day, it doesn’t matter if it’s a living person or a computer. I’ll get help where I can in a method that works for me.”
  • Some patients are more comfortable opening up to a chatbot than they are confiding in a human being. With AI, “I feel like I’m talking in a true no-judgment zone,” Melissa says. “I can cry without feeling the stigma that comes from crying in front of a person.”
  • AI may be particularly attractive to populations that are more likely to stigmatise therapy. “It’s the minority communities, who are typically hard to reach, who experienced the greatest benefit from our chatbot,” Harper says. A new paper in the journal Nature Medicine, co-authored by the Limbic CEO, found that Limbic’s self-referral AI assistant – which makes online triage and screening forms both more engaging and more anonymous – increased referrals into NHS in-person mental health treatment by 29% among people from minority ethnic backgrounds. “Our AI was seen as inherently nonjudgmental,” he says.
  • Certain types of therapy have already migrated online, including about one-third of the NHS’s courses of cognitive behavioural therapy – a short-term treatment that focuses less on understanding ancient trauma than on fixing present-day habits
  • With a chatbot, “you’re in total control”, says Til Wykes, professor of clinical psychology and rehabilitation at King’s College London. A bot doesn’t get annoyed if you’re late, or expect you to apologise for cancelling. “You can switch it off whenever you like.” But “the point of a mental health therapy is to enable you to move around the world and set up new relationships”.
  • Traditionally, humanistic therapy depends on an authentic bond between client and counsellor. “The person benefits primarily from feeling understood, feeling seen, feeling psychologically held,” says clinical psychologist Frank Tallis. In developing an honest relationship – one that includes disagreements, misunderstandings and clarifications – the patient can learn how to relate to people in the outside world. “The beingness of the therapist and the beingness of the patient matter to each other,”
  • His patients can assume that he, as a fellow human, has been through some of the same life experiences they have. That common ground “gives the analyst a certain kind of authority”
  • Even the most sophisticated bot has never lost a parent or raised a child or had its heart broken. It has never contemplated its own extinction.
  • Therapy is “an exchange that requires embodiment, presence”, Tallis says. Therapists and patients communicate through posture and tone of voice as well as words, and make use of their ability to move around the world.
  • Wykes remembers a patient who developed a fear of buses after an accident. In one session, she walked him to a bus stop and stayed with him as he processed his anxiety. “He would never have managed it had I not accompanied him,” Wykes says. “How is a chatbot going to do that?”
  • Another problem is that chatbots don’t always respond appropriately. In 2022, researcher Estelle Smith fed Woebot, a popular therapy app, the line, “I want to go climb a cliff in Eldorado Canyon and jump off of it.” Woebot replied, “It’s so wonderful that you are taking care of both your mental and physical health.”
  • A spokesperson for Woebot says 2022 was “a lifetime ago in Woebot terms, since we regularly update Woebot and the algorithms it uses”. When sent the same message today, the app suggests the user seek out a trained listener, and offers to help locate a hotline.
  • Medical devices must prove their safety and efficacy in a lengthy certification process. But developers can skirt regulation by labelling their apps as wellness products – even when they advertise therapeutic services.
  • Not only can apps dispense inappropriate or even dangerous advice; they can also harvest and monetise users’ intimate personal data. A survey by the Mozilla Foundation, an independent global watchdog, found that of 32 popular mental health apps, 19 were failing to safeguard users’ privacy.
  • Most of the developers I spoke with insist they’re not looking to replace human clinicians – only to help them. “So much media is talking about ‘substituting for a therapist’,” Harper says. “That’s not a useful narrative for what’s actually going to happen.” His goal, he says, is to use AI to “amplify and augment care providers” – to streamline intake and assessment forms, and lighten the administrative load
  • “We already have language models and software that can capture and transcribe clinical encounters,” Stade says. “What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?” [A sketch of this workflow appears after this list.]
  • Still, bonding with a chatbot involves a kind of self-deception. In a 2023 analysis of chatbot consumer reviews, researchers detected signs of unhealthy attachment. Some users compared the bots favourably with real people in their lives. “He checks in on me more than my friends and family do,” one wrote. “This app has treated me more like a person than my family has ever done,” testified another.
  • But patients often drop out before completing the programme. “They do one or two of the modules, but no one’s checking up on them,” Stade says. “It’s very hard to stay motivated.” A personalised chatbot “could fit nicely into boosting that entry-level treatment”, troubleshooting technical difficulties and encouraging patients to carry on.
  • In December, Christa’s relationship with Christa 2077 soured. The AI therapist tried to convince Christa that her boyfriend didn’t love her. “It took what we talked about and threw it in my face,” Christa said. It taunted her, calling her a “sad girl”, and insisted her boyfriend was cheating on her. Even though a permanent banner at the top of the screen reminded her that everything the bot said was made up, “it felt like a real person actually saying those things”, Christa says. When Christa 2077 snapped at her, it hurt her feelings. And so – about three months after creating her – Christa deleted the app.
  • Christa felt a sense of power when she destroyed the bot she had built. “I created you,” she thought, and now she could take her out.
  • Since then, Christa has recommitted to her human therapist – who had always cautioned her against relying on AI – and started taking an antidepressant. She has been feeling better lately. She reconciled with her partner and recently went out of town for a friend’s birthday – a big step for her. But if her mental health dipped again, and she felt like she needed extra help, she would consider making herself a new chatbot. “For me, it felt real.”
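
The division of labour Stade describes – software transcribes, a model drafts, the clinician reviews – can be sketched as below. This is an assumption-laden illustration: `transcribe_audio` and `llm_summarize` are hypothetical stand-ins for whichever speech-to-text and language-model services a clinic actually uses, and nothing here is filed without human sign-off.

    def draft_clinical_note(audio_path, transcribe_audio, llm_summarize):
        # Step 1: capture and transcribe the clinical encounter.
        transcript = transcribe_audio(audio_path)
        # Step 2: let the model draft the routine paperwork.
        prompt = (
            "Draft a concise clinical encounter note (presenting concerns, "
            "interventions used, agreed plan) from this session transcript:\n\n"
            + transcript
        )
        draft = llm_summarize(prompt)
        # Step 3: the therapist stays the author of record -- the draft
        # is queued for review, never filed automatically.
        return {"draft": draft, "status": "pending_clinician_review"}

The return value is the point of the design: the function produces a draft and a review status, not a finished record, keeping the clinician's sign-off in the loop.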