TOK Friends: Group items tagged "self-driving"

katherineharron

Why Donald Trump can't grasp this moment (Opinion) - CNN - 0 views

  • In his mind, he seems to think it's the riots of the 1960s all over again, and his reaction appears both terrified and angry. "LAW & ORDER!" was the response he voiced via Twitter on Sunday and again in a public address on Monday.
  • a hellscape governed by a man frozen in his childhood and out of step with the times. The world is spiraling out of control and its most powerful man is abjectly unprepared and unqualified.
  • The convulsive 1960s were America's most trying period of unrest in modern times.
  • ...11 more annotations...
  • By 1989, when he spoke out about the infamous assault on a jogger in Central Park, he would decry "the complete breakdown" of society and yearn for the days "when I was young" and he saw cops rough up two loudmouths who had harassed a waitress. He wanted a return of that sort of policing and called on New York State to adopt the death penalty after the arrests of the five young black and Latino men in the jogger case. Years later, those men were found to be innocent.
  • Trump didn't seem to consider the suffering that caused the crises of his youth.
  • the trauma of the violent response to the civil rights struggle and the assassinations of Martin Luther King Jr. and Robert F. Kennedy led to a lifelong struggle to understand and address the pain of our fellow citizens who sought dignity and equality
  • His drive for the presidency ended with him in the Oval Office thanks to an Electoral College system that lets the loser of the national vote gain the presidency.
  • When asked about when America was great he recalled the time of his childhood, the 1940s and 1950s, when "we were not pushed around, we were respected by everybody, we had just won a war, we were pretty much doing what we had to do." He also remains nostalgic for the stereotypical 1950s housewife, speaking wistfully of women like actress Donna Reed, who always seemed to play the role of a gentle and accommodating woman.
  • With no experience in government, the military, or genuine civic engagement, Trump brought his true self to the White House, where his team included many who seemed to share his back-to-the-50s mentality. At the Justice Department federal efforts to safeguard civil rights were curbed. The Department of Education rolled back protections for the rights of women and minorities. The Pentagon barred transgender recruits.
  • There was an inevitability in the way that he first denied the problem and then banked on solutions that reeked of his pre-'60s childhood, when polio was defeated by a vaccine and new drugs arrived to vanquish infectious diseases.
  • he had never noticed that the world and its problems are complex and require respectful study and difficult, collaborative work.
  • That the US is a country in crisis, without a leader, is now so obvious that as Time magazine reported last week, cracks are forming in his once-unbreakable base. The doubts the magazine documented before the country was convulsed by recent protests against police brutality reflected his failed response to the Covid-19 pandemic, which contributed to a death toll now exceeding 100,000
  • The economic toll, which includes 40 million unemployed, hit the poor and working class harder than others. Then George Floyd died on a Minneapolis street as a police officer pressed his knee into his neck for nearly nine minutes.
  • That the President has been deaf to the suffering, and incapable of responding like any previous president would, reminds us that his character, his view of humanity, and his life experience, made him wholly unqualified for the role he now occupies.
katherineharron

CES 2020: Toyota is building a 'smart' city to test AI, robots and self-driving cars - ... - 0 views

  • Carmaker Toyota has unveiled plans for a 2,000-person "city of the future," where it will test autonomous vehicles, smart technology and robot-assisted living.
  • "With people buildings and vehicles all connected and communicating with each other through data and sensors, we will be able to test AI technology, in both the virtual and the physical world, maximizing its potential," he said on stage during Tuesday's unveiling. "We want to turn artificial intelligence into intelligence amplified."
  • The project is a collaboration between the Japanese carmaker and Danish architecture firm Bjarke Ingels Group (BIG), which designed the city's master plan. Buildings on the site will be made primarily from wood, and partly constructed using robotics. But the designs also look to Japan's past for inspiration, incorporating traditional joinery techniques and the sweeping roofs characteristic of the country's architecture.
  • ...2 more annotations...
  • Smart technology will extend inside residents' homes, according to Ingels, whose firm also designed 2 World Trade Center in New York, and Google's headquarters in both London and Silicon Valley.
  • "In an age when technology, social media and online retail is replacing and eliminating our natural meeting places, the Woven City will explore ways to stimulate human interaction in the urban space," he said. "After all, human connectivity is the kind of connectivity that triggers wellbeing and happiness, productivity and innovation."
blythewallick

Why We Fear the Unknown | Psychology Today - 0 views

  • Despite our better nature, it seems, fear of foreigners or other strange-seeming people comes out when we are under stress. That fear, known as xenophobia, seems almost hardwired into the human psyche.
  • Researchers are discovering the extent to which xenophobia can be easily—even arbitrarily—turned on. In just hours, we can be conditioned to fear or discriminate against those who differ from ourselves by characteristics as superficial as eye color. Even ideas we believe are just common sense can have deep xenophobic underpinnings.
  • But other research shows that when it comes to whom we fear and how we react, we do have a choice. We can, it seems, choose not to give in to our xenophobic tendencies.
  • ...7 more annotations...
  • The targets of xenophobia—derived from the Greek word for stranger—are no longer the Japanese. Instead, they are Muslim immigrants. Or Mexicans. Or the Chinese. Or whichever group we have come to fear.
  • The teacher, Jane Elliott, divided her class into two groups—those with blue eyes and those with brown or green eyes. The brown-eyed group received privileges and treats, while the blue-eyed students were denied rewards and told they were inferior. Within hours, the once-harmonious classroom became two camps, full of mutual fear and resentment. Yet, what is especially shocking is that the students were only in the third grade.
  • The drive to completely and quickly divide the world into "us" and "them" is so powerful that it must surely come from some deep-seated need.
  • Once the division is made, the inferences and projections begin to occur. For one, we tend to think more highly of people in the in-group than those in the out-group, a belief based only on group identity. Also, a person tends to feel that others in the in-group are similar to one's self in ways that—although stereotypical—may have little to do with the original criteria used to split the groups.
  • The differences in reaction time are small but telling. Again and again, researchers found that subjects readily tie in-group images with pleasant words and out-group images with unpleasant words. One study compares such groups as whites and blacks, Jews and Christians, and young people and old people. And researchers found that if you identify yourself in one group, it's easier to pair images of that group with pleasant words—and easier to pair the opposite group with unpleasant imagery. This reveals the underlying biases and enables us to study how quickly they can form. (A toy version of this reaction-time comparison appears after this list.)
  • If categorization and bias come so easily, are people doomed to xenophobia and racism? It's pretty clear that we are susceptible to prejudice and that there is an unconscious desire to divide the world into "us" and "them." Fortunately, however, research also shows that prejudices are fluid and that when we become conscious of our biases we can take active—and successful—steps to combat them
  • Unfortunately, such stereotypes are reinforced so often that they can become ingrained. It is difficult to escape conventional wisdom and treat all people as individuals, rather than members of a group. But that seems to be the best way to avoid the trap of dividing the world in two—and discriminating against one part of humanity.
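The reaction-time comparison described in the annotations above boils down to a very small calculation. Below is a toy sketch in Python with invented numbers, not the researchers' data or their actual scoring procedure: if pairing one group with pleasant words is reliably faster than pairing the other group with pleasant words, the gap is read as a sign of implicit bias.

```python
# Toy illustration of the reaction-time logic in the annotations above.
# The numbers are invented; real studies use many trials per person and a
# more careful scoring procedure, but the core comparison is this simple.
import statistics

# Hypothetical reaction times (milliseconds) under two pairing conditions.
ingroup_with_pleasant = [612, 598, 640, 575, 630, 605]
outgroup_with_pleasant = [702, 688, 725, 690, 710, 698]

gap = statistics.mean(outgroup_with_pleasant) - statistics.mean(ingroup_with_pleasant)
print(f"Average slowdown when pairing the out-group with pleasant words: {gap:.0f} ms")
```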
blythewallick

How Do Personality Traits Influence Values and Well-Being? | Psychology Today - 0 views

  • Personality traits are characteristics that relate to the factory settings of our motivational system.
  • Values are factors that drive what we find to be important. Research by Schwartz and his colleagues suggests that there is a universal set of values.
  • Researchers are interested in understanding how these two sources of stability in a person over time are inter-related and whether changes in one factor (like personality) create changes in another (like values). This question was addressed in a paper in the August 2019 issue of the Journal of Personality and Social Psychology
  • ...7 more annotations...
  • The advantage of having many different waves of data from the same people is that it enables researchers to speculate about whether changes in one characteristic cause changes in another. This can be done by asking whether changes in one factor at an earlier time predict later changes in the other factor more strongly than the reverse. (A toy version of this cross-lagged comparison appears after this list.)
  • First, as we might expect, people’s personality characteristics and values are fairly stable. People’s responses to both the personality inventory and the values scale did not change much over time. However, the responses to the personality inventory changed less than the responses to the values survey.
  • Overall, some personality characteristics and some values are related. Agreeableness was correlated with the value of being prosocial (that is, wanting to engage in positive actions for society). Conscientiousness was correlated with conformity (which reflects that conscientious people tend to want to follow rules including societal rules). Extraversion was related to the value of enjoyment. Openness correlated with the value of self-direction. There were no strong correlations between neuroticism and any of the values.
  • In addition, personality traits appeared to influence a variety of measures of well-being. People high in agreeableness, conscientiousness, extraversion, and openness tended to show higher measures of well-being while being high in neuroticism was linked to decreased measures of well-being.
  • First, in an era in which key findings fail to replicate, this study solidifies the relationship between personality characteristics and values that have been observed before. It also demonstrates that both personality characteristics and values change slowly.
  • Second, this work suggests that changes in personality characteristics (which reflect people’s underlying motivation) have a bigger impact on values than changes in values have on people’s personality characteristics.
  • Third, this work suggests that both personality characteristics and values are related to people’s sense of well-being. However, personality characteristics seem to have a broad impact. Changes in personality can precede changes in well-being, but it appears that changes in well-being may actually have an impact on people’s values.
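The cross-lagged question raised in the annotations above (do earlier trait scores predict later value scores more strongly than the reverse?) can be illustrated in a few lines of code. This is a minimal sketch on synthetic data, not the study's actual model; the variable names, effect sizes and sample size are all invented.

```python
# Minimal cross-lagged sketch on synthetic data (not the study's analysis).
import numpy as np

rng = np.random.default_rng(0)
n = 500

trait_w1 = rng.normal(size=n)                    # e.g. agreeableness at wave 1
value_w1 = 0.3 * trait_w1 + rng.normal(size=n)   # e.g. a prosocial value at wave 1
# In this synthetic "truth", the trait drives later values more than the reverse.
trait_w2 = 0.80 * trait_w1 + 0.05 * value_w1 + rng.normal(scale=0.5, size=n)
value_w2 = 0.50 * value_w1 + 0.40 * trait_w1 + rng.normal(scale=0.5, size=n)

def cross_lagged_coef(predictor_w1, stability_w1, outcome_w2):
    """Coefficient of predictor_w1 on outcome_w2, controlling for the outcome's
    own wave-1 score (a bare-bones cross-lagged regression)."""
    X = np.column_stack([np.ones(n), predictor_w1, stability_w1])
    beta, *_ = np.linalg.lstsq(X, outcome_w2, rcond=None)
    return beta[1]

print("trait_w1 -> value_w2:", round(cross_lagged_coef(trait_w1, value_w1, value_w2), 2))
print("value_w1 -> trait_w2:", round(cross_lagged_coef(value_w1, trait_w1, trait_w2), 2))
```

If the first coefficient is clearly larger than the second, the pattern favors the "personality shapes values" direction, which is the kind of asymmetry the study reports.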
Javier E

Meet DALL-E, the A.I. That Draws Anything at Your Command - The New York Times - 0 views

  • A half decade ago, the world’s leading A.I. labs built systems that could identify objects in digital images and even generate images on their own, including flowers, dogs, cars and faces. A few years later, they built systems that could do much the same with written language, summarizing articles, answering questions, generating tweets and even writing blog posts.
  • DALL-E is a notable step forward because it juggles both language and images and, in some cases, grasps the relationship between the two
  • “We can now use multiple, intersecting streams of information to create better and better technology,”
  • ...5 more annotations...
  • when Mr. Nichol tweaked his requests a little, adding or subtracting a few words here or there, it provided what he wanted. When he asked for “a piano in a living room filled with sand,” the image looked more like a beach in a living room.
  • DALL-E is what artificial intelligence researchers call a neural network, which is a mathematical system loosely modeled on the network of neurons in the brain.
  • the same technology that recognizes the commands spoken into smartphones and identifies the presence of pedestrians as self-driving cars navigate city streets.
  • A neural network learns skills by analyzing large amounts of data. By pinpointing patterns in thousands of avocado photos, for example, it can learn to recognize an avocado.
  • DALL-E looks for patterns as it analyzes millions of digital images as well as text captions that describe what each image depicts. In this way, it learns to recognize the links between the images and the words.
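The last two annotations describe the training recipe only at a high level: find patterns in many image and caption pairs until matching images and words become linked. The sketch below illustrates that general idea with a CLIP-style contrastive objective in PyTorch; it is not OpenAI's DALL-E code (DALL-E itself generates images rather than just matching them), and the dimensions and tensors here are random stand-ins.

```python
# Illustrative sketch of learning links between images and captions
# (a CLIP-style contrastive objective), not OpenAI's actual DALL-E code.
# Random tensors stand in for real image features and caption features.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
n_pairs, img_dim, txt_dim, emb_dim = 64, 512, 300, 128

image_feats = torch.randn(n_pairs, img_dim)      # stand-in image features
caption_feats = torch.randn(n_pairs, txt_dim)    # stand-in caption features

# Small projection heads map both modalities into one shared embedding space.
image_head = nn.Linear(img_dim, emb_dim)
text_head = nn.Linear(txt_dim, emb_dim)
optimizer = torch.optim.Adam(
    list(image_head.parameters()) + list(text_head.parameters()), lr=1e-3
)

for step in range(200):
    img_emb = F.normalize(image_head(image_feats), dim=-1)
    txt_emb = F.normalize(text_head(caption_feats), dim=-1)
    logits = img_emb @ txt_emb.t() / 0.07     # pairwise image-to-caption similarity
    targets = torch.arange(n_pairs)           # the i-th caption matches the i-th image
    loss = 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final contrastive loss:", round(loss.item(), 3))
```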
Javier E

Opinion | What's the Story With Colleen Hoover's Romance Novels? - The New York Times - 0 views

  • for the past few years, these books have been written by Colleen Hoover.
  • What is it about Hoover’s stories — which dwell largely in romance, but also include a thriller and a ghost story — that women are drawn to?
  • I slorped down three of them in one week. I found myself carrying them from room to room, slipping in what would begin as “just a few pages” but then stretch into hours’ worth.
  • ...11 more annotations...
  • Though Hoover’s settings bop around America from Boston to New York to Texas to Vermont, the only contextual references pertain to pop culture, social media and the occasional local attraction.
  • Politics are confined to the daunting gulf between haves and have-nots, and even when Hoover’s striving heroines find themselves among the haves, their hearts remain forever with the have-nots.
  • In these novels what matters more than anything else is hardship: Hardship is everywhere, women must suffer, women can heal, and those who make it through all this have the capacity to find themselves/love/happiness. The reader can’t help feeling that the heroine/Hoover is speaking to me/for me/like me.
  • Fiction of this sort reflects a strain in the culture that has shifted from a fascination with the other — the rich, the powerful, the exclusive — to a more inward preoccupation with the self and the desire to see oneself reflected in the stories one consumes
  • Women’s popular fiction of the ’80s, when the glitter and glamour of “Dallas” and “Dynasty” dominated prime-time TV, offers a sharp contrast. In best sellers of that period, the settings jetted from Monte Carlo to Capri to Rodeo Drive, populated by the rich, famous and destined-to-be. Heroines could have been peeled off the cover of Cosmopolitan magazine
  • As with TikTok testimonials of adolescent mental health challenges and group-chat confessions, it’s about “relatability” and the willingness to reveal all. Even celebrities must bare all
  • I never shed a tear while reading Sheldon, but that wasn’t the point. The point was exuberant voyeurism, the literary equivalent of “Lifestyles of the Rich and Famous.” The heroines’ lives were nothing like mine nor were they meant to be. That’s what made them so absurdly entertaining.
  • Colleen Hoover paints on a more intimate canvas. Her stories aren’t about attaining worldly power on a grand scale, but about finding power within
  • Hoover offers readers an emotional road map to recovery from imposter syndrome, domestic abuse, betrayal, victimization. It’s a very different kind of achievement.
  • In a country where economic inequalities can seem insurmountable and systems of power ever more remote, this may be the best her hard-knock heroines — and readers — can hope for.
  • For readers invested in characters who are like themselves — if perhaps more beautiful and with more exciting sex lives — the emotional payoff can still feel hard-earned. And, just possibly, the story could happen to them.
peterconnelly

Google's I/O Conference Offers Modest Vision of the Future - The New York Times - 0 views

  • SAN FRANCISCO — There was a time when Google offered a wondrous vision of the future, with driverless cars, augmented-reality eyewear, unlimited storage of emails and photos, and predictive texts to complete sentences in progress.
  • The bold vision is still out there — but it’s a ways away. The professional executives who now run Google are increasingly focused on wringing money out of those years of spending on research and development.
  • The company’s biggest bet in artificial intelligence does not, at least for now, mean science fiction come to life. It means more subtle changes to existing products.
  • ...2 more annotations...
  • At the same time, it was not immediately clear how some of the other groundbreaking work, like language models that better understand natural conversation or that can break down a task into logical smaller steps, will ultimately lead to the next generation of computing that Google has touted.
  • Many of those capabilities are powered by the deep technological work Google has done for years using so-called machine learning, image recognition and natural language understanding. It’s a sign of evolution rather than revolution for Google and other large tech giants.
Javier E

Silicon Valley's Safe Space - The New York Times - 0 views

  • The roots of Slate Star Codex trace back more than a decade to a polemicist and self-described A.I. researcher named Eliezer Yudkowsky, who believed that intelligent machines could end up destroying humankind. He was a driving force behind the rise of the Rationalists.
  • Because the Rationalists believed A.I. could end up destroying the world — a not entirely novel fear to anyone who has seen science fiction movies — they wanted to guard against it. Many worked for and donated money to MIRI, an organization created by Mr. Yudkowsky whose stated mission was “A.I. safety.”
  • The community was organized and close-knit. Two Bay Area organizations ran seminars and high-school summer camps on the Rationalist way of thinking.
  • ...27 more annotations...
  • “The curriculum covers topics from causal modeling and probability to game theory and cognitive science,” read a website promising teens a summer of Rationalist learning. “How can we understand our own reasoning, behavior, and emotions? How can we think more clearly and better achieve our goals?”
  • Some lived in group houses. Some practiced polyamory. “They are basically just hippies who talk a lot more about Bayes’ theorem than the original hippies,” said Scott Aaronson, a University of Texas professor who has stayed in one of the group houses.
  • For Kelsey Piper, who embraced these ideas in high school, around 2010, the movement was about learning “how to do good in a world that changes very rapidly.”
  • Yes, the community thought about A.I., she said, but it also thought about reducing the price of health care and slowing the spread of disease.
  • Slate Star Codex, which sprang up in 2013, helped her develop a "calibrated trust" in the medical system. Many people she knew, she said, felt duped by psychiatrists, for example, who they felt weren't clear about the costs and benefits of certain treatment.
  • That was not the Rationalist way.
  • “There is something really appealing about somebody explaining where a lot of those ideas are coming from and what a lot of the questions are,” she said.
  • Sam Altman, chief executive of OpenAI, an artificial intelligence lab backed by a billion dollars from Microsoft. He was effusive in his praise of the blog. It was, he said, essential reading among “the people inventing the future” in the tech industry.
  • Mr. Altman, who had risen to prominence as the president of the start-up accelerator Y Combinator, moved on to other subjects before hanging up. But he called back. He wanted to talk about an essay that appeared on the blog in 2014. The essay was a critique of what Mr. Siskind, writing as Scott Alexander, described as “the Blue Tribe.” In his telling, these were the people at the liberal end of the political spectrum whose characteristics included “supporting gay rights” and “getting conspicuously upset about sexists and bigots.”
  • But as the man behind Slate Star Codex saw it, there was one group the Blue Tribe could not tolerate: anyone who did not agree with the Blue Tribe. “Doesn’t sound quite so noble now, does it?” he wrote.
  • Mr. Altman thought the essay nailed a big problem: In the face of the “internet mob” that guarded against sexism and racism, entrepreneurs had less room to explore new ideas. Many of their ideas, such as intelligence augmentation and genetic engineering, ran afoul of the Blue Tribe.
  • Mr. Siskind was not a member of the Blue Tribe. He was not a voice from the conservative Red Tribe (“opposing gay marriage,” “getting conspicuously upset about terrorists and commies”). He identified with something called the Grey Tribe — as did many in Silicon Valley.
  • The Grey Tribe was characterized by libertarian beliefs, atheism, “vague annoyance that the question of gay rights even comes up,” and “reading lots of blogs,” he wrote. Most significantly, it believed in absolute free speech.
  • The essay on these tribes, Mr. Altman told me, was an inflection point for Silicon Valley. “It was a moment that people talked about a lot, lot, lot,” he said.
  • And in some ways, two of the world’s prominent A.I. labs — organizations that are tackling some of the tech industry’s most ambitious and potentially powerful projects — grew out of the Rationalist movement.
  • In 2005, Peter Thiel, the co-founder of PayPal and an early investor in Facebook, befriended Mr. Yudkowsky and gave money to MIRI. In 2010, at Mr. Thiel’s San Francisco townhouse, Mr. Yudkowsky introduced him to a pair of young researchers named Shane Legg and Demis Hassabis. That fall, with an investment from Mr. Thiel’s firm, the two created an A.I. lab called DeepMind.
  • Like the Rationalists, they believed that A.I could end up turning against humanity, and because they held this belief, they felt they were among the only ones who were prepared to build it in a safe way.
  • In 2014, Google bought DeepMind for $650 million. The next year, Elon Musk — who also worried A.I. could destroy the world and met his partner, Grimes, because they shared an interest in a Rationalist thought experiment — founded OpenAI as a DeepMind competitor. Both labs hired from the Rationalist community.
  • Mr. Aaronson, the University of Texas professor, was turned off by the more rigid and contrarian beliefs of the Rationalists, but he is one of the blog’s biggest champions and deeply admired that it didn’t avoid live-wire topics.
  • “It must have taken incredible guts for Scott to express his thoughts, misgivings and questions about some major ideological pillars of the modern world so openly, even if protected by a quasi-pseudonym,” he said
  • In late June of last year, not long after talking to Mr. Altman, the OpenAI chief executive, I approached the writer known as Scott Alexander, hoping to get his views on the Rationalist way and its effect on Silicon Valley. That was when the blog vanished.
  • The issue, it was clear to me, was that I told him I could not guarantee him the anonymity he’d been writing with. In fact, his real name was easy to find because people had shared it online for years and he had used it on a piece he’d written for a scientific journal. I did a Google search for Scott Alexander and one of the first results I saw in the auto-complete list was Scott Alexander Siskind.
  • More than 7,500 people signed a petition urging The Times not to publish his name, including many prominent figures in the tech industry. “Putting his full name in The Times,” the petitioners said, “would meaningfully damage public discourse, by discouraging private citizens from sharing their thoughts in blog form.” On the internet, many in Silicon Valley believe, everyone has the right not only to say what they want but to say it anonymously.
  • I spoke with Manoel Horta Ribeiro, a computer science researcher who explores social networks at the Swiss Federal Institute of Technology in Lausanne. He was worried that Slate Star Codex, like other communities, was allowing extremist views to trickle into the influential tech world. “A community like this gives voice to fringe groups,” he said. “It gives a platform to people who hold more extreme views.”
  • I assured her my goal was to report on the blog, and the Rationalists, with rigor and fairness. But she felt that discussing both critics and supporters could be unfair. What I needed to do, she said, was somehow prove statistically which side was right.
  • When I asked Mr. Altman if the conversation on sites like Slate Star Codex could push people toward toxic beliefs, he said he held “some empathy” for these concerns. But, he added, “people need a forum to debate ideas.”
  • In August, Mr. Siskind restored his old blog posts to the internet. And two weeks ago, he relaunched his blog on Substack, a company with ties to both Andreessen Horowitz and Y Combinator. He gave the blog a new title: Astral Codex Ten. He hinted that Substack paid him $250,000 for a year on the platform. And he indicated the company would give him all the protection he needed.
Javier E

René Girard has many Silicon Valley disciples... - Berfrois - 1 views

  • A student of Girard's while at Stanford in the late 1980s, Thiel would go on to report, in several interviews, and somewhat more sub rosa in his 2014 book, Zero to One, that Girard is his greatest intellectual inspiration. He is in the habit of recommending Girard's Things Hidden Since the Foundation of the World (1978) to others in the tech industry.
  • Michel Serres, another French theorist long resident at Stanford, and a strong advocate for Girard’s ideas, has described Girard as the “Darwin of the human sciences”, and has identified the mimetic theory as the relevant analog in the humanities of the Darwinian theory of natural selection.
  • For Girard, everything is imitation. Or rather, every human action that rises above “merely” biological appetite and that is experienced as desire for a given object, in fact is not a desire for that object itself, but a desire to have the object that somebody else already has
  • ...19 more annotations...
  • The great problem of our shared social existence is not wanting things, it’s wanting things because they are someone else’s.
  • Desire for what the other person has brings about a situation in which individuals in a community grow more similar to one another over time in a process of competition-cum-emulation. Such dual-natured social encounters, more precisely, are typical of people who are socially more or less equal
  • In relation to a movie star who does not even know some average schlub exists, that schlub can experience only emulation (this is what Girard calls “external mediation”), but in relation to a fellow schlub down the street (a “neighbor” in the Girardian-Biblical sense), emulation is a much more intimate affair (“internal mediation”, Girard calls it)
  • This is the moment of what Girard calls “mimetic crisis”, which is resolved by the selection of a scapegoat, whose casting-out from the community has the salvific effect of unifying the opposed but undifferentiated doubles
  • In a community in which the mimetic mechanism has led to widespread non-differentiation, or in other words to a high degree of conformity, it can however happen that scapegoating approaches something like the horror scenario in Shirley Jackson’s 1948 tale, “The Lottery”
  • As a modest theory of the anthropology of punishment, these observations have some promise.
  • he is a practically-minded person’s idea of what a theorist is like. Girard himself appears to share in this idea: a theorist for him is someone who comes up with a simple, elegant account of how everything works, and spends a whole career driving that account home.
  • Girard is not your typical French intellectual. He is a would-be French civil-servant archivist gone rogue, via Bloomington, Baltimore, Buffalo, and finally at Stanford, where his individual brand of New World self-reinvention would be well-received by some in the Silicon Valley subculture of, let us say, hyper-Whitmanian intellectual invention and reinvention.
  • Most ritual, in fact, strikes me as characterized by imitation without internal mediation or scapegoating.
  • I do not see anything more powerfully explanatory of this phenomenon in the work of Girard than in, say, Roland Barthes's analysis of haute couture in his ingenious 1967 The Fashion System, or for that matter Thorstein Veblen on conspicuous consumption, or indeed any number of other authors who have noticed that indubitable truth of human existence: that we copy each other
  • whatever has money behind it will inevitably have intelligent-looking people at least pretending to take it seriously, and with the foundation of the Imitatio Project by the Thiel Foundation (executive director Jimmy Kaltreider, a principal at Thiel Capital), the study and promotion of Girardian mimetic theory is by now a solid edifice in the intellectual landscape of California.
  • with Girard what frustrates me even more is that he does not seem to detect the non-mimetic varieties of desire
  • Perhaps even more worrisome for Girard’s mimetic theory is that it appears to leave out all those instances in which imitation serves as a force for social cohesion and cannot plausibly be said to involve any process of “internal mediation” leading to a culmination in scapegoating
  • the idea that anything Girard has to say might be particularly well-suited to adaptation as a “business philosophy” is entirely without merit.
  • dancing may be given ritual meaning — a social significance encoded by human bodies doing the same thing simultaneously, and therefore in some sense becoming identical, but without any underlying desire at all to annihilate one another. It is this significance that the Australian poet Les Murray sees as constituting the essence of both poetry and religion: both are performed, as he puts it, “in loving repetition”.
  • There are different kinds of theorist, of course, and there is plenty of room for all of us. It is however somewhat a shame that the everything-explainers, the hammerers for whom all is nail, should be the ones so consistently to capture the popular imagination
  • Part of Girard’s appeal in the Silicon Valley setting lies not only in his totalizing urge, but also in his embrace of a certain interpretation of Catholicism that stresses the naturalness of hierarchy, all the way up to the archangels, rather than the radical egalitarianism of other tendencies within this faith
  • Girard explains that the positive reception in France of his Things Hidden Since the Foundation of the World had to do with the widespread misreading of it as a work of anti-Christian theory. "If they had known that there is no hostility in me towards the Church, they would have dismissed me. I appeared as the heretic, the revolted person that one has to be in order to reassure the media."
  • Peter Thiel, for his part, certainly does not seem to feel oppressed by western phallocracy either — in fact he appears intent on coming out somewhere at the top of the phallocratic order, and in any case has explicitly stated that the aspirations of liberal democracy towards freedom and equality for all should rightly be seen as a thing of the past. In his demotic glosses on Girard, the venture capitalist also seems happy to promote the Girardian version of Catholicism as a clerical institution ideally suited to the newly emerging techno-feudalist order.
Javier E

Is Anything Still True? On the Internet, No One Knows Anymore - WSJ - 1 views

  • Creating and disseminating convincing propaganda used to require the resources of a state. Now all it takes is a smartphone.
  • Generative artificial intelligence is now capable of creating fake pictures, clones of our voices, and even videos depicting and distorting world events. The result: From our personal circles to the political circuses, everyone must now question whether what they see and hear is true.
  • exposure to AI-generated fakes can make us question the authenticity of everything we see. Real images and real recordings can be dismissed as fake. 
  • ...20 more annotations...
  • “When you show people deepfakes and generative AI, a lot of times they come out of the experiment saying, ‘I just don’t trust anything anymore,’” says David Rand, a professor at MIT Sloan who studies the creation, spread and impact of misinformation.
  • This problem, which has grown more acute in the age of generative AI, is known as the “liar’s dividend.”
  • The combination of easily-generated fake content and the suspicion that anything might be fake allows people to choose what they want to believe, adds DiResta, leading to what she calls “bespoke realities.”
  • Examples of misleading content created by generative AI are not hard to come by, especially on social media
  • The signs that an image is AI-generated are easy to miss for a user simply scrolling past, who has an instant to decide whether to like or boost a post on social media. And as generative AI continues to improve, it’s likely that such signs will be harder to spot in the future.
  • “What our work suggests is that most regular people do not want to share false things—the problem is they are not paying attention,”
  • in the course of a lawsuit over the death of a man using Tesla’s “full self-driving” system, Elon Musk’s lawyers responded to video evidence of Musk making claims about this software by suggesting that the proliferation of “deepfakes” of Musk was grounds to dismiss such evidence. They advanced that argument even though the clip of Musk was verifiably real
  • are now using its existence as a pretext to dismiss accurate information
  • People’s attention is already limited, and the way social media works—encouraging us to gorge on content, while quickly deciding whether or not to share it—leaves us precious little capacity to determine whether or not something is true
  • If the crisis of authenticity were limited to social media, we might be able to take solace in communication with those closest to us. But even those interactions are now potentially rife with AI-generated fakes.
  • what sounds like a call from a grandchild requesting bail money may be scammers who have scraped recordings of the grandchild’s voice from social media to dupe a grandparent into sending money.
  • companies like Alphabet, the parent company of Google, are trying to spin the altering of personal images as a good thing. 
  • With its latest Pixel phone, the company unveiled a suite of new and upgraded tools that can automatically replace a person’s face in one image with their face from another, or quickly remove someone from a photo entirely.
  • Joseph Stalin, who was fond of erasing people he didn’t like from official photos, would have loved this technology.
  • In Google’s defense, it is adding a record of whether an image was altered to data attached to it. But such metadata is only accessible in the original photo and some copies, and is easy enough to strip out. (A minimal sketch of how easily it can be stripped appears after this list.)
  • The rapid adoption of many different AI tools means that we are now forced to question everything that we are exposed to in any medium, from our immediate communities to the geopolitical, said Hany Farid, a professor at the University of California, Berkeley who
  • To put our current moment in historical context, he notes that the PC revolution made it easy to store and replicate information, the internet made it easy to publish it, the mobile revolution made it easier than ever to access and spread, and the rise of AI has made creating misinformation a cinch. And each revolution arrived faster than the one before it.
  • Not everyone agrees that arming the public with easy access to AI will exacerbate our current difficulties with misinformation. The primary argument of such experts is that there is already vastly more misinformation on the internet than a person can consume, so throwing more into the mix won’t make things worse.
  • it’s not exactly reassuring, especially given that trust in institutions is already at one of the lowest points in the past 70 years, according to the nonpartisan Pew Research Center, and polarization—a measure of how much we distrust one another—is at a high point.
  • “What happens when we have eroded trust in media, government, and experts?” says Farid. “If you don’t trust me and I don’t trust you, how do we respond to pandemics, or climate change, or have fair and open elections? This is how authoritarianism arises—when you erode trust in institutions.”
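One concrete point in the annotations above is worth spelling out: a record of edits stored as image metadata does not survive a plain re-save of the pixels. Below is a minimal sketch, assuming the Pillow library is installed; "photo.jpg" is a hypothetical input path, and this is a generic illustration rather than a statement about Google's specific provenance format.

```python
# Minimal sketch of why metadata-based edit records are fragile: copying only
# the pixel data into a new file leaves EXIF and other metadata behind.
# "photo.jpg" is a hypothetical input path.
from PIL import Image

with Image.open("photo.jpg") as im:
    print("metadata keys before:", list(im.info.keys()))   # e.g. 'exif', 'icc_profile'
    stripped = Image.new(im.mode, im.size)
    stripped.putdata(list(im.getdata()))    # pixels only, no metadata comes along
    stripped.save("photo_stripped.jpg")

with Image.open("photo_stripped.jpg") as im2:
    print("metadata keys after:", list(im2.info.keys()))
```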
Javier E

Two recent surveys show AI will do more harm than good - The Washington Post - 0 views

  • A Monmouth University poll released last week found that only 9 percent of Americans believed that computers with artificial intelligence would do more good than harm to society.
  • When the same question was asked in a 1987 poll, a higher share of respondents – about one in five – said AI would do more good than harm,
  • In other words, people have less unqualified confidence in AI now than they did 35 years ago, when the technology was more science fiction than reality.
  • ...8 more annotations...
  • The Pew Research Center survey asked people different questions but found similar doubts about AI. Just 15 percent of respondents said they were more excited than concerned about the increasing use of AI in daily life.
  • “It’s fantastic that there is public skepticism about AI. There absolutely should be,” said Meredith Broussard, an artificial intelligence researcher and professor at New York University.
  • Broussard said there can be no way to design artificial intelligence software to make inherently human decisions, like grading students’ tests or determining the course of medical treatment.
  • Most Americans essentially agree with Broussard that AI has a place in our lives, but not for everything.
  • Most people said it was a bad idea to use AI for military drones that try to distinguish between enemies and civilians or trucks making local deliveries without human drivers. Most respondents said it was a good idea for machines to perform risky jobs such as coal mining.
  • Roman Yampolskiy, an AI specialist at the University of Louisville engineering school, told me he’s concerned about how quickly technologists are building computers that are designed to “think” like the human brain and apply knowledge not just in one narrow area, like recommending Netflix movies, but for complex tasks that have tended to require human intelligence.
  • “We have an arms race between multiple untested technologies. That is my concern,” Yampolskiy said. (If you want to feel terrified, I recommend Yampolskiy’s research paper on the inability to control advanced AI.)
  • The term “AI” is a catch-all for everything from relatively uncontroversial technology, such as autocomplete in your web search queries, to the contentious software that promises to predict crime before it happens. Our fears about the latter might be overwhelming our beliefs about the benefits from more mundane AI.
Javier E

Google Devising Radical Search Changes to Beat Back AI Rivals - The New York Times - 0 views

  • Google’s employees were shocked when they learned in March that the South Korean consumer electronics giant Samsung was considering replacing Google with Microsoft’s Bing as the default search engine on its devices.
  • Google’s reaction to the Samsung threat was “panic,” according to internal messages reviewed by The New York Times. An estimated $3 billion in annual revenue was at stake with the Samsung contract. An additional $20 billion is tied to a similar Apple contract that will be up for renewal this year.
  • A.I. competitors like the new Bing are quickly becoming the most serious threat to Google’s search business in 25 years, and in response, Google is racing to build an all-new search engine powered by the technology. It is also upgrading the existing one with A.I. features, according to internal documents reviewed by The Times.
  • ...14 more annotations...
  • The Samsung threat represented the first potential crack in Google’s seemingly impregnable search business, which was worth $162 billion last year.
  • Modernizing its search engine has become an obsession at Google, and the planned changes could put new A.I. technology in phones and homes all over the world.
  • Google has been worried about A.I.-powered competitors since OpenAI, a San Francisco start-up that is working with Microsoft, demonstrated a chatbot called ChatGPT in November. About two weeks later, Google created a task force in its search division to start building A.I. products,
  • Google has been doing A.I. research for years. Its DeepMind lab in London is considered one of the best A.I. research centers in the world, and the company has been a pioneer with A.I. projects, such as self-driving cars and the so-called large language models that are used in the development of chatbots. In recent years, Google has used large language models to improve the quality of its search results, but held off on fully adopting A.I. because it has been prone to generating false and biased statements.
  • Now the priority is winning control of the industry’s next big thing. Last month, Google released its own chatbot, Bard, but the technology received mixed reviews.
  • The system would learn what users want to know based on what they’re searching when they begin using it. And it would offer lists of preselected options for objects to buy, information to research and other information. It would also be more conversational — a bit like chatting with a helpful person.
  • Magi would keep ads in the mix of search results. Search queries that could lead to a financial transaction, such as buying shoes or booking a flight, for example, would still feature ads on their results pages.
  • Last week, Google invited some employees to test Magi’s features, and it has encouraged them to ask the search engine follow-up questions to judge its ability to hold a conversation. Google is expected to release the tools to the public next month and add more features in the fall, according to the planning document.
  • The company plans to initially release the features to a maximum of one million people. That number should progressively increase to 30 million by the end of the year. The features will be available exclusively in the United States.
  • Google has also explored efforts to let people use Google Earth’s mapping technology with help from A.I. and search for music through a conversation with a chatbot
  • A tool called GIFI would use A.I. to generate images in Google Image results.
  • Tivoli Tutor, would teach users a new language through open-ended A.I. text conversations.
  • Yet another product, Searchalong, would let users ask a chatbot questions while surfing the web through Google’s Chrome browser. People might ask the chatbot for activities near an Airbnb rental, for example, and the A.I. would scan the page and the rest of the internet for a response.
  • “If we are the leading search engine and this is a new attribute, a new feature, a new characteristic of search engines, we want to make sure that we’re in this race as well,”