History Readings: group items tagged "artificial"

Javier E

A.I. Poses 'Risk of Extinction,' Industry Leaders Warn - The New York Times

  • “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement released by the Center for AI Safety, a nonprofit organization.
  • The open letter has been signed by more than 350 executives, researchers and engineers working in A.I.
  • The signatories included top executives from three of the leading A.I. companies: Sam Altman, chief executive of OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei, chief executive of Anthropic.
  • These fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building — and, in many cases, are furiously racing to build faster than their competitors — poses grave risks and should be regulated more tightly.
  • Dan Hendrycks, the executive director of the Center for AI Safety, said in an interview that the open letter represented a “coming-out” for some industry leaders who had expressed concerns — but only in private — about the risks of the technology they were developing.
  • “There’s a very common misconception, even in the A.I. community, that there only are a handful of doomers,” Mr. Hendrycks said. “But, in fact, many people privately would express concerns about these things.”
  • Some skeptics argue that A.I. technology is still too immature to pose an existential threat. When it comes to today’s A.I. systems, they worry more about short-term problems, such as biased and incorrect responses, than longer-term dangers.
  • But others have argued that A.I. is improving so rapidly that it has already surpassed human-level performance in some areas, and it will soon surpass it in others. They say the technology has showed signs of advanced capabilities and understanding, giving rise to fears that “artificial general intelligence,” or A.G.I., a type of artificial intelligence that can match or exceed human-level performance at a wide variety of tasks, may not be far-off.
  • In a blog post last week, Mr. Altman and two other OpenAI executives proposed several ways that powerful A.I. systems could be responsibly managed. They called for cooperation among the leading A.I. makers, more technical research into large language models and the formation of an international A.I. safety organization, similar to the International Atomic Energy Agency, which seeks to control the use of nuclear weapons.
  • Mr. Altman has also expressed support for rules that would require makers of large, cutting-edge A.I. models to register for a government-issued license.
  • The brevity of the new statement from the Center for AI Safety — just 22 words in all — was meant to unite A.I. experts who might disagree about the nature of specific risks or steps to prevent those risks from occurring, but who shared general concerns about powerful A.I. systems, Mr. Hendrycks said.
  • “We didn’t want to push for a very large menu of 30 potential interventions,” Mr. Hendrycks said. “When that happens, it dilutes the message.”
  • The statement was initially shared with a few high-profile A.I. experts, including Geoffrey Hinton, who quit his job at Google this month so that he could speak more freely, he said, about the potential harms of artificial intelligence. From there, it made its way to several of the major A.I. labs, where some employees then signed on.
Javier E

Opinion | Climate Change, Deglobalization, Demographics, AI: The Forces Really Driving ...

  • Economists tried to deal with the twin stresses of inflation and recession in the 1970s without success, and now here we are, 50 years and 50-plus economics Nobel Prizes later, with little ground gained
  • There’s weirdness yet to come, and a lot more than run-of-the-mill weirdness. We are entering a new epoch of crisis, a slow-motion tidal wave of risks that will wash over our economy in the next decades — namely climate change, demographics, deglobalization and artificial intelligence.
  • Their effects will range somewhere between economic regime shift and existential threat to civilization.
  • For climate, we already are seeing a glimpse of what is to come: drought, floods and far more extreme storms than in the recent past. We saw some of the implications over the past year, with supply chains broken because rivers were too dry for shipping and hydroelectric and nuclear power impaired.
  • As with climate change, demographic shifts determine societal ones, like straining the social contract between the working and the aged.
  • We are reversing the globalization of the past 40 years, with the links in our geopolitical and economic network fraying. “Friendshoring,” or moving production to friendly countries, is a new term. The geopolitical forces behind deglobalization will amplify the stresses from climate change and demographics to lead to a frenzied competition for resources and consumers.
  • The problem here, and a problem broadly with complex and dynamic systems, is that the whole doesn’t look like the sum of the parts. If you have a lot of people running around, the overall picture can look different than what any one of those people is doing. Maybe in aggregate their actions jam the doorway; maybe in aggregate they create a stampede
  • if we can’t get a firm hold on pedestrian economic issues like inflation and recession — the prospects are not bright for getting our forecasts right for these existential forces.
  • The problem is that the models don’t work when our economy is weird. And that’s precisely when we most need them to work.
  • The fourth, artificial intelligence, is a wild card. But we already are seeing risks for work and privacy, and for frightening advances in warfare.
  • A key reason these models fail in times of crisis is that they can’t deal with a world filled with complexity or with surprising twists and turns.
  • Economics failed with the 2008 crisis because economic theory has established that it cannot predict such crises.
  • we are not a mechanical system. We are humans who innovate, change with our experiences, and at times game the system
  • Reflecting on the 1987 market crash, the brilliant physicist Richard Feynman remarked on the difficulty facing economists by noting that subatomic particles don’t act based on what they think other subatomic particles are planning — but people do that.
  • What if economists can’t turn things around? This is a possibility because we are walking into a world unlike any we have seen. We can’t anticipate all the ways climate change might affect us or where our creativity will take us with A.I. Which brings us to what is called radical uncertainty, where we simply have no clue — where we are caught unaware by things we haven’t even thought of.
  • This possibility is not much on the minds of economists
  • How do we deal with risks we cannot even define? A good start is to move away from the economist’s palette of efficiency and rationality and instead look at examples of survival in worlds of radical uncertainty.
  • In our time savannas are turning to deserts. The alternative to the economist’s model is to take a coarse approach, to be more adaptable — leave some short-term fine-tuning and optimization by the wayside
  • Our long term might look brighter if we act like cockroaches. An insect fine-tuned for a jungle may dominate the cockroach in that environment. But once the world changes and the jungle disappears, that insect will disappear as well.
Javier E

Opinion | Ozempic Is Repairing a Hole in Our Diets Created by Processed Foods - The New...

  • In the United States (where I now split my time), over 70 percent of people are overweight or obese, and according to one poll, 47 percent of respondents said they were willing to pay to take the new weight-loss drugs.
  • They cause users to lose an average of 10 to 20 percent of their body weight, and clinical trials suggest that the next generation of drugs (probably available soon) leads to a 24 percent loss, on average
  • I was born in 1979, and by the time I was 21, obesity rates in the United States had more than doubled. They have skyrocketed since. The obvious question is, why? And how do these new weight-loss drugs work?
  • The answer to both lies in one word: satiety. It’s a concept that we don’t use much in everyday life but that we’ve all experienced at some point. It describes the sensation of having had enough and not wanting any more.
  • The primary reason we have gained weight at a pace unprecedented in human history is that our diets have radically changed in ways that have deeply undermined our ability to feel sated
  • The evidence is clear that the kind of food my father grew up eating quickly makes you feel full. But the kind of food I grew up eating, much of which is made in factories, often with artificial chemicals, left me feeling empty and as if I had a hole in my stomach
  • In a recent study of what American children eat, ultraprocessed food was found to make up 67 percent of their daily diet. This kind of food makes you want to eat more and more. Satiety comes late, if at all.
  • After moving to the United States in 2000, in his 20s, Dr. Kenny gained 30 pounds in two years. He began to wonder if the American diet has some kind of strange effect on our brains and our cravings, so he designed an experiment to test it.
  • He and his colleague Paul Johnson raised a group of rats in a cage and gave them an abundant supply of healthy, balanced rat chow made out of the kind of food rats had been eating for a very long time. The rats would eat it when they were hungry, and then they seemed to feel sated and stopped. They did not become fat.
  • then Dr. Kenny and his colleague exposed the rats to an American diet: fried bacon, Snickers bars, cheesecake and other treats. They went crazy for it. The rats would hurl themselves into the cheesecake, gorge themselves and emerge with their faces and whiskers totally slicked with it. They quickly lost almost all interest in the healthy food, and the restraint they used to show around healthy food disappeared. Within six weeks, their obesity rates soared.
  • They took all the processed food away and gave the rats their old healthy diet. Dr. Kenny was confident that they would eat more of it, proving that processed food had expanded their appetites. But something stranger happened. It was as though the rats no longer recognized healthy food as food at all, and they barely ate it. Only when they were starving did they reluctantly start to consume it again.
  • Drugs like Ozempic work precisely by making us feel full.
  • processed and ultraprocessed food create a raging hole of hunger, and these treatments can repair that hole
  • the drugs are “an artificial solution to an artificial problem.”
  • Yet we have reacted to this crisis largely caused by the food industry as if it were caused only by individual moral dereliction
  • Why do we turn our anger inward and not outward at the main cause of the crisis? And by extension, why do we seek to shame people taking Ozempic but not those who, say, take drugs to lower their blood pressure?
  • The first is the belief that obesity is a sin.
  • The second idea is that we are all in a competition when it comes to weight. Ours is a society full of people fighting against the forces in our food that are making us fatter.
  • Looked at in this way, people on Ozempic can resemble cyclists like Lance Armstrong who used performance-enhancing drugs.
  • We can’t find our way to a sane, nontoxic conversation about obesity or Ozempic until we bring these rarely spoken thoughts into the open and reckon with them
  • remember the competition isn’t between you and your neighbor who’s on weight-loss drugs. It’s between you and a food industry constantly designing new ways to undermine your satiety.
  • Reducing or reversing obesity hugely boosts health, on average: We know from years of studying bariatric surgery that it slashes the risks of cancer, heart disease and diabetes-related death. Early indications are that the new anti-obesity drugs are moving people in a similar radically healthier direction,
  • But these drugs may increase the risk for thyroid cancer.
  • Do we want these weight loss drugs to be another opportunity to tear one another down? Or do we want to realize that the food industry has profoundly altered the appetites of us all — leaving us trapped in the same cage, scrambling to find a way out?
Javier E

Yuval Noah Harari's Apocalyptic Vision - The Atlantic

  • He shares with Jared Diamond, Steven Pinker, and Slavoj Žižek a zeal for theorizing widely, though he surpasses them in his taste for provocative simplifications.
  • In medieval Europe, he explains, “Knowledge = Scriptures x Logic,” whereas after the scientific revolution, “Knowledge = Empirical Data x Mathematics.”
  • Silicon Valley’s recent inventions invite galaxy-brain cogitation of the sort Harari is known for. The larger you feel the disruptions around you to be, the further back you reach for fitting analogies
  • Have such technological leaps been good? Harari has doubts. Humans have “produced little that we can be proud of,” he complained in Sapiens. His next books, Homo Deus: A Brief History of Tomorrow (2015) and 21 Lessons for the 21st Century (2018), gazed into the future with apprehension
  • Harari has written another since-the-dawn-of-time overview, Nexus: A Brief History of Information Networks From the Stone Age to AI. It’s his grimmest work yet
  • Harari rejects the notion that more information leads automatically to truth or wisdom. But it has led to artificial intelligence, whose advent Harari describes apocalyptically. “If we mishandle it,” he warns, “AI might extinguish not only the human dominion on Earth but the light of consciousness itself, turning the universe into a realm of utter darkness.”
  • Those seeking a precedent for AI often bring up the movable-type printing press, which inundated Europe with books and led, they say, to the scientific revolution. Harari rolls his eyes at this story. Nothing guaranteed that printing would be used for science, he notes
  • Copernicus’s On the Revolutions of the Heavenly Spheres failed to sell its puny initial print run of about 500 copies in 1543. It was, the writer Arthur Koestler joked, an “all-time worst seller.”
  • The book that did sell was Heinrich Kramer’s The Hammer of the Witches (1486), which ranted about a supposed satanic conspiracy of sexually voracious women who copulated with demons and cursed men’s penises. The historian Tamar Herzig describes Kramer’s treatise as “arguably the most misogynistic text to appear in print in premodern times.” Yet it was “a bestseller by early modern standards,”
  • Kramer’s book encouraged the witch hunts that killed tens of thousands. These murderous sprees, Harari observes, were “made worse” by the printing press.
  • Ampler information flows made surveillance and tyranny worse too, Harari argues. The Soviet Union was, among other things, “one of the most formidable information networks in history,”
  • Information has always carried this destructive potential, Harari believes. Yet up until now, he argues, even such hellish episodes have been only that: episodes
  • Demagogic manias like the ones Kramer fueled tend to burn bright and flame out.
  • States ruled by top-down terror have a durability problem too, Harari explains. Even if they could somehow intercept every letter and plant informants in every household, they’d still need to intelligently analyze all of the incoming reports. No regime has come close to managing this
  • for the 20th-century states that got nearest to total control, persistent problems managing information made basic governance difficult.
  • So it was, at any rate, in the age of paper. Collecting data is now much, much easier.
  • Some people worry that the government will implant a chip in their brain, but they should “instead worry about the smartphones on which they read these conspiracy theories,” Harari writes. Phones can already track our eye movements, record our speech, and deliver our private communications to nameless strangers. They are listening devices that, astonishingly, people are willing to leave by the bedside while having sex.
  • Harari’s biggest worry is what happens when AI enters the chat. Currently, massive data collection is offset, as it has always been, by the difficulties of data analysis
  • What defense could there be against an entity that recognized every face, knew every mood, and weaponized that information?
  • Today’s political deliriums are stoked by click-maximizing algorithms that steer people toward “engaging” content, which is often whatever feeds their righteous rage.
  • Imagine what will happen, Harari writes, when bots generate that content themselves, personalizing and continually adjusting it to flood the dopamine receptors of each user.
  • Kramer’s Hammer of the Witches will seem like a mild sugar high compared with the heroin rush of content the algorithms will concoct. If AI seizes command, it could make serfs or psychopaths of us all.
  • Harari regards AI as ultimately unfathomable—and that is his concern.
  • Although we know how to make AI models, we don’t understand them. We’ve blithely summoned an “alien intelligence,” Harari writes, with no idea what it will do.
  • Last year, Harari signed an open letter warning of the “profound risks to society and humanity” posed by unleashing “powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.” It called for a pause of at least six months on training advanced AI systems,
  • cynics saw the letter as self-serving. It fed the hype by insisting that artificial intelligence, rather than being a buggy product with limited use, was an epochal development. It showcased tech leaders’ Oppenheimer-style moral seriousness
  • it cost them nothing, as there was no chance their research would actually stop. Four months after signing, Elon Musk publicly launched an AI company.
  • The economics of the Information Age have been treacherous. They’ve made content cheaper to consume but less profitable to produce. Consider the effect of the free-content and targeted-advertising models on journalism
  • Since 2005, the United States has lost nearly a third of its newspapers and more than two-thirds of its newspaper jobs, to the point where nearly 7 percent of newspaper employees now work for a single organization, The New York Times
  • we speak of “news deserts,” places where reporting has essentially vanished.
  • AI threatens to exacerbate this. With better chatbots, platforms won’t need to link to external content, because they’ll reproduce it synthetically. Instead of a Google search that sends users to outside sites, a chatbot query will summarize those sites, keeping users within Google’s walled garden.
  • a Truman Show–style bubble: personally generated content, read by voices that sound real but aren’t, plus product placement
  • this would cut off writers and publishers—the ones actually generating ideas—from readers. Our intellectual institutions would wither, and the internet would devolve into a closed loop of “five giant websites, each filled with screenshots of the other four,” as the software engineer Tom Eastman puts it.
  • Harari is Silicon Valley’s ideal of what a chatbot should be. He raids libraries, detects the patterns, and boils all of history down to bullet points. (Modernity, he writes, “can be summarised in a single phrase: humans agree to give up meaning in exchange for power.”)
  • Individual AI models cost billions of dollars. In 2023, about a fifth of venture capital in North America and Europe went to AI. Such sums make sense only if tech firms can earn enormous revenues off their product, by monopolizing it or marketing it. And at that scale, the most obvious buyers are other large companies or governments. How confident are we that giving more power to corporations and states will turn out well?
  • He discusses it as something that simply happened. Its arrival is nobody’s fault in particular.
  • In Harari’s view, “power always stems from cooperation between large numbers of humans”; it is the product of society.
  • like a chatbot, he has a quasi-antagonistic relationship with his sources, an “I’ll read them so you don’t have to” attitude. He mines other writers for material—a neat quip, a telling anecdote—but rarely seems taken with anyone else’s view
  • Hand-wringing about the possibility that AI developers will lose control of their creation, like the sorcerer’s apprentice, distracts from the more plausible scenario that they won’t lose control, and that they’ll use or sell it as planned. A better German fable might be Richard Wagner’s The Ring of the Nibelung: a power-hungry incel forges a ring that will let its owner rule the world—and the gods wage war over it.
  • Harari’s eyes are more on the horizon than on Silicon Valley’s economics or politics.
  • In Nexus, he proposes four principles. The first is “benevolence,” explained thus: “When a computer network collects information on me, that information should be used to help me rather than manipulate me.”
  • Harari’s other three values are decentralization of informational channels, accountability from those who collect our data, and some respite from algorithmic surveillance
  • these are fine, but they are quick, unsurprising, and—especially when expressed in the abstract, as things that “we” should all strive for—not very helpful.
  • though his persistent first-person pluralizing (“decisions we all make”) softly suggests that AI is humanity’s collective creation rather than the product of certain corporations and the individuals who run them. This obscures the most important actors in the drama—ironically, just as those actors are sapping our intellectual life, hampering the robust, informed debates we’d need in order to make the decisions Harari envisions.
  • Taking AI seriously might mean directly confronting the companies developing it
  • Harari slots easily into the dominant worldview of Silicon Valley. Despite his oft-noted digital abstemiousness, he exemplifies its style of gathering and presenting information. And, like many in that world, he combines technological dystopianism with political passivity.
  • Although he thinks tech giants, in further developing AI, might end humankind, he does not treat thwarting them as an urgent priority. His epic narratives, told as stories of humanity as a whole, do not make much room for such us-versus-them clashes.
Javier E

A.I. Pioneers Call for Protections Against 'Catastrophic Risks'

  • “Both countries are hugely suspicious of each other’s intentions,” said Matt Sheehan, a fellow at the Carnegie Endowment for International Peace, who was not part of the dialogue. “They’re worried that if they pump the brakes because of safety concerns, that will allow the other to zoom ahead,” Mr. Sheehan said. “That suspicion is just going to be baked in.”
  • In an interview, Dr. Bengio, one of the founding members of the group, cited talks between American and Soviet scientists at the height of the Cold War that helped bring about coordination to avert nuclear catastrophe. In both cases, the scientists involved felt an obligation to help close the Pandora’s box opened by their research.
  • Technology is changing so quickly that it is difficult for individual companies and governments to decide how to approach it, and collaboration is crucial, said Fu Hongyu, the director of A.I. governance at Alibaba’s research institute, AliResearch, who did not participate in the dialogue.
  • In a broader government initiative, representatives from 28 countries signed a declaration in Britain last November, agreeing to cooperate on evaluating the risks of artificial intelligence. They met again in Seoul in May. But these gatherings have stopped short of setting specific policy goals.
  • President Biden and China’s leader, Xi Jinping, agreed when they met last year that officials from both countries should hold talks on A.I. safety. The first took place in Geneva in May.
  • Last October, President Biden signed an executive order that required companies to report to the federal government about the risks that their A.I. systems could pose, like their ability to create weapons of mass destruction or potential to be used by terrorists.
  • Government officials in both China and the United States have made artificial intelligence a priority in the past year. In July, a Chinese Communist Party conclave that takes place every five years called for a system to regulate A.I. safety. Last week, an influential technical standards group in China published an A.I. safety framework.
  • Among the signatories was Yoshua Bengio, whose work is so often cited that he is called one of the godfathers of the field. There was Andrew Yao, whose course at Tsinghua University in Beijing has minted the founders of many of China’s top tech companies. Geoffrey Hinton, a pioneering scientist who spent a decade at Google, participated remotely. All three are winners of the Turing Award, the equivalent of the Nobel Prize for computing.
  • The group also included scientists from several of China’s leading A.I. research institutions, some of which are state-funded and advise the government. A few former government officials joined, including Fu Ying, who had been a Chinese foreign ministry official and diplomat, and Mary Robinson, the former president of Ireland. Earlier this year, the group met in Beijing, where they briefed senior Chinese government officials on their discussion.
  • Governments need to know what is going on at the research labs and companies working on A.I. systems in their countries, the group said in its statement. And they need a way to communicate about potential risks that does not require companies or researchers to share proprietary information with competitors.
  • If A.I. systems anywhere in the world were to develop these abilities today, there is no plan for how to rein them in, said Gillian Hadfield, a legal scholar and professor of computer science and government at Johns Hopkins University.
  • “If we had some sort of catastrophe six months from now, if we do detect there are models that are starting to autonomously self-improve, who are you going to call?” Dr. Hadfield said.
  • In a statement on Monday, a group of influential A.I. scientists raised concerns that the technology they helped build could cause serious harm. They warned that A.I. technology could, within a matter of years, overtake the capabilities of its makers and that “loss of human control or malicious use of these A.I. systems could lead to catastrophic outcomes for all of humanity.”
  • Scientists who helped pioneer artificial intelligence are warning that countries must create a global system of oversight to check the potentially grave risks posed by the fast-developing technology.
Javier E

Defeated by A.I., a Legend in the Board Game Go Warns: Get Ready for What's Next - The ...

  • Lee Saedol was the finest Go player of his generation when he suffered a decisive loss, defeated not by a human opponent but by artificial intelligence.
  • The stunning upset, in 2016, made headlines around the world and looked like a clear sign that artificial intelligence was entering a new, profoundly unsettling era.
  • By besting Mr. Lee, an 18-time world champion revered for his intuitive and creative style of play, AlphaGo had solved one of computer science’s greatest challenges: teaching itself the abstract strategy needed to win at Go, widely considered the world’s most complex board game.
  • AlphaGo’s victory demonstrated the unbridled potential of A.I. to achieve superhuman mastery of skills once considered too complicated for machines.
  • Mr. Lee, now 41, retired three years later, convinced that humans could no longer compete with computers at Go. Artificial intelligence, he said, had changed the very nature of a game that originated in China more than 2,500 years ago.
  • As society wrestles with what A.I. holds for humanity’s future, Mr. Lee is now urging others to avoid being caught unprepared, as he was, and to become familiar with the technology now. He delivers lectures about A.I., trying to give others the advance notice he wishes he had received before his match.
  • “I faced the issues of A.I. early, but it will happen for others,” Mr. Lee said recently at a community education fair in Seoul to a crowd of students and parents. “It may not be a happy ending.”
  • Mr. Lee is not a doomsayer. In his view, A.I. may replace some jobs, but it may create some, too. When considering A.I.’s grasp of Go, he said it was important to remember that humans both created the game and designed the A.I. system that mastered it.
  • What he worries about is that A.I. may change what humans value.
  • His immense talent was apparent from the start. He quickly became the best player of his age not only locally but across all of South Korea, Japan and China. He turned pro at 12.
  • “People used to be in awe of creativity, originality and innovation,” he said. “But since A.I. came, a lot of that has disappeared.”
  • By the time he was 20, Mr. Lee had reached 9-dan, the highest level of mastery in Go. Soon, he was among the best players in the world, described by some as the Roger Federer of the game.
  • Go posed a tantalizing challenge for A.I. researchers. The game is exponentially more complicated than chess, with it often being said that there are more possible positions on a Go board (10 with more than 100 zeros after it, by many mathematical estimates) than there are atoms in the universe.
  • The breakthrough came from DeepMind, which built AlphaGo using so-called neural networks: mathematical systems that can learn skills by analyzing enormous amounts of data. It started by feeding the network 30 million moves from high-level players. Then the program played game after game against itself until it learned which moves were successful and developed new strategies. (A code sketch of this two-stage recipe follows this list.)
  • Mr. Lee said not having a true human opponent was disconcerting. AlphaGo played a style he had never seen, and it felt odd to not try to decipher what his opponent was thinking and feeling. The world watched in awe as AlphaGo pushed Mr. Lee into corners and made moves unthinkable to a human player. “I couldn’t get used to it,” he said. “I thought that A.I. would beat humans someday. I just didn’t think it was here yet.”
  • AlphaGo’s victory “was a watershed moment in the history of A.I.” said Demis Hassabis, DeepMind’s chief executive, in a written statement. It showed what computers that learn on their own from data “were really capable of,” he said.
  • Mr. Lee had a hard time accepting the defeat. What he regarded as an art form, an extension of a player’s own personality and style, was now cast aside for an algorithm’s ruthless efficiency.
  • His 17-year-old daughter is in her final year of high school. When they discuss what she should study at university, they often consider a future shaped by A.I. “We often talk about choosing a job that won’t be easily replaceable by A.I. or less impacted by A.I.,” he said. “It’s only a matter of time before A.I. is present everywhere.”
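A minimal, runnable Python sketch of the two-stage recipe described in the DeepMind annotation above: first the program imitates expert moves (supervised learning), then it plays against itself and reinforces whichever moves win (self-play). Everything in it is a hypothetical stand-in invented for illustration (the pick-a-number game, the Policy class, the reinforce rule), not DeepMind's actual code or architecture.

import random
from collections import defaultdict

MOVES = [1, 2, 3]

class Policy:
    """Toy stand-in for a policy network: one learned weight per move."""

    def __init__(self):
        self.weights = defaultdict(lambda: 1.0)

    def choose(self):
        # Sample a move in proportion to its learned weight.
        total = sum(self.weights[m] for m in MOVES)
        r = random.uniform(0, total)
        for m in MOVES:
            r -= self.weights[m]
            if r <= 0:
                return m
        return MOVES[-1]

    def reinforce(self, move, amount=0.5):
        # Crude update: make this move more likely to be chosen again.
        self.weights[move] += amount

def play(p1, p2):
    """One 'game': each side picks a number; the higher pick wins."""
    a, b = p1.choose(), p2.choose()
    return a, b, (1 if a >= b else 2)

policy = Policy()

# Stage 1: supervised learning -- imitate moves from expert games
# (AlphaGo's equivalent: 30 million moves from high-level human players).
for expert_move in [3] * 100:
    policy.reinforce(expert_move)

# Stage 2: self-play -- play against itself, reinforce the winner's move.
for _ in range(10_000):
    a, b, winner = play(policy, policy)
    policy.reinforce(a if winner == 1 else b)

print({m: round(policy.weights[m], 1) for m in MOVES})  # weight piles up on move 3

In the real system the weight table is a deep neural network, the game is Go rather than pick-a-number, and the learning signal comes from the outcomes of millions of full self-play games, but the imitate-then-self-improve structure is the same.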
Javier E

'Never summon a power you can't control': Yuval Noah Harari on how AI could threaten de...

  • The Phaethon myth and Goethe’s poem fail to provide useful advice because they misconstrue the way humans gain power. In both fables, a single human acquires enormous power, but is then corrupted by hubris and greed. The conclusion is that our flawed individual psychology makes us abuse power.
  • What this crude analysis misses is that human power is never the outcome of individual initiative. Power always stems from cooperation between large numbers of humans. Accordingly, it isn’t our individual psychology that causes us to abuse power.
  • Our tendency to summon powers we cannot control stems not from individual psychology but from the unique way our species cooperates in large numbers. Humankind gains enormous power by building large networks of cooperation, but the way our networks are built predisposes us to use power unwisely
  • We are also producing ever more powerful weapons of mass destruction, from thermonuclear bombs to doomsday viruses. Our leaders don’t lack information about these dangers, yet instead of collaborating to find solutions, they are edging closer to a global war.
  • Despite – or perhaps because of – our hoard of data, we are continuing to spew greenhouse gases into the atmosphere, pollute rivers and oceans, cut down forests, destroy entire habitats, drive countless species to extinction, and jeopardise the ecological foundations of our own species
  • For most of our networks have been built and maintained by spreading fictions, fantasies and mass delusions – ranging from enchanted broomsticks to financial systems. Our problem, then, is a network problem. Specifically, it is an information problem. For information is the glue that holds networks together, and when people are fed bad information they are likely to make bad decisions, no matter how wise and kind they personally are.
  • Traditionally, the term “AI” has been used as an acronym for artificial intelligence. But it is perhaps better to think of it as an acronym for alien intelligence
  • AI is an unprecedented threat to humanity because it is the first technology in history that can make decisions and create new ideas by itself. All previous human inventions have empowered humans, because no matter how powerful the new tool was, the decisions about its usage remained in our hands
  • Nuclear bombs do not themselves decide whom to kill, nor can they improve themselves or invent even more powerful bombs. In contrast, autonomous drones can decide by themselves who to kill, and AIs can create novel bomb designs, unprecedented military strategies and better AIs.
  • AI isn’t a tool – it’s an agent. The biggest threat of AI is that we are summoning to Earth countless new powerful agents that are potentially more intelligent and imaginative than us, and that we don’t fully understand or control.
  • Entrepreneurs such as Yoshua Bengio, Geoffrey Hinton, Sam Altman, Elon Musk and Mustafa Suleyman have warned that AI could destroy our civilisation. In a 2023 survey of 2,778 AI researchers, more than a third gave at least a 10% chance of advanced AI leading to outcomes as bad as human extinction.
  • As AI evolves, it becomes less artificial (in the sense of depending on human designs) and more alien
  • AI isn’t progressing towards human-level intelligence. It is evolving an alien type of intelligence.
  • generative AIs like GPT-4 already create new poems, stories and images. This trend will only increase and accelerate, making it more difficult to understand our own lives. Can we trust computer algorithms to make wise decisions and create a better world? That’s a much bigger gamble than trusting an enchanted broom to fetch water
  • it is more than just human lives we are gambling on. AI is already capable of producing art and making scientific discoveries by itself. In the next few decades, it will be likely to gain the ability even to create new life forms, either by writing genetic code or by inventing an inorganic code animating inorganic entities. AI could therefore alter the course not just of our species’ history but of the evolution of all life forms.
  • “Then … came move number 37,” writes Suleyman. “It made no sense. AlphaGo had apparently blown it, blindly following an apparently losing strategy no professional player would ever pursue. The live match commentators, both professionals of the highest ranking, said it was a ‘very strange move’ and thought it was ‘a mistake’.
  • as the endgame approached, that ‘mistaken’ move proved pivotal. AlphaGo won again. Go strategy was being rewritten before our eyes. Our AI had uncovered ideas that hadn’t occurred to the most brilliant players in thousands of years.”
  • “In AI, the neural networks moving toward autonomy are, at present, not explainable. You can’t walk someone through the decision-making process to explain precisely why an algorithm produced a specific prediction. Engineers can’t peer beneath the hood and easily explain in granular detail what caused something to happen. GPT‑4, AlphaGo and the rest are black boxes, their outputs and decisions based on opaque and impossibly intricate chains of minute signals.”
  • Yet during all those millennia, human minds have explored only certain areas in the landscape of Go. Other areas were left untouched, because human minds just didn’t think to venture there. AI, being free from the limitations of human minds, discovered and explored these previously hidden areas.
  • Second, move 37 demonstrated the unfathomability of AI. Even after AlphaGo played it to achieve victory, Suleyman and his team couldn’t explain how AlphaGo decided to play it.
  • Move 37 is an emblem of the AI revolution for two reasons. First, it demonstrated the alien nature of AI. In east Asia, Go is considered much more than a game: it is a treasured cultural tradition. For more than 2,500 years, tens of millions of people have played Go, and entire schools of thought have developed around the game, espousing different strategies and philosophies
  • The rise of unfathomable alien intelligence poses a threat to all humans, and poses a particular threat to democracy. If more and more decisions about people’s lives are made in a black box, so voters cannot understand and challenge them, democracy ceases to function.
  • Human voters may keep choosing a human president, but wouldn’t this be just an empty ceremony? Even today, only a small fraction of humanity truly understands the financial system
  • As the 2007‑8 financial crisis indicated, some complex financial devices and principles were intelligible to only a few financial wizards. What happens to democracy when AIs create even more complex financial devices and when the number of humans who understand the financial system drops to zero?
  • Translating Goethe’s cautionary fable into the language of modern finance, imagine the following scenario: a Wall Street apprentice fed up with the drudgery of the financial workshop creates an AI called Broomstick, provides it with a million dollars in seed money, and orders it to make more money.
  • In pursuit of more dollars, Broomstick not only devises new investment strategies, but comes up with entirely new financial devices that no human being has ever thought about.
  • many financial areas were left untouched, because human minds just didn’t think to venture there. Broomstick, being free from the limitations of human minds, discovers and explores these previously hidden areas, making financial moves that are the equivalent of AlphaGo’s move 37.
  • For a couple of years, as Broomstick leads humanity into financial virgin territory, everything looks wonderful. The markets are soaring, the money is flooding in effortlessly, and everyone is happy. Then comes a crash bigger even than 1929 or 2008. But no human being – either president, banker or citizen – knows what caused it and what could be done about it
  • AI, too, is a global problem. Accordingly, to understand the new computer politics, it is not enough to examine how discrete societies might react to AI. We also need to consider how AI might change relations between societies on a global level.
  • As long as humanity stands united, we can build institutions that will regulate AI, whether in the field of finance or war. Unfortunately, humanity has never been united. We have always been plagued by bad actors, as well as by disagreements between good actors. The rise of AI poses an existential danger to humankind, not because of the malevolence of computers, but because of our own shortcomings.
  • Terrorists might use AI to instigate a global pandemic. The terrorists themselves may have little knowledge of epidemiology, but the AI could synthesise for them a new pathogen, order it from commercial laboratories or print it in biological 3D printers, and devise the best strategy to spread it around the world, via airports or food supply chains
  • desperate governments request help from the only entity capable of understanding what is happening – Broomstick. The AI makes several policy recommendations, far more audacious than quantitative easing – and far more opaque, too. Broomstick promises that these policies will save the day, but human politicians – unable to understand the logic behind Broomstick’s recommendations – fear they might completely unravel the financial and even social fabric of the world. Should they listen to the AI?
  • Human civilisation could also be devastated by weapons of social mass destruction, such as stories that undermine our social bonds. An AI developed in one country could be used to unleash a deluge of fake news, fake money and fake humans so that people in numerous other countries lose the ability to trust anything or anyone.
  • Many societies – both democracies and dictatorships – may act responsibly to regulate such usages of AI, clamp down on bad actors and restrain the dangerous ambitions of their own rulers and fanatics. But if even a handful of societies fail to do so, this could be enough to endanger the whole of humankind
  • Thus, a paranoid dictator might hand unlimited power to a fallible AI, including even the power to launch nuclear strikes. If the AI then makes an error, or begins to pursue an unexpected goal, the result could be catastrophic, and not just for that country
  • Imagine a situation – in 20 years, say – when somebody in Beijing or San Francisco possesses the entire personal history of every politician, journalist, colonel and CEO in your country: every text they ever sent, every web search they ever made, every illness they suffered, every sexual encounter they enjoyed, every joke they told, every bribe they took. Would you still be living in an independent country, or would you now be living in a data colony?
  • What happens when your country finds itself utterly dependent on digital infrastructures and AI-powered systems over which it has no effective control?
  • In the economic realm, previous empires were based on material resources such as land, cotton and oil. This placed a limit on the empire’s ability to concentrate both economic wealth and political power in one place. Physics and geology don’t allow all the world’s land, cotton or oil to be moved to one country
  • It is different with the new information empires. Data can move at the speed of light, and algorithms don’t take up much space. Consequently, the world’s algorithmic power can be concentrated in a single hub. Engineers in a single country might write the code and control the keys for all the crucial algorithms that run the entire world.
  • AI and automation therefore pose a particular challenge to poorer developing countries. In an AI-driven global economy, the digital leaders claim the bulk of the gains and could use their wealth to retrain their workforce and profit even more
  • Meanwhile, the value of unskilled labourers in left-behind countries will decline, causing them to fall even further behind. The result might be lots of new jobs and immense wealth in San Francisco and Shanghai, while many other parts of the world face economic ruin.
  • AI is expected to add $15.7tn (£12.3tn) to the global economy by 2030. But if current trends continue, it is projected that China and North America – the two leading AI superpowers – will together take home 70% of that money.
  • During the cold war, the iron curtain was in many places literally made of metal: barbed wire separated one country from another. Now the world is increasingly divided by the silicon curtain. The code on your smartphone determines on which side of the silicon curtain you live, which algorithms run your life, who controls your attention and where your data flows.
  • Cyberweapons can bring down a country’s electric grid, but they can also be used to destroy a secret research facility, jam an enemy sensor, inflame a political scandal, manipulate elections or hack a single smartphone. And they can do all that stealthily. They don’t announce their presence with a mushroom cloud and a storm of fire, nor do they leave a visible trail from launchpad to target
  • The two digital spheres may therefore drift further and further apart. For centuries, new information technologies fuelled the process of globalisation and brought people all over the world into closer contact. Paradoxically, information technology today is so powerful it can potentially split humanity by enclosing different people in separate information cocoons, ending the idea of a single shared human reality
  • For decades, the world’s master metaphor was the web. The master metaphor of the coming decades might be the cocoon.
  • Other countries or blocs, such as the EU, India, Brazil and Russia, may try to create their own digital cocoons,
  • Instead of being divided between two global empires, the world might be divided among a dozen empires.
  • The more the new empires compete against one another, the greater the danger of armed conflict.
  • The cold war between the US and the USSR never escalated into a direct military confrontation, largely thanks to the doctrine of mutually assured destruction. But the danger of escalation in the age of AI is bigger, because cyber warfare is inherently different from nuclear warfare.
  • US companies are now forbidden to export such chips to China. While in the short term this hampers China in the AI race, in the long term it pushes China to develop a completely separate digital sphere that will be distinct from the American digital sphere even in its smallest building blocks.
  • The temptation to start a limited cyberwar is therefore big, and so is the temptation to escalate it.
  • A second crucial difference concerns predictability. The cold war was like a hyper-rational chess game, and the certainty of destruction in the event of nuclear conflict was so great that the desire to start a war was correspondingly small
  • Cyberwarfare lacks this certainty. Nobody knows for sure where each side has planted its logic bombs, Trojan horses and malware. Nobody can be certain whether their own weapons would actually work when called upon
  • Such uncertainty undermines the doctrine of mutually assured destruction. One side might convince itself – rightly or wrongly – that it can launch a successful first strike and avoid massive retaliation
  • Even if humanity avoids the worst-case scenario of global war, the rise of new digital empires could still endanger the freedom and prosperity of billions of people. The industrial empires of the 19th and 20th centuries exploited and repressed their colonies, and it would be foolhardy to expect new digital empires to behave much better
  • Moreover, if the world is divided into rival empires, humanity is unlikely to cooperate to overcome the ecological crisis or to regulate AI and other disruptive technologies such as bioengineering.
  • The division of the world into rival digital empires dovetails with the political vision of many leaders who believe that the world is a jungle, that the relative peace of recent decades has been an illusion, and that the only real choice is whether to play the part of predator or prey.
  • Given such a choice, most leaders would prefer to go down in history as predators and add their names to the grim list of conquerors that unfortunate pupils are condemned to memorise for their history exams.
  • These leaders should be reminded, however, that there is a new alpha predator in the jungle. If humanity doesn’t find a way to cooperate and protect our shared interests, we will all be easy prey to AI.
Javier E

AI scientist Ray Kurzweil: 'We are going to expand intelligence a millionfold by 2045' ...

  • American computer scientist and techno-optimist Ray Kurzweil is a long-serving authority on artificial intelligence (AI). His bestselling 2005 book, The Singularity Is Near, sparked imaginations with sci-fi-like predictions that computers would reach human-level intelligence by 2029 and that we would merge with computers and become superhuman around 2045, which he called “the Singularity”. Now, nearly 20 years on, Kurzweil, 76, has a sequel, The Singularity Is Nearer, and his predictions no longer seem so wacky.
  • Your 2029 and 2045 projections haven’t changed…I have stayed consistent. So 2029, both for human-level intelligence and for artificial general intelligence (AGI) – which is a little bit different. Human-level intelligence generally means AI that has reached the ability of the most skilled humans in a particular domain and by 2029 that will be achieved in most respects. (There may be a few years of transition beyond 2029 where AI has not surpassed the top humans in a few key skills like writing Oscar-winning screenplays or generating deep new philosophical insights, though it will.) AGI means AI that can do everything that any human can do, but to a superior level. AGI sounds more difficult, but it’s coming at the same time.
  • Why write this book? The Singularity Is Near talked about the future, but 20 years ago, when people didn’t know what AI was. It was clear to me what would happen, but it wasn’t clear to everybody. Now AI is dominating the conversation. It is time to take a look again both at the progress we’ve made – large language models (LLMs) are quite delightful to use – and the coming breakthroughs.
  • It is hard to imagine what this would be like, but it doesn’t sound very appealing… Think of it like having your phone, but in your brain. If you ask a question your brain will be able to go out to the cloud for an answer similar to the way you do on your phone now – only it will be instant, there won’t be any input or output issues, and you won’t realise it has been done (the answer will just appear). People do say “I don’t want that”: they thought they didn’t want phones either!
  • The most important driver is the exponential growth in the amount of computing power for the price in constant dollars. We are doubling price-performance every 15 months. LLMs just began to work two years ago because of the increase in computation.
  • What’s missing currently to bring AI to where you are predicting it will be in 2029? One is more computing power – and that’s coming. That will enable improvements in contextual memory, common sense reasoning and social interaction, which are all areas where deficiencies remain
  • LLM hallucinations [where they create nonsensical or inaccurate outputs] will become much less of a problem, certainly by 2029 – they already happen much less than they did two years ago. The issue occurs because they don’t have the answer, and they don’t know that. They look for the best thing, which might be wrong or not appropriate. As AI gets smarter, it will be able to understand its own knowledge more precisely and accurately report to humans when it doesn’t know.
  • What exactly is the Singularity? Today, we have one brain size which we can’t go beyond to get smarter. But the cloud is getting smarter and it is growing really without bounds. The Singularity, which is a metaphor borrowed from physics, will occur when we merge our brain with the cloud. We’re going to be a combination of our natural intelligence and our cybernetic intelligence and it’s all going to be rolled into one. Making it possible will be brain-computer interfaces which ultimately will be nanobots – robots the size of molecules – that will go noninvasively into our brains through the capillaries. We are going to expand intelligence a millionfold by 2045 and it is going to deepen our awareness and consciousness.
  • Why should we believe your dates? I’m really the only person that predicted the tremendous AI interest that we’re seeing today. In 1999 people thought that would take a century or more. I said 30 years and look what we have.
  • I have a chapter on perils. I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]
  • All the major companies are putting more effort into making sure their systems are safe and align with human values than they are into creating new advances, which is positive.
  • Not everyone is likely to be able to afford the technology of the future you envisage. Does technological inequality worry you? Being wealthy allows you to afford these technologies at an early point, but also one where they don’t work very well. When [mobile] phones were new they were very expensive and also did a terrible job. They had access to very little information and didn’t talk to the cloud. Now they are very affordable and extremely useful. About three quarters of people in the world have one. So it’s going to be the same thing here: this issue goes away over time.
  • The book looks in detail at AI’s job-killing potential. Should we be worried? Yes, and no. Certain types of jobs will be automated and people will be affected. But new capabilities also create new jobs. A job like “social media influencer” didn’t make sense, even 10 years ago. Today we have more jobs than we’ve ever had, and US average personal income per hour worked is 10 times what it was 100 years ago, adjusted to today’s dollars. Universal basic income will start in the 2030s, which will help cushion the harms of job disruptions. It won’t be adequate at that point but over time it will become so.
  • Everything is progressing exponentially: not only computing power but our understanding of biology and our ability to engineer at far smaller scales. In the early 2030s we can expect to reach longevity escape velocity where every year of life we lose through ageing we get back from scientific progress. And as we move past that we’ll actually get back more years.
  • What is your own plan for immortality? My first plan is to stay alive, therefore reaching longevity escape velocity. I take about 80 pills a day to help keep me healthy. Cryogenic freezing is the fallback. I’m also intending to create a replicant of myself [an afterlife AI avatar], which is an option I think we’ll all have in the late 2020s
  • I did something like that with my father, collecting everything that he had written in his life, and it was a little bit like talking to him. [My replicant] will be able to draw on more material and so represent my personality more faithfully.
  • What should we be doing now to best prepare for the future? It is not going to be us versus AI: AI is going inside ourselves. It will allow us to create new things that weren’t feasible before. It’ll be a pretty fantastic future.
Javier E

The Rise and Fall of BNN Breaking, an AI-Generated News Outlet - The New York Times

  • His is just one of many complaints against BNN, a site based in Hong Kong that published numerous falsehoods during its short time online as a result of what appeared to be generative A.I. errors.
  • During the two years that BNN was active, it had the veneer of a legitimate news service, claiming a worldwide roster of “seasoned” journalists and 10 million monthly visitors, surpassing The Chicago Tribune’s self-reported audience. Prominent news organizations like The Washington Post, Politico and The Guardian linked to BNN’s stories
  • Google News often surfaced them, too
  • A closer look, however, would have revealed that individual journalists at BNN published lengthy stories as often as multiple times a minute, writing in generic prose familiar to anyone who has tinkered with the A.I. chatbot ChatGPT.
  • How easily the site and its mistakes entered the ecosystem for legitimate news highlights a growing concern: A.I.-generated content is upending, and often poisoning, the online information supply.
  • The websites, which seem to operate with little to no human supervision, often have generic names — such as iBusiness Day and Ireland Top News — that are modeled after actual news outlets. They crank out material in more than a dozen languages, much of which is not clearly disclosed as being artificially generated, but could easily be mistaken as being created by human writers.
  • Now, experts say, A.I. could turbocharge the threat, easily ripping off the work of journalists and enabling error-ridden counterfeits to circulate even more widely — as has already happened with travel guidebooks, celebrity biographies and obituaries.
  • The result is a machine-powered ouroboros that could squeeze out sustainable, trustworthy journalism. Even though A.I.-generated stories are often poorly constructed, they can still outrank their source material on search engines and social platforms, which often use A.I. to help position content. The artificially elevated stories can then divert advertising spending, which is increasingly assigned by automated auctions without human oversight.
  • NewsGuard, a company that monitors online misinformation, identified more than 800 websites that use A.I. to produce unreliable news content.
  • Low-paid freelancers and algorithms have churned out much of the faux-news content, prizing speed and volume over accuracy.
  • Former employees said they thought they were joining a legitimate news operation; one had mistaken it for BNN Bloomberg, a Canadian business news channel. BNN’s website insisted that “accuracy is nonnegotiable” and that “every piece of information underwent rigorous checks, ensuring our news remains an undeniable source of truth.”
  • this was not a traditional journalism outlet. While the journalists could occasionally report and write original articles, they were asked to primarily use a generative A.I. tool to compose stories, said Ms. Chakraborty and Hemin Bakir, a journalist based in Iraq who worked for BNN for almost a year. They said they had uploaded articles from other news outlets to the generative A.I. tool to create paraphrased versions for BNN to publish.
  • Mr. Chahal’s evangelism carried weight with his employees because of his wealth and seemingly impressive track record, they said. Born in India and raised in Northern California, Mr. Chahal made millions in the online advertising business in the early 2000s and wrote a how-to book about his rags-to-riches story that landed him an interview with Oprah Winfrey.
  • Mr. Chahal told Mr. Bakir to focus on checking stories that had a significant number of readers, such as those republished by MSN.com. Employees did not want their bylines on stories generated purely by A.I., but Mr. Chahal insisted on this. Soon, the tool randomly assigned their names to stories.
  • This crossed a line for some BNN employees, according to screenshots of WhatsApp conversations reviewed by The Times, in which they told Mr. Chahal that they were receiving complaints about stories they didn’t realize had been published under their names.
  • According to three journalists who worked at BNN and screenshots of WhatsApp conversations reviewed by The Times, Mr. Chahal regularly directed profanities at employees and called them idiots and morons. When employees said purely A.I.-generated news, such as the Fanning story, should be published under the generic “BNN Newsroom” byline, Mr. Chahal was dismissive. “When I do this, I won’t have a need for any of you,” he wrote on WhatsApp. Mr. Bakir replied to Mr. Chahal that assigning journalists’ bylines to A.I.-generated stories was putting their integrity and careers in “jeopardy.”
  • This was a strategy that Mr. Chahal favored, according to former BNN employees. He used his news service to exercise grudges, publishing slanted stories about a politician from San Francisco he disliked, Wikipedia after it published a negative entry about BNN Breaking and Elon Musk after accounts belonging to Mr. Chahal, his wife and his companies were suspended o
  • The increasing popularity of programmatic advertising — which uses algorithms to automatically place ads across the internet — allows A.I.-powered news sites to generate revenue by mass-producing low-quality clickbait content. (A minimal sketch of such an auction follows this list.)
  • Experts are nervous about how A.I.-fueled news could overwhelm accurate reporting with a deluge of junk content distorted by machine-powered repetition. A particular worry is that A.I. aggregators could chip away even further at the viability of local journalism, siphoning away its revenue and damaging its credibility by contaminating the information ecosystem.
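To make the programmatic-auction point above concrete, here is a minimal sketch, in Python, of the second-price auction that underlies much automated ad placement. The domain names, bid values and the run_auction helper are illustrative assumptions, not any exchange’s actual code.

```python
def run_auction(bids):
    """Return (winner, price paid) for one ad impression under second-price rules."""
    # Rank bidders from highest to lowest bid.
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # The winner pays the runner-up's bid (or their own, if unopposed).
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Hypothetical bids for a single impression; no human reviews the outcome.
print(run_auction({"local-paper.com": 1.20, "ai-clickbait.example": 1.35}))
# -> ('ai-clickbait.example', 1.2)
```

The point of the sketch is the absence of editorial judgment: whichever site the algorithm expects to perform best wins the placement, regardless of how its content was produced.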
abbykleman

As Artificial Intelligence Evolves, So Does Its Criminal Potential - 0 views

  • Imagine receiving a phone call from your aging mother seeking your help because she has forgotten her banking password. Except it's not your mother. The voice on the other end of the phone call just sounds deceptively like her.
Javier E

Welcome, Robot Overlords. Please Don't Fire Us? | Mother Jones - 0 views

  • There will be no place to go but the unemployment line.
  • Slowly but steadily, labor's share of total national income has gone down, while the share going to capital owners has gone up. The most obvious effect of this is the skyrocketing wealth of the top 1 percent, due mostly to huge increases in capital gains and investment income.
  • at this point our tale takes a darker turn. What do we do over the next few decades as robots become steadily more capable and steadily begin taking away all our jobs?
  • ...34 more annotations...
  • The economics community just hasn't spent much time over the past couple of decades focusing on the effect that machine intelligence is likely to have on the labor market.
  • The Digital Revolution is different because computers can perform cognitive tasks too, and that means machines will eventually be able to run themselves. When that happens, they won't just put individuals out of work temporarily. Entire classes of workers will be out of work permanently. In other words, the Luddites weren't wrong. They were just 200 years too early
  • while it's easy to believe that some jobs can never be done by machines—do the elderly really want to be tended by robots?—that may not be true.
  • Robotic pets are growing so popular that Sherry Turkle, an MIT professor who studies the way we interact with technology, is uneasy about it: "The idea of some kind of artificial companionship," she says, "is already becoming the new normal."
  • robots will take over more and more jobs. And guess who will own all these robots? People with money, of course. As this happens, capital will become ever more powerful and labor will become ever more worthless. Those without money—most of us—will live on whatever crumbs the owners of capital allow us.
  • Economist Paul Krugman recently remarked that our long-standing belief in skills and education as the keys to financial success may well be outdated. In a blog post titled "Rise of the Robots," he reviewed some recent economic data and predicted that we're entering an era where the prime cause of income inequality will be something else entirely: capital vs. labor.
  • We're already seeing them, and not just because of the crash of 2008. They started showing up in the statistics more than a decade ago. For a while, though, they were masked by the dot-com and housing bubbles, so when the financial crisis hit, years' worth of decline was compressed into 24 months. The trend lines dropped off the cliff.
  • In the economics literature, the increase in the share of income going to capital owners is known as capital-biased technological change
  • The question we want to answer is simple: If CBTC is already happening—not a lot, but just a little bit—what trends would we expect to see? What are the signs of a computer-driven economy?
  • if automation were displacing labor, we'd expect to see a steady decline in the share of the population that's employed.
  • Second, we'd expect to see fewer job openings than in the past.
  • Third, as more people compete for fewer jobs, we'd expect to see middle-class incomes flatten in a race to the bottom.
  • Fourth, with consumption stagnant, we'd expect to see corporations stockpile more cash and, fearing weaker sales, invest less in new products and new factories
  • Fifth, as a result of all this, we'd expect to see labor's share of national income decline and capital's share rise.
  • The modern economy is complex, and most of these trends have multiple causes.
  • in another sense, we should be very alarmed. It's one thing to suggest that robots are going to cause mass unemployment starting in 2030 or so. We'd have some time to come to grips with that. But the evidence suggests that—slowly, haltingly—it's happening already, and we're simply not prepared for it.
  • the first jobs to go will be middle-skill jobs. Despite impressive advances, robots still don't have the dexterity to perform many common kinds of manual labor that are simple for humans—digging ditches, changing bedpans. Nor are they any good at jobs that require a lot of cognitive skill—teaching classes, writing magazine articles
  • in the middle you have jobs that are both fairly routine and require no manual dexterity. So that may be where the hollowing out starts: with desk jobs in places like accounting or customer support.
  • In fact, there's even a digital sports writer. It's true that a human being wrote this story—ask my mother if you're not sure—but in a decade or two I might be out of a job too
  • Doctors should probably be worried as well. Remember Watson, the Jeopardy!-playing computer? It's now being fed millions of pages of medical information so that it can help physicians do a better job of diagnosing diseases. In another decade, there's a good chance that Watson will be able to do this without any human help at all.
  • Take driverless cars.
  • Most likely, owners of capital would strongly resist higher taxes, as they always have, while workers would be unhappy with their enforced idleness. Still, the ancient Romans managed to get used to it—with slave labor playing the role of robots—and we might have to, as well.
  • we'll need to let go of some familiar convictions. Left-leaning observers may continue to think that stagnating incomes can be improved with better education and equality of opportunity. Conservatives will continue to insist that people without jobs are lazy bums who shouldn't be coddled. They'll both be wrong.
  • Corporate executives should worry too. For a while, everything will seem great for them: Falling labor costs will produce heftier profits and bigger bonuses. But then it will all come crashing down. After all, robots might be able to produce goods and services, but they can't consume them
  • we'll probably have only a few options open to us. The simplest, because it's relatively familiar, is to tax capital at high rates and use the money to support displaced workers. In other words, as The Economist's Ryan Avent puts it, "redistribution, and a lot of it."
  • would we be happy in a society that offers real work to a dwindling few and bread and circuses for the rest?
  • The next step might be passenger vehicles on fixed routes, like airport shuttles. Then long-haul trucks. Then buses and taxis. There are 2.5 million workers who drive trucks, buses, and taxis for a living, and there's a good chance that, one by one, all of them will be displaced
  •  economist Noah Smith suggests that we might have to fundamentally change the way we think about how we share economic growth. Right now, he points out, everyone is born with an endowment of labor by virtue of having a body and a brain that can be traded for income. But what to do when that endowment is worth a fraction of what it is today? Smith's suggestion: "Why not also an endowment of capital? What if, when each citizen turns 18, the government bought him or her a diversified portfolio of equity?"
  • In simple terms, if owners of capital are capturing an increasing fraction of national income, then that capital needs to be shared more widely if we want to maintain a middle-class society.
  • it's time to start thinking about our automated future in earnest. The history of mass economic displacement isn't encouraging—fascists in the '20s, Nazis in the '30s—and recent high levels of unemployment in Greece and Italy have already produced rioting in the streets and larger followings for right-wing populist parties. And that's after only a few years of misery.
  • When the robot revolution finally starts to happen, it's going to happen fast, and it's going to turn our world upside down. It's easy to joke about our future robot overlords—R2-D2 or the Terminator?—but the challenge that machine intelligence presents really isn't science fiction anymore. Like Lake Michigan with an inch of water in it, it's happening around us right now even if it's hard to see
  • A robotic paradise of leisure and contemplation eventually awaits us, but we have a long and dimly lit tunnel to navigate before we get there.
Javier E

The Evidence Supports Artificial Sweeteners Over Sugar - The New York Times - 0 views

  • what about sugar? We should acknowledge that when I, and many others, address sugar in contexts like these, we are talking about added sugars, not the naturally occurring sugars or carbohydrates you find in things like fruit. Those are, for the most part, not the problem. Added sugars are
  • The Centers for Disease Control and Prevention reports that children are consuming between 282 calories (for girls) and 362 calories (for boys) of added sugars per day on average. This means that more than 15 percent of their dietary caloric intake is from added sugars. (A quick arithmetic check follows this list.)
  • The increased risk of death began once a person consumed the equivalent of one 20-ounce Mountain Dew in a 2,000-calorie diet, and reached more than a fourfold increase if people consumed more than one-third of their diet in added sugars.
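A quick arithmetic check of the CDC figures quoted above, assuming a reference 2,000-calorie diet (an assumption for illustration; children’s actual intakes vary by age and sex):

```python
# Assumes a 2,000-kcal reference diet; actual intakes differ by age and sex.
for kcal in (282, 362):
    print(f"{kcal} kcal of added sugar is {kcal / 2000:.0%} of a 2,000-kcal diet")
```

That gives roughly 14 percent for girls and 18 percent for boys, consistent with the “more than 15 percent” average cited.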
Javier E

Will You Lose Your Job to a Robot? Silicon Valley Is Split - NYTimes.com - 0 views

  • The question for Silicon Valley is whether we’re heading toward a robot-led coup or a leisure-filled utopia.
  • Interviews with 2,551 people who make, research and analyze new technology. Most agreed that robotics and artificial intelligence would transform daily life by 2025, but respondents were almost evenly split about what that might mean for the economy and employment.
  • techno-optimists. They believe that even though machines will displace many jobs in a decade, technology and human ingenuity will produce many more, as happened after the agricultural and industrial revolutions. The meaning of “job” might change, too, if people find themselves with hours of free time because the mundane tasks that fill our days are automated.
  • ...8 more annotations...
  • The other half agree that some jobs will disappear, but they are not convinced that new ones will take their place, even for some highly skilled workers. They fear a future of widespread unemployment, deep inequality and violent uprisings — particularly if policy makers and educational institutions don’t step in.
  • We’re going to have to come to grips with a long-term employment crisis and the fact that — strictly from an economic point of view, not a moral point of view — there are more and more ‘surplus humans.'  ”
  • “The degree of integration of A.I. into daily life will depend very much, as it does now, on wealth. The people whose personal digital devices are day-trading for them, and doing the grocery shopping and sending greeting cards on their behalf, are people who are living a different life than those who are worried about missing a day at one of their three jobs due to being sick, and losing the job and being unable to feed their children.”
  • “Only the best-educated humans will compete with machines. And education systems in the U.S. and much of the rest of the world are still sitting students in rows and columns, teaching them to keep quiet and memorize what is told to them, preparing them for life in a 20th century factory.”
  • “We hardly dwell on the fact that someone trying to pick a career path that is not likely to be automated will have a very hard time making that choice. X-ray technician? Outsourced already, and automation in progress. The race between automation and human work is won by automation.”
  • “Robotic sex partners will be commonplace. … The central question of 2025 will be: What are people for in a world that does not need their labor, and where only a minority are needed to guide the ‘bot-based economy?'  ”
  • “Employment will be mostly very skilled labor — and even those jobs will be continuously whittled away by increasingly sophisticated machines. Live, human salespeople, nurses, doctors, actors will be symbols of luxury, the silk of human interaction as opposed to the polyester of simulated human contact.”
  • The biggest exception will be jobs that depend upon empathy as a core capacity — schoolteacher, personal service worker, nurse. These jobs are often those traditionally performed by women. One of the bigger social questions of the mid-late 2020s will be the role of men in this world.”
Javier E

Drones Beaming Web Access Are in the Stars for Facebook - NYTimes.com - 0 views

  • in a high-stakes competition for domination of the Internet, in which Google wields high-altitude balloons and high-speed fiber networks and Amazon has experimental delivery drones and colossal data centers, Facebook is under pressure to show that it, too, can pursue projects that are more speculative than product.
  • “The Amazons, Googles and Facebooks are exploring completely new things that will change the way we live,
  • Facebook’s drone team, which came to the company through the acquisition last year of the drone maker Ascenta, say they believe their solar-powered craft can eventually be aloft up to three months at a time, beaming high-speed data from 60,000 to 90,000 feet to some of the world’s remotest regions via laser. Test flights are to begin this summer, though full commercial deployment may take years
  • ...4 more annotations...
  • “We want to serve every person in the world” with high-speed Internet signals, said Yael Maguire, head of Facebook’s Connectivity Lab. The dream — assuming regulators around the planet go along with it — is a fleet as big as 1,000 drones connecting people to the Internet. And where it is too remote even for the drones, satellites would do the trick.
  • Facebook’s effort in artificial intelligence is called deep learning, for the number of levels at which it critically analyzes information. By figuring out context, Facebook better knows why people anywhere are looking at something, and what else it can do to keep them engaged. (A toy sketch of such stacked levels follows this list.)
  • For the long term, Mr. Zuckerberg hopes Facebook’s A.I. will translate languages on the fly, know strangers you might meet and, of course, bring you the highest-value ads
  • Because, in the end, it’s still about getting you to look at more ads.“The fundamental thing about advertising is people paying to get a message in front of you,” Mr. Schroepfer said. “That won’t go away in my life, though the form may change.”
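As a toy illustration of what “number of levels” means in deep learning, the sketch below passes made-up features through three stacked layers. The random, untrained weights are an assumption for illustration; this resembles no production system at Facebook or anywhere else.

```python
import numpy as np

def layer(x, w):
    # One "level" of analysis: a linear map followed by a ReLU nonlinearity.
    return np.maximum(0.0, x @ w)

rng = np.random.default_rng(1)
x = rng.normal(size=(1, 8))                     # stand-in features for one item
for w in [rng.normal(size=(8, 8)) for _ in range(3)]:
    x = layer(x, w)                             # "deep" = several levels stacked
print(x.round(2))                               # the final, transformed representation
```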
drewmangan1

Hillary Clinton says early lead was 'artificial' - CNNPolitics.com - 0 views

  • "That is really artificial, all of those early soundings and polls," Clinton said. "Once you get into it, this is a Democratic election for our nominee and it gets really close, exciting. And it really depends upon on who can make the best case that you can be the nominee to beat whoever the Republicans put up and try to get your folks who support you to come out."
Javier E

Opinion | The Deadly Soul of a New Machine - The New York Times - 0 views

  • it’s not too much of a reach to see Flight 610 as representative of the hinge in history we’ve arrived at — with the bots, the artificial intelligence and the social media algorithms now shaping the fate of humanity at a startling pace.
  • Like the correction system in the 737, these inventions are designed to make life easier and safer — or at least more profitable for the owners.
  • The C.E.O. of Microsoft, Satya Nadella, hit a similar cautionary note at the company’s recent annual shareholder meeting. Big Tech, he said, should be asking “not what computers can do, but what they should do.”
  • ...3 more annotations...
  • The overall idea is to outsource certain human functions, the drudgery and things prone to faulty judgment, while retaining master control. The question is: At what point is control lost and the creations take over? How about now?
  • It’s the “can do” part that should scare you. Facebook, once all puppies, baby pictures and high school reunion updates, is a monster of misinformation.
  • As haunting as those final moments inside the cockpit of Flight 610 were, it’s equally haunting to grasp the full meaning of what happened: The system overrode the humans and killed everyone. Our invention. Our folly.
Javier E

Tech C.E.O.s Are in Love With Their Principal Doomsayer - The New York Times - 0 views

  • The futurist philosopher Yuval Noah Harari worries about a lot.
  • He worries that Silicon Valley is undermining democracy and ushering in a dystopian hellscape in which voting is obsolete.
  • He worries that by creating powerful influence machines to control billions of minds, the big tech companies are destroying the idea of a sovereign individual with free will.
  • ...27 more annotations...
  • He worries that because the technological revolution’s work requires so few laborers, Silicon Valley is creating a tiny ruling class and a teeming, furious “useless class.”
  • If this is his harrowing warning, then why do Silicon Valley C.E.O.s love him so?
  • When Mr. Harari toured the Bay Area this fall to promote his latest book, the reception was incongruously joyful. Reed Hastings, the chief executive of Netflix, threw him a dinner party. The leaders of X, Alphabet’s secretive research division, invited Mr. Harari over. Bill Gates reviewed the book (“Fascinating” and “such a stimulating writer”) in The New York Times.
  • it’s insane he’s so popular, they’re all inviting him to campus — yet what Yuval is saying undermines the premise of the advertising- and engagement-based model of their products,
  • Part of the reason might be that Silicon Valley, at a certain level, is not optimistic on the future of democracy. The more of a mess Washington becomes, the more interested the tech world is in creating something else
  • he brought up Aldous Huxley. Generations have been horrified by his novel “Brave New World,” which depicts a regime of emotion control and painless consumption. Readers who encounter the book today, Mr. Harari said, often think it sounds great. “Everything is so nice, and in that way it is an intellectually disturbing book because you’re really hard-pressed to explain what’s wrong with it,” he said. “And you do get today a vision coming out of some people in Silicon Valley which goes in that direction.”
  • The story of his current fame begins in 2011, when he published a book of notable ambition: to survey the whole of human existence. “Sapiens: A Brief History of Humankind,” first released in Hebrew, did not break new ground in terms of historical research. Nor did its premise — that humans are animals and our dominance is an accident — seem a likely commercial hit. But the casual tone and smooth way Mr. Harari tied together existing knowledge across fields made it a deeply pleasing read, even as the tome ended on the notion that the process of human evolution might be over.
  • He followed up with “Homo Deus: A Brief History of Tomorrow,” which outlined his vision of what comes after human evolution. In it, he describes Dataism, a new faith based around the power of algorithms. Mr. Harari’s future is one in which big data is worshiped, artificial intelligence surpasses human intelligence, and some humans develop Godlike abilities.
  • Now, he has written a book about the present and how it could lead to that future: “21 Lessons for the 21st Century.” It is meant to be read as a series of warnings. His recent TED Talk was called “Why fascism is so tempting — and how your data could power it.”
  • At the Alphabet talk, Mr. Harari had been accompanied by his publisher. They said that the younger employees had expressed concern about whether their work was contributing to a less free society, while the executives generally thought their impact was positive
  • Some workers had tried to predict how well humans would adapt to large technological change based on how they have responded to small shifts, like a new version of Gmail. Mr. Harari told them to think more starkly: If there isn’t a major policy intervention, most humans probably will not adapt at all.
  • It made him sad, he told me, to see people build things that destroy their own societies, but he works every day to maintain an academic distance and remind himself that humans are just animals. “Part of it is really coming from seeing humans as apes, that this is how they behave,” he said, adding, “They’re chimpanzees. They’re sapiens. This is what they do.”
  • this summer, Mark Zuckerberg, who has recommended Mr. Harari to his book club, acknowledged a fixation with the autocrat Caesar Augustus. “Basically,” Mr. Zuckerberg told The New Yorker, “through a really harsh approach, he established 200 years of world peace.”
  • He said he had resigned himself to tech executives’ global reign, pointing out how much worse the politicians are. “I’ve met a number of these high-tech giants, and generally they’re good people,” he said. “They’re not Attila the Hun. In the lottery of human leaders, you could get far worse.”
  • Some of his tech fans, he thinks, come to him out of anxiety. “Some may be very frightened of the impact of what they are doing,” Mr. Harari said
  • as he spoke about meditation — Mr. Harari spends two hours each day and two months each year in silence — he became commanding. In a region where self-optimization is paramount and meditation is a competitive sport, Mr. Harari’s devotion confers hero status.
  • He told the audience that free will is an illusion, and that human rights are just a story we tell ourselves. Political parties, he said, might not make sense anymore. He went on to argue that the liberal world order has relied on fictions like “the customer is always right” and “follow your heart,” and that these ideas no longer work in the age of artificial intelligence, when hearts can be manipulated at scale.
  • Everyone in Silicon Valley is focused on building the future, Mr. Harari continued, while most of the world’s people are not even needed enough to be exploited. “Now you increasingly feel that there are all these elites that just don’t need me,” he said. “And it’s much worse to be irrelevant than to be exploited.”
  • The useless class he describes is uniquely vulnerable. “If a century ago you mounted a revolution against exploitation, you knew that when bad comes to worse, they can’t shoot all of us because they need us,” he said, citing army service and factory work.
  • Now it is becoming less clear why the ruling elite would not just kill the new useless class. “You’re totally expendable,” he told the audience.
  • This, Mr. Harari told me later, is why Silicon Valley is so excited about the concept of universal basic income, or stipends paid to people regardless of whether they work. The message is: “We don’t need you. But we are nice, so we’ll take care of you.”
  • On Sept. 14, he published an essay in The Guardian assailing another old trope — that “the voter knows best.”
  • “If humans are hackable animals, and if our choices and opinions don’t reflect our free will, what should the point of politics be?” he wrote. “How do you live when you realize … that your heart might be a government agent, that your amygdala might be working for Putin, and that the next thought that emerges in your mind might well be the result of some algorithm that knows you better than you know yourself? These are the most interesting questions humanity now faces.”
  • Today, they have a team of eight based in Tel Aviv working on Mr. Harari’s projects. The director Ridley Scott and documentarian Asif Kapadia are adapting “Sapiens” into a TV show, and Mr. Harari is working on children’s books to reach a broader audience.
  • Being gay, Mr. Harari said, has helped his work — it set him apart to study culture more clearly because it made him question the dominant stories of his own conservative Jewish society. “If society got this thing wrong, who guarantees it didn’t get everything else wrong as well?” he said
  • “If I was a superhuman, my superpower would be detachment,” Mr. Harari added. “O.K., so maybe humankind is going to disappear — O.K., let’s just observe.”
  • They just finished “Dear White People,” and they loved the Australian series “Please Like Me.” That night, they had plans to either meet Facebook executives at company headquarters or watch the YouTube show “Cobra Kai.”
Javier E

The great artificial intelligence duopoly - The Washington Post - 0 views

  • The AI revolution will have two engines — China and the United States — pushing its progress swiftly forward. It is unlike any previous technological revolution that emerged from a singular cultural setting. Having two engines will further accelerate the pace of technology.
  • WorldPost: In your book, you talk about the “data gap” between these two engines. What do you mean by that? Lee: Data is the raw material on which AI runs. It is like the role of oil in powering an industrial economy. As an AI algorithm is fed more examples of the phenomenon you want the algorithm to understand, it gains greater and greater accuracy. The more faces you show a facial recognition algorithm, the fewer mistakes it will make in recognizing your face. (A minimal learning-curve sketch follows this list.)
  • All data is not the same, however. China and the United States have different strengths when it comes to data. The gap emerges when you consider the breadth, quality and depth of the data. Breadth means the number of users, the population whose actions are captured in data. Quality means how well-structured and well-labeled the data is. Depth means how many different data points are generated about the activities of each user.
  • ...15 more annotations...
  • Chinese and American companies are on relatively even footing when it comes to breadth. Though American Internet companies have a smaller domestic user base than China, which has over a billion users on 4G devices, the best American companies can also draw in users from around the globe, bringing their total user base to over a billion.
  • when it comes to depth of data, China has the upper hand. Chinese Internet users channel a much larger portion of their daily activities, transactions and interactions through their smartphones. They use their smartphones for managing their daily lives, from buying groceries at the market to paying their utility bills, booking train or bus tickets and to take out loans, among other things.
  • Weaving together data from mobile payments, public services, financial management and shared mobility gives Chinese companies a deep and more multi-dimensional picture of their users. That allows their AI algorithms to precisely tailor product offerings to each individual. In the current age of AI implementation, this will likely lead to a substantial acceleration and deepening of AI’s impact across China’s economy. That is where the “data gap” appears
  • The radically different business model in China, married to Chinese user habits, creates indigenous branding and monetization strategies as well as an entirely alternative infrastructure for apps and content. It is therefore very difficult, if not impossible, for any American company to try to enter China’s market or vice versa
  • companies in both countries are pursuing their own form of international expansion. The United States uses a “full platform” approach — all Google, all Facebook. Essentially Australia, North America and Europe completely accept the American methodology. That technical empire is likely to continue.
  • The Chinese have realized that the U.S. empire is too difficult to penetrate, so they are looking elsewhere. They are trying, and generally succeeding, in Southeast Asia, the Middle East and Africa. Those regions and countries have not been a focus of U.S. tech, so their products are not built with the cultures of those countries in mind. And since their demographics are closer to China’s — lower income and lots of people, including youth — the Chinese products are a better fit.
  • The jobs that AI cannot do are those of creators, or what I call “empathetic jobs” in services, which will be the largest category that can absorb those displaced from routine jobs. Many jobs will become available in this sector, from teaching to elderly care and nursing. A great effort must be made not only to increase the number of those jobs and create a career path for them but to increase their social status, which also means increasing the pay of these jobs.
  • Policy-wise, we are seeing three approaches. The Chinese have unleashed entrepreneurs with a utilitarian passion to commercialize technology. The Americans are similarly pro-entrepreneur, but the government takes a laissez-faire attitude and the entrepreneurs carry out more moonshots. And Europe is more consumer-oriented, trying to give ownership and control of data back to the individual.
  • An AI arms race would be a grave mistake. The AI boom is more akin to the spread of electricity in the early Industrial Revolution than nuclear weapons during the Cold War. Those who take the arms-race view are more interested in political posturing than the flourishing of humanity. The value of AI as an omni-use technology rests in its creative, not destructive, potential.
  • In a way, having parallel universes should diminish conflict. They can coexist while each can learn from the other. It is not a zero-sum game of winners and losers.
  • We will see a massive migration from one kind of employment to another, not unlike during the transition from agriculture to manufacturing. It will largely be the lower-wage jobs in routine work that will be eliminated, while the ultra-rich will stand to make a lot of money from AI. Social inequality will thus widen.
  • If you were to draw a map a decade from now, you would see China’s tech zone — built not on ownership but partnerships — stretching across Southeast Asia, Indonesia, Africa and to some extent South America. The U.S. zone would entail North America, Australia and Europe. Over time, the “parallel universes” already extant in the United States and China will grow to cover the whole world.
  • There are also issues related to poorer countries who have relied on either following the old China model of low-wage manufacturing jobs or of India’s call centers. AI will replace those jobs that were created by outsourcing from the West. They will be the first to go in the next 10 years. So, underdeveloped countries will also have to look to jobs for creators and in services.
  • I am opposed to the idea of universal basic income because it provides money both to those who don’t need it as well as those who do. And it doesn’t stimulate people’s desire to work. It puts them into a kind of “useless class” category with the terrible consequence of a resentful class without dignity or status.
  • To reinvigorate people’s desire to work with dignity, some subsidy can help offset the costs of critical needs that only humans can provide. That would be a much better use of the distribution of income than giving it to every person whether they need it or not. A far better idea would be for workers of the future to have an equity share in owning the robots — universal basic capital instead of universal basic income.
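Lee’s claim that more examples yield greater accuracy can be seen in a minimal learning-curve sketch: train the same classifier on growing slices of a dataset and watch test accuracy climb. The digits dataset and logistic-regression model are illustrative stand-ins, not anything from the interview.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Same model, more data: accuracy rises with each larger slice.
for n in (50, 200, 800, len(X_train)):
    model = LogisticRegression(max_iter=5000).fit(X_train[:n], y_train[:n])
    print(f"{n:5d} examples -> test accuracy {model.score(X_test, y_test):.3f}")
```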
Javier E

Opinion | I Used to Work for Google. I Am a Conscientious Objector. - The New York Times - 0 views

  • “We can forgive your politics and focus on your technical contributions as long as you don’t do something unforgivable, like speaking to the press.”
  • This was the parting advice given to me during my exit interview from Google after spending a month internally arguing, resignation letter in hand, for the company to clarify its ethical red lines around Project Dragonfly, the effort to modify Search to meet the censorship and surveillance demands of the Chinese Communist Party.
  • When a prototype circulated internally of a system that would ostensibly allow the Chinese government to surveil Chinese users’ queries by their phone numbers, Google executives argued that it was within existing norms
  • ...8 more annotations...
  • the time has passed when tech companies can simply build tools, write algorithms and amass data without regard to who uses the technology and for what purpose.
  • Nearly a decade ago, Cisco Systems was sued in federal court on behalf of 11 members of the Falun Gong organization, who claimed that the company built a nationwide video surveillance and “forced conversion” profiling system for the Chinese government that was tailored to help Beijing crack down on the group
  • According to Cisco’s own marketing materials, the video analyzer — which would now be marketed as artificial intelligence — was the “only product capable of recognizing over 90 percent of Falun Gong pictorial information.”
  • The failure to punish Cisco set a precedent for American companies to build artificial intelligence for foreign governments to use for political oppression
  • Thermo Fisher sold DNA analyzers to aid in the current large-scale domestic surveillance and internment of hundreds of thousands of Uighurs, a predominantly Muslim ethnic group, in the region of Xinjiang.
  • Mr. Yang defended Yahoo’s human rights commitments and emphasized the importance of the Chinese market. Google used a similar defense for Dragonfly last year.
  • Tech companies are spending record amounts on lobbying and quietly fighting to limit employees’ legal protections for organizing. North American legislators would be wise to answer the call from human rights organizations and research institutions by guaranteeing explicit whistle-blower protections similar to those recently passed by the European Union
  • Ideally, they would vocally support an instrument that legally binds businesses — via international human rights law — to uphold human rights.
Javier E

Stanford launches artificial intelligence institute to put humans and ethics at the cen... - 0 views

  • “The correct answer to pretty much everything in AI is more of it,” said Schmidt, the former Google chairman. “This generation is much more socially conscious than we were, and more broadly concerned about the impact of everything they do, so you’ll see a combination of both optimism and realism.”
  • Researchers and journalists have shown how AI technologies, largely designed by white and Asian men, tend to reproduce and amplify social biases in dangerous ways. Computer vision technologies built into cameras have trouble recognizing the faces of people of color. Voice recognition struggles to pick up English accents that aren’t mainstream. Algorithms built to predict the likelihood of parole violations are rife with racial bias.
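The bias findings described above come from audits that compare a model’s error rates across demographic groups. Below is a hedged sketch of such an audit on synthetic data; the data-generating choices are assumptions, and real audits (such as those of parole-risk tools) are far more involved.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                        # synthetic protected attribute
x = rng.normal(size=(n, 3)) + group[:, None] * 0.5   # features correlated with group
y = ((x.sum(axis=1) + rng.normal(size=n)) > 1).astype(int)

pred = LogisticRegression().fit(x, y).predict(x)

# Compare false-positive rates: among people with no actual violation (y == 0),
# how often does the model flag them anyway, per group?
for g in (0, 1):
    negatives = (group == g) & (y == 0)
    print(f"group {g}: false-positive rate {pred[negatives].mean():.3f}")
```

If the two printed rates diverge, the model is making its mistakes unevenly across groups, which is exactly the pattern the Stanford researchers describe.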