History Readings: Group items tagged "artificial"

Javier E

Opinion | Climate Change, Deglobalization, Demographics, AI: The Forces Really Driving ... - 0 views

  • Economists tried to deal with the twin stresses of inflation and recession in the 1970s without success, and now here we are, 50 years and 50-plus economics Nobel Prizes later, with little ground gained
  • There’s weirdness yet to come, and a lot more than run-of-the-mill weirdness. We are entering a new epoch of crisis, a slow-motion tidal wave of risks that will wash over our economy in the next decades — namely climate change, demographics, deglobalization and artificial intelligence.
  • Their effects will range somewhere between economic regime shift and existential threat to civilization.
  • For climate, we already are seeing a glimpse of what is to come: drought, floods and far more extreme storms than in the recent past. We saw some of the implications over the past year, with supply chains broken because rivers were too dry for shipping and hydroelectric and nuclear power impaired.
  • As with climate change, demographic shifts determine societal ones, like straining the social contract between the working and the aged.
  • We are reversing the globalization of the past 40 years, with the links in our geopolitical and economic network fraying. “Friendshoring,” or moving production to friendly countries, is a new term. The geopolitical forces behind deglobalization will amplify the stresses from climate change and demographics to lead to a frenzied competition for resources and consumers.
  • The problem here, and a problem broadly with complex and dynamic systems, is that the whole doesn’t look like the sum of the parts. If you have a lot of people running around, the overall picture can look different than what any one of those people is doing. Maybe in aggregate their actions jam the doorway; maybe in aggregate they create a stampede
  • if we can’t get a firm hold on pedestrian economic issues like inflation and recession — the prospects are not bright for getting our forecasts right for these existential forces.
  • The problem is that the models don’t work when our economy is weird. And that’s precisely when we most need them to work.
  • Economics failed with the 2008 crisis because economic theory has established that it cannot predict such crises.
  • A key reason these models fail in times of crisis is that they can’t deal with a world filled with complexity or with surprising twists and turns.
  • The fourth, artificial intelligence, is a wild card. But we already are seeing risks for work and privacy, and for frightening advances in warfare.
  • we are not a mechanical system. We are humans who innovate, change with our experiences, and at times game the system
  • Reflecting on the 1987 market crash, the brilliant physicist Richard Feynman remarked on the difficulty facing economists by noting that subatomic particles don’t act based on what they think other subatomic particles are planning — but people do that.
  • What if economists can’t turn things around? This is a possibility because we are walking into a world unlike any we have seen. We can’t anticipate all the ways climate change might affect us or where our creativity will take us with A.I. Which brings us to what is called radical uncertainty, where we simply have no clue — where we are caught unaware by things we haven’t even thought of.
  • This possibility is not much on the minds of economists
  • How do we deal with risks we cannot even define? A good start is to move away from the economist’s palette of efficiency and rationality and instead look at examples of survival in worlds of radical uncertainty.
  • In our time savannas are turning to deserts. The alternative to the economist’s model is to take a coarse approach, to be more adaptable — leave some short-term fine tuning and optimization by the wayside
  • Our long term might look brighter if we act like cockroaches. An insect fine tuned for a jungle may dominate the cockroach in that environment. But once the world changes and the jungle disappears, it will as well.
Javier E

Opinion | Ozempic Is Repairing a Hole in Our Diets Created by Processed Foods - The New... - 0 views

  • In the United States (where I now split my time), over 70 percent of people are overweight or obese, and according to one poll, 47 percent of respondents said they were willing to pay to take the new weight-loss drugs.
  • They cause users to lose an average of 10 to 20 percent of their body weight, and clinical trials suggest that the next generation of drugs (probably available soon) leads to a 24 percent loss, on average
  • I was born in 1979, and by the time I was 21, obesity rates in the United States had more than doubled. They have skyrocketed since. The obvious question is, why? And how do these new weight-loss drugs work?
  • The answer to both lies in one word: satiety. It’s a concept that we don’t use much in everyday life but that we’ve all experienced at some point. It describes the sensation of having had enough and not wanting any more.
  • The primary reason we have gained weight at a pace unprecedented in human history is that our diets have radically changed in ways that have deeply undermined our ability to feel sated
  • The evidence is clear that the kind of food my father grew up eating quickly makes you feel full. But the kind of food I grew up eating, much of which is made in factories, often with artificial chemicals, left me feeling empty and as if I had a hole in my stomach
  • In a recent study of what American children eat, ultraprocessed food was found to make up 67 percent of their daily diet. This kind of food makes you want to eat more and more. Satiety comes late, if at all.
  • After Dr. Kenny moved to the United States in 2000, in his 20s, he gained 30 pounds in two years. He began to wonder if the American diet had some kind of strange effect on our brains and our cravings, so he designed an experiment to test it.
  • He and his colleague Paul Johnson raised a group of rats in a cage and gave them an abundant supply of healthy, balanced rat chow made out of the kind of food rats had been eating for a very long time. The rats would eat it when they were hungry, and then they seemed to feel sated and stopped. They did not become fat.
  • then Dr. Kenny and his colleague exposed the rats to an American diet: fried bacon, Snickers bars, cheesecake and other treats. They went crazy for it. The rats would hurl themselves into the cheesecake, gorge themselves and emerge with their faces and whiskers totally slicked with it. They quickly lost almost all interest in the healthy food, and the restraint they used to show around healthy food disappeared. Within six weeks, their obesity rates soared.
  • They took all the processed food away and gave the rats their old healthy diet. Dr. Kenny was confident that they would eat more of it, proving that processed food had expanded their appetites. But something stranger happened. It was as though the rats no longer recognized healthy food as food at all, and they barely ate it. Only when they were starving did they reluctantly start to consume it again.
  • Drugs like Ozempic work precisely by making us feel full.
  • processed and ultraprocessed food create a raging hole of hunger, and these treatments can repair that hole
  • the drugs are “an artificial solution to an artificial problem.”
  • Yet we have reacted to this crisis largely caused by the food industry as if it were caused only by individual moral dereliction
  • Why do we turn our anger inward and not outward at the main cause of the crisis? And by extension, why do we seek to shame people taking Ozempic but not those who, say, take drugs to lower their blood pressure?
  • The first is the belief that obesity is a sin.
  • The second idea is that we are all in a competition when it comes to weight. Ours is a society full of people fighting against the forces in our food that are making us fatter.
  • Looked at in this way, people on Ozempic can resemble cyclists like Lance Armstrong who used performance-enhancing drugs.
  • We can’t find our way to a sane, nontoxic conversation about obesity or Ozempic until we bring these rarely spoken thoughts into the open and reckon with them
  • remember the competition isn’t between you and your neighbor who’s on weight-loss drugs. It’s between you and a food industry constantly designing new ways to undermine your satiety.
  • Reducing or reversing obesity hugely boosts health, on average: We know from years of studying bariatric surgery that it slashes the risks of cancer, heart disease and diabetes-related death. Early indications are that the new anti-obesity drugs are moving people in a similar radically healthier direction,
  • But these drugs may increase the risk for thyroid cancer.
  • Do we want these weight loss drugs to be another opportunity to tear one another down? Or do we want to realize that the food industry has profoundly altered the appetites of us all — leaving us trapped in the same cage, scrambling to find a way out?
Javier E

The Rise and Fall of BNN Breaking, an AI-Generated News Outlet - The New York Times - 0 views

  • His is just one of many complaints against BNN, a site based in Hong Kong that published numerous falsehoods during its short time online as a result of what appeared to be generative A.I. errors.
  • During the two years that BNN was active, it had the veneer of a legitimate news service, claiming a worldwide roster of “seasoned” journalists and 10 million monthly visitors, surpassing The Chicago Tribune’s self-reported audience. Prominent news organizations like The Washington Post, Politico and The Guardian linked to BNN’s stories
  • Google News often surfaced them, too
  • A closer look, however, would have revealed that individual journalists at BNN published lengthy stories as often as multiple times a minute, writing in generic prose familiar to anyone who has tinkered with the A.I. chatbot ChatGPT.
  • How easily the site and its mistakes entered the ecosystem for legitimate news highlights a growing concern: A.I.-generated content is upending, and often poisoning, the online information supply.
  • The websites, which seem to operate with little to no human supervision, often have generic names — such as iBusiness Day and Ireland Top News — that are modeled after actual news outlets. They crank out material in more than a dozen languages, much of which is not clearly disclosed as being artificially generated, but could easily be mistaken as being created by human writers.
  • Now, experts say, A.I. could turbocharge the threat, easily ripping off the work of journalists and enabling error-ridden counterfeits to circulate even more widely — as has already happened with travel guidebooks, celebrity biographies and obituaries.
  • The result is a machine-powered ouroboros that could squeeze out sustainable, trustworthy journalism. Even though A.I.-generated stories are often poorly constructed, they can still outrank their source material on search engines and social platforms, which often use A.I. to help position content. The artificially elevated stories can then divert advertising spending, which is increasingly assigned by automated auctions without human oversight.
  • NewsGuard, a company that monitors online misinformation, identified more than 800 websites that use A.I. to produce unreliable news content.
  • Low-paid freelancers and algorithms have churned out much of the faux-news content, prizing speed and volume over accuracy.
  • Former employees said they thought they were joining a legitimate news operation; one had mistaken it for BNN Bloomberg, a Canadian business news channel. BNN’s website insisted that “accuracy is nonnegotiable” and that “every piece of information underwent rigorous checks, ensuring our news remains an undeniable source of truth.”
  • this was not a traditional journalism outlet. While the journalists could occasionally report and write original articles, they were asked to primarily use a generative A.I. tool to compose stories, said Ms. Chakraborty and Hemin Bakir, a journalist based in Iraq who worked for BNN for almost a year. They said they had uploaded articles from other news outlets to the generative A.I. tool to create paraphrased versions for BNN to publish.
  • Mr. Chahal’s evangelism carried weight with his employees because of his wealth and seemingly impressive track record, they said. Born in India and raised in Northern California, Mr. Chahal made millions in the online advertising business in the early 2000s and wrote a how-to book about his rags-to-riches story that landed him an interview with Oprah Winfrey.
  • Mr. Chahal told Mr. Bakir to focus on checking stories that had a significant number of readers, such as those republished by MSN.com. Employees did not want their bylines on stories generated purely by A.I., but Mr. Chahal insisted on this. Soon, the tool randomly assigned their names to stories.
  • This crossed a line for some BNN employees, according to screenshots of WhatsApp conversations reviewed by The Times, in which they told Mr. Chahal that they were receiving complaints about stories they didn’t realize had been published under their names.
  • According to three journalists who worked at BNN and screenshots of WhatsApp conversations reviewed by The Times, Mr. Chahal regularly directed profanities at employees and called them idiots and morons. When employees said purely A.I.-generated news, such as the Fanning story, should be published under the generic “BNN Newsroom” byline, Mr. Chahal was dismissive. “When I do this, I won’t have a need for any of you,” he wrote on WhatsApp. Mr. Bakir replied to Mr. Chahal that assigning journalists’ bylines to A.I.-generated stories was putting their integrity and careers in “jeopardy.”
  • This was a strategy that Mr. Chahal favored, according to former BNN employees. He used his news service to exercise grudges, publishing slanted stories about a politician from San Francisco he disliked, Wikipedia after it published a negative entry about BNN Breaking and Elon Musk after accounts belonging to Mr. Chahal, his wife and his companies were suspended.
  • The increasing popularity of programmatic advertising — which uses algorithms to automatically place ads across the internet — allows A.I.-powered news sites to generate revenue by mass-producing low-quality clickbait content
  • Experts are nervous about how A.I.-fueled news could overwhelm accurate reporting with a deluge of junk content distorted by machine-powered repetition. A particular worry is that A.I. aggregators could chip away even further at the viability of local journalism, siphoning away its revenue and damaging its credibility by contaminating the information ecosystem.
abbykleman

As Artificial Intelligence Evolves, So Does Its Criminal Potential - 0 views

  • Imagine receiving a phone call from your aging mother seeking your help because she has forgotten her banking password. Except it's not your mother. The voice on the other end of the phone call just sounds deceptively like her.
Javier E

Welcome, Robot Overlords. Please Don't Fire Us? | Mother Jones - 0 views

  • There will be no place to go but the unemployment line.
  • Slowly but steadily, labor's share of total national income has gone down, while the share going to capital owners has gone up. The most obvious effect of this is the skyrocketing wealth of the top 1 percent, due mostly to huge increases in capital gains and investment income.
  • at this point our tale takes a darker turn. What do we do over the next few decades as robots become steadily more capable and steadily begin taking away all our jobs?
  • The economics community just hasn't spent much time over the past couple of decades focusing on the effect that machine intelligence is likely to have on the labor market
  • The Digital Revolution is different because computers can perform cognitive tasks too, and that means machines will eventually be able to run themselves. When that happens, they won't just put individuals out of work temporarily. Entire classes of workers will be out of work permanently. In other words, the Luddites weren't wrong. They were just 200 years too early
  • while it's easy to believe that some jobs can never be done by machines—do the elderly really want to be tended by robots?—that may not be true.
  • Robotic pets are growing so popular that Sherry Turkle, an MIT professor who studies the way we interact with technology, is uneasy about it: "The idea of some kind of artificial companionship," she says, "is already becoming the new normal."
  • robots will take over more and more jobs. And guess who will own all these robots? People with money, of course. As this happens, capital will become ever more powerful and labor will become ever more worthless. Those without money—most of us—will live on whatever crumbs the owners of capital allow us.
  • Economist Paul Krugman recently remarked that our long-standing belief in skills and education as the keys to financial success may well be outdated. In a blog post titled "Rise of the Robots," he reviewed some recent economic data and predicted that we're entering an era where the prime cause of income inequality will be something else entirely: capital vs. labor.
  • We're already seeing them, and not just because of the crash of 2008. They started showing up in the statistics more than a decade ago. For a while, though, they were masked by the dot-com and housing bubbles, so when the financial crisis hit, years' worth of decline was compressed into 24 months. The trend lines dropped off the cliff.
  • In the economics literature, the increase in the share of income going to capital owners is known as capital-biased technological change
  • The question we want to answer is simple: If CBTC is already happening—not a lot, but just a little bit—what trends would we expect to see? What are the signs of a computer-driven economy?
  • if automation were displacing labor, we'd expect to see a steady decline in the share of the population that's employed.
  • Second, we'd expect to see fewer job openings than in the past.
  • Third, as more people compete for fewer jobs, we'd expect to see middle-class incomes flatten in a race to the bottom.
  • Fourth, with consumption stagnant, we'd expect to see corporations stockpile more cash and, fearing weaker sales, invest less in new products and new factories
  • Fifth, as a result of all this, we'd expect to see labor's share of national income decline and capital's share rise.
  • The modern economy is complex, and most of these trends have multiple causes.
  • in another sense, we should be very alarmed. It's one thing to suggest that robots are going to cause mass unemployment starting in 2030 or so. We'd have some time to come to grips with that. But the evidence suggests that—slowly, haltingly—it's happening already, and we're simply not prepared for it.
  • the first jobs to go will be middle-skill jobs. Despite impressive advances, robots still don't have the dexterity to perform many common kinds of manual labor that are simple for humans—digging ditches, changing bedpans. Nor are they any good at jobs that require a lot of cognitive skill—teaching classes, writing magazine articles
  • in the middle you have jobs that are both fairly routine and require no manual dexterity. So that may be where the hollowing out starts: with desk jobs in places like accounting or customer support.
  • In fact, there's even a digital sports writer. It's true that a human being wrote this story—ask my mother if you're not sure—but in a decade or two I might be out of a job too
  • Doctors should probably be worried as well. Remember Watson, the Jeopardy!-playing computer? It's now being fed millions of pages of medical information so that it can help physicians do a better job of diagnosing diseases. In another decade, there's a good chance that Watson will be able to do this without any human help at all.
  • Take driverless cars.
  • Most likely, owners of capital would strongly resist higher taxes, as they always have, while workers would be unhappy with their enforced idleness. Still, the ancient Romans managed to get used to it—with slave labor playing the role of robots—and we might have to, as well.
  • we'll need to let go of some familiar convictions. Left-leaning observers may continue to think that stagnating incomes can be improved with better education and equality of opportunity. Conservatives will continue to insist that people without jobs are lazy bums who shouldn't be coddled. They'll both be wrong.
  • Corporate executives should worry too. For a while, everything will seem great for them: Falling labor costs will produce heftier profits and bigger bonuses. But then it will all come crashing down. After all, robots might be able to produce goods and services, but they can't consume them
  • we'll probably have only a few options open to us. The simplest, because it's relatively familiar, is to tax capital at high rates and use the money to support displaced workers. In other words, as The Economist's Ryan Avent puts it, "redistribution, and a lot of it."
  • would we be happy in a society that offers real work to a dwindling few and bread and circuses for the rest?
  • The next step might be passenger vehicles on fixed routes, like airport shuttles. Then long-haul trucks. Then buses and taxis. There are 2.5 million workers who drive trucks, buses, and taxis for a living, and there's a good chance that, one by one, all of them will be displaced
  •  economist Noah Smith suggests that we might have to fundamentally change the way we think about how we share economic growth. Right now, he points out, everyone is born with an endowment of labor by virtue of having a body and a brain that can be traded for income. But what to do when that endowment is worth a fraction of what it is today? Smith's suggestion: "Why not also an endowment of capital? What if, when each citizen turns 18, the government bought him or her a diversified portfolio of equity?"
  • In simple terms, if owners of capital are capturing an increasing fraction of national income, then that capital needs to be shared more widely if we want to maintain a middle-class society.
  • it's time to start thinking about our automated future in earnest. The history of mass economic displacement isn't encouraging—fascists in the '20s, Nazis in the '30s—and recent high levels of unemployment in Greece and Italy have already produced rioting in the streets and larger followings for right-wing populist parties. And that's after only a few years of misery.
  • When the robot revolution finally starts to happen, it's going to happen fast, and it's going to turn our world upside down. It's easy to joke about our future robot overlords—R2-D2 or the Terminator?—but the challenge that machine intelligence presents really isn't science fiction anymore. Like Lake Michigan with an inch of water in it, it's happening around us right now even if it's hard to see
  • A robotic paradise of leisure and contemplation eventually awaits us, but we have a long and dimly lit tunnel to navigate before we get there.
Javier E

The Evidence Supports Artificial Sweeteners Over Sugar - The New York Times - 0 views

  • what about sugar? We should acknowledge that when I, and many others, address sugar in contexts like these, we are talking about added sugars, not the naturally occurring sugars or carbohydrates you find in things like fruit. Those are, for the most part, not the problem. Added sugars are
  • The Centers for Disease Control and Prevention reports that children are consuming between 282 calories (for girls) and 362 calories (for boys) of added sugars per day on average. This means that more than 15 percent of their dietary caloric intake is from added sugars
  • The increased risk of death began once a person consumed the equivalent of one 20-ounce Mountain Dew in a 2,000-calorie diet, and reached more than a fourfold increase if people consumed more than one-third of their diet in added sugars.
Javier E

Will You Lose Your Job to a Robot? Silicon Valley Is Split - NYTimes.com - 0 views

  • The question for Silicon Valley is whether we’re heading toward a robot-led coup or a leisure-filled utopia.
  • Interviews with 2,551 people who make, research and analyze new technology. Most agreed that robotics and artificial intelligence would transform daily life by 2025, but respondents were almost evenly split about what that might mean for the economy and employment.
  • techno-optimists. They believe that even though machines will displace many jobs in a decade, technology and human ingenuity will produce many more, as happened after the agricultural and industrial revolutions. The meaning of “job” might change, too, if people find themselves with hours of free time because the mundane tasks that fill our days are automated.
  • The other half agree that some jobs will disappear, but they are not convinced that new ones will take their place, even for some highly skilled workers. They fear a future of widespread unemployment, deep inequality and violent uprisings — particularly if policy makers and educational institutions don’t step in.
  • “We’re going to have to come to grips with a long-term employment crisis and the fact that — strictly from an economic point of view, not a moral point of view — there are more and more ‘surplus humans.’”
  • “The degree of integration of A.I. into daily life will depend very much, as it does now, on wealth. The people whose personal digital devices are day-trading for them, and doing the grocery shopping and sending greeting cards on their behalf, are people who are living a different life than those who are worried about missing a day at one of their three jobs due to being sick, and losing the job and being unable to feed their children.”
  • “Only the best-educated humans will compete with machines. And education systems in the U.S. and much of the rest of the world are still sitting students in rows and columns, teaching them to keep quiet and memorize what is told to them, preparing them for life in a 20th century factory.”
  • “We hardly dwell on the fact that someone trying to pick a career path that is not likely to be automated will have a very hard time making that choice. X-ray technician? Outsourced already, and automation in progress. The race between automation and human work is won by automation.”
  • “Robotic sex partners will be commonplace. … The central question of 2025 will be: What are people for in a world that does not need their labor, and where only a minority are needed to guide the ‘bot-based economy?’”
  • “Employment will be mostly very skilled labor — and even those jobs will be continuously whittled away by increasingly sophisticated machines. Live, human salespeople, nurses, doctors, actors will be symbols of luxury, the silk of human interaction as opposed to the polyester of simulated human contact.”
  • “The biggest exception will be jobs that depend upon empathy as a core capacity — schoolteacher, personal service worker, nurse. These jobs are often those traditionally performed by women. One of the bigger social questions of the mid-late 2020s will be the role of men in this world.”
Javier E

Drones Beaming Web Access Are in the Stars for Facebook - NYTimes.com - 0 views

  • in a high-stakes competition for domination of the Internet, in which Google wields high-altitude balloons and high-speed fiber networks and Amazon has experimental delivery drones and colossal data centers, Facebook is under pressure to show that it, too, can pursue projects that are more speculative than product.
  • “The Amazons, Googles and Facebooks are exploring completely new things that will change the way we live,
  • Facebook’s drone team, which came to the company through the acquisition last year of the drone maker Ascenta, say they believe their solar-powered craft can eventually be aloft up to three months at a time, beaming high-speed data from 60,000 to 90,000 feet to some of the world’s remotest regions via laser. Test flights are to begin this summer, though full commercial deployment may take years
  • “We want to serve every person in the world” with high-speed Internet signals, said Yael Maguire, head of Facebook’s Connectivity Lab. The dream — assuming regulators around the planet go along with it — is a fleet as big as 1,000 drones connecting people to the Internet. And where it is too remote even for the drones, satellites would do the trick.
  • Facebook’s effort in artificial intelligence is called deep learning, for the number of levels at which it critically analyzes information. By figuring out context, Facebook better knows why people anywhere are looking at something, and what else it can do to keep them engaged.
  • For the long term, Mr. Zuckerberg hopes Facebook’s A.I. will translate languages on the fly, know strangers you might meet and, of course, bring you the highest-value ads
  • Because, in the end, it’s still about getting you to look at more ads. “The fundamental thing about advertising is people paying to get a message in front of you,” Mr. Schroepfer said. “That won’t go away in my life, though the form may change.”
drewmangan1

Hillary Clinton says early lead was 'artificial' - CNNPolitics.com - 0 views

  • "That is really artificial, all of those early soundings and polls," Clinton said. "Once you get into it, this is a Democratic election for our nominee and it gets really close, exciting. And it really depends upon on who can make the best case that you can be the nominee to beat whoever the Republicans put up and try to get your folks who support you to come out."
Javier E

Opinion | The Deadly Soul of a New Machine - The New York Times - 0 views

  • it’s not too much of a reach to see Flight 610 as representative of the hinge in history we’ve arrived at — with the bots, the artificial intelligence and the social media algorithms now shaping the fate of humanity at a startling pace.
  • Like the correction system in the 737, these inventions are designed to make life easier and safer — or at least more profitable for the owners.
  • The C.E.O. of Microsoft, Satya Nadella, hit a similar cautionary note at the company’s recent annual shareholder meeting. Big Tech, he said, should be asking “not what computers can do, but what they should do.”
  • The overall idea is to outsource certain human functions, the drudgery and things prone to faulty judgment, while retaining master control. The question is: At what point is control lost and the creations take over? How about now?
  • It’s the “can do” part that should scare you. Facebook, once all puppies, baby pictures and high school reunion updates, is a monster of misinformation.
  • As haunting as those final moments inside the cockpit of Flight 610 were, it’s equally haunting to grasp the full meaning of what happened: The system overrode the humans and killed everyone. Our invention. Our folly.
Javier E

Tech C.E.O.s Are in Love With Their Principal Doomsayer - The New York Times - 0 views

  • The futurist philosopher Yuval Noah Harari worries about a lot.
  • He worries that Silicon Valley is undermining democracy and ushering in a dystopian hellscape in which voting is obsolete.
  • He worries that by creating powerful influence machines to control billions of minds, the big tech companies are destroying the idea of a sovereign individual with free will.
  • He worries that because the technological revolution’s work requires so few laborers, Silicon Valley is creating a tiny ruling class and a teeming, furious “useless class.”
  • If this is his harrowing warning, then why do Silicon Valley C.E.O.s love him so?
  • When Mr. Harari toured the Bay Area this fall to promote his latest book, the reception was incongruously joyful. Reed Hastings, the chief executive of Netflix, threw him a dinner party. The leaders of X, Alphabet’s secretive research division, invited Mr. Harari over. Bill Gates reviewed the book (“Fascinating” and “such a stimulating writer”) in The New York Times.
  • it’s insane he’s so popular, they’re all inviting him to campus — yet what Yuval is saying undermines the premise of the advertising- and engagement-based model of their products,
  • Part of the reason might be that Silicon Valley, at a certain level, is not optimistic on the future of democracy. The more of a mess Washington becomes, the more interested the tech world is in creating something else
  • he brought up Aldous Huxley. Generations have been horrified by his novel “Brave New World,” which depicts a regime of emotion control and painless consumption. Readers who encounter the book today, Mr. Harari said, often think it sounds great. “Everything is so nice, and in that way it is an intellectually disturbing book because you’re really hard-pressed to explain what’s wrong with it,” he said. “And you do get today a vision coming out of some people in Silicon Valley which goes in that direction.”
  • The story of his current fame begins in 2011, when he published a book of notable ambition: to survey the whole of human existence. “Sapiens: A Brief History of Humankind,” first released in Hebrew, did not break new ground in terms of historical research. Nor did its premise — that humans are animals and our dominance is an accident — seem a likely commercial hit. But the casual tone and smooth way Mr. Harari tied together existing knowledge across fields made it a deeply pleasing read, even as the tome ended on the notion that the process of human evolution might be over.
  • He followed up with “Homo Deus: A Brief History of Tomorrow,” which outlined his vision of what comes after human evolution. In it, he describes Dataism, a new faith based around the power of algorithms. Mr. Harari’s future is one in which big data is worshiped, artificial intelligence surpasses human intelligence, and some humans develop Godlike abilities.
  • Now, he has written a book about the present and how it could lead to that future: “21 Lessons for the 21st Century.” It is meant to be read as a series of warnings. His recent TED Talk was called “Why fascism is so tempting — and how your data could power it.”
  • At the Alphabet talk, Mr. Harari had been accompanied by his publisher. They said that the younger employees had expressed concern about whether their work was contributing to a less free society, while the executives generally thought their impact was positive
  • Some workers had tried to predict how well humans would adapt to large technological change based on how they have responded to small shifts, like a new version of Gmail. Mr. Harari told them to think more starkly: If there isn’t a major policy intervention, most humans probably will not adapt at all.
  • It made him sad, he told me, to see people build things that destroy their own societies, but he works every day to maintain an academic distance and remind himself that humans are just animals. “Part of it is really coming from seeing humans as apes, that this is how they behave,” he said, adding, “They’re chimpanzees. They’re sapiens. This is what they do.”
  • this summer, Mark Zuckerberg, who has recommended Mr. Harari to his book club, acknowledged a fixation with the autocrat Caesar Augustus. “Basically,” Mr. Zuckerberg told The New Yorker, “through a really harsh approach, he established 200 years of world peace.”
  • He said he had resigned himself to tech executives’ global reign, pointing out how much worse the politicians are. “I’ve met a number of these high-tech giants, and generally they’re good people,” he said. “They’re not Attila the Hun. In the lottery of human leaders, you could get far worse.”
  • Some of his tech fans, he thinks, come to him out of anxiety. “Some may be very frightened of the impact of what they are doing,” Mr. Harari said
  • as he spoke about meditation — Mr. Harari spends two hours each day and two months each year in silence — he became commanding. In a region where self-optimization is paramount and meditation is a competitive sport, Mr. Harari’s devotion confers hero status.
  • He told the audience that free will is an illusion, and that human rights are just a story we tell ourselves. Political parties, he said, might not make sense anymore. He went on to argue that the liberal world order has relied on fictions like “the customer is always right” and “follow your heart,” and that these ideas no longer work in the age of artificial intelligence, when hearts can be manipulated at scale.
  • Everyone in Silicon Valley is focused on building the future, Mr. Harari continued, while most of the world’s people are not even needed enough to be exploited. “Now you increasingly feel that there are all these elites that just don’t need me,” he said. “And it’s much worse to be irrelevant than to be exploited.”
  • The useless class he describes is uniquely vulnerable. “If a century ago you mounted a revolution against exploitation, you knew that when bad comes to worse, they can’t shoot all of us because they need us,” he said, citing army service and factory work.
  • Now it is becoming less clear why the ruling elite would not just kill the new useless class. “You’re totally expendable,” he told the audience.
  • This, Mr. Harari told me later, is why Silicon Valley is so excited about the concept of universal basic income, or stipends paid to people regardless of whether they work. The message is: “We don’t need you. But we are nice, so we’ll take care of you.”
  • On Sept. 14, he published an essay in The Guardian assailing another old trope — that “the voter knows best.”
  • “If humans are hackable animals, and if our choices and opinions don’t reflect our free will, what should the point of politics be?” he wrote. “How do you live when you realize … that your heart might be a government agent, that your amygdala might be working for Putin, and that the next thought that emerges in your mind might well be the result of some algorithm that knows you better than you know yourself? These are the most interesting questions humanity now faces.”
  • Today, they have a team of eight based in Tel Aviv working on Mr. Harari’s projects. The director Ridley Scott and documentarian Asif Kapadia are adapting “Sapiens” into a TV show, and Mr. Harari is working on children’s books to reach a broader audience.
  • Being gay, Mr. Harari said, has helped his work — it set him apart to study culture more clearly because it made him question the dominant stories of his own conservative Jewish society. “If society got this thing wrong, who guarantees it didn’t get everything else wrong as well?” he said
  • “If I was a superhuman, my superpower would be detachment,” Mr. Harari added. “O.K., so maybe humankind is going to disappear — O.K., let’s just observe.”
  • They just finished “Dear White People,” and they loved the Australian series “Please Like Me.” That night, they had plans to either meet Facebook executives at company headquarters or watch the YouTube show “Cobra Kai.”
Javier E

The great artificial intelligence duopoly - The Washington Post - 0 views

  • The AI revolution will have two engines — China and the United States — pushing its progress swiftly forward. It is unlike any previous technological revolution that emerged from a singular cultural setting. Having two engines will further accelerate the pace of technology.
  • WorldPost: In your book, you talk about the “data gap” between these two engines. What do you mean by that? Lee: Data is the raw material on which AI runs. It is like the role of oil in powering an industrial economy. As an AI algorithm is fed more examples of the phenomenon you want the algorithm to understand, it gains greater and greater accuracy. The more faces you show a facial recognition algorithm, the fewer mistakes it will make in recognizing your face
  • All data is not the same, however. China and the United States have different strengths when it comes to data. The gap emerges when you consider the breadth, quality and depth of the data. Breadth means the number of users, the population whose actions are captured in data. Quality means how well-structured and well-labeled the data is. Depth means how many different data points are generated about the activities of each user.
  • Chinese and American companies are on relatively even footing when it comes to breadth. Though American Internet companies have a smaller domestic user base than China, which has over a billion users on 4G devices, the best American companies can also draw in users from around the globe, bringing their total user base to over a billion.
  • when it comes to depth of data, China has the upper hand. Chinese Internet users channel a much larger portion of their daily activities, transactions and interactions through their smartphones. They use their smartphones for managing their daily lives, from buying groceries at the market to paying their utility bills, booking train or bus tickets and to take out loans, among other things.
  • Weaving together data from mobile payments, public services, financial management and shared mobility gives Chinese companies a deep and more multi-dimensional picture of their users. That allows their AI algorithms to precisely tailor product offerings to each individual. In the current age of AI implementation, this will likely lead to a substantial acceleration and deepening of AI’s impact across China’s economy. That is where the “data gap” appears
  • The radically different business model in China, married to Chinese user habits, creates indigenous branding and monetization strategies as well as an entirely alternative infrastructure for apps and content. It is therefore very difficult, if not impossible, for any American company to try to enter China’s market or vice versa
  • companies in both countries are pursuing their own form of international expansion. The United States uses a “full platform” approach — all Google, all Facebook. Essentially Australia, North America and Europe completely accept the American methodology. That technical empire is likely to continue.
  • The Chinese have realized that the U.S. empire is too difficult to penetrate, so they are looking elsewhere. They are trying, and generally succeeding, in Southeast Asia, the Middle East and Africa. Those regions and countries have not been a focus of U.S. tech, so their products are not built with the cultures of those countries in mind. And since their demographics are closer to China’s — lower income and lots of people, including youth — the Chinese products are a better fit.
  • If you were to draw a map a decade from now, you would see China’s tech zone — built not on ownership but partnerships — stretching across Southeast Asia, Indonesia, Africa and to some extent South America. The U.S. zone would entail North America, Australia and Europe. Over time, the “parallel universes” already extant in the United States and China will grow to cover the whole world.
  • Policy-wise, we are seeing three approaches. The Chinese have unleashed entrepreneurs with a utilitarian passion to commercialize technology. The Americans are similarly pro-entrepreneur, but the government takes a laissez-faire attitude and the entrepreneurs carry out more moonshots. And Europe is more consumer-oriented, trying to give ownership and control of data back to the individual.
  • An AI arms race would be a grave mistake. The AI boom is more akin to the spread of electricity in the early Industrial Revolution than nuclear weapons during the Cold War. Those who take the arms-race view are more interested in political posturing than the flourishing of humanity. The value of AI as an omni-use technology rests in its creative, not destructive, potential.
  • In a way, having parallel universes should diminish conflict. They can coexist while each can learn from the other. It is not a zero-sum game of winners and losers.
  • We will see a massive migration from one kind of employment to another, not unlike during the transition from agriculture to manufacturing. It will largely be the lower-wage jobs in routine work that will be eliminated, while the ultra-rich will stand to make a lot of money from AI. Social inequality will thus widen.
  • The jobs that AI cannot do are those of creators, or what I call “empathetic jobs” in services, which will be the largest category that can absorb those displaced from routine jobs. Many jobs will become available in this sector, from teaching to elderly care and nursing. A great effort must be made not only to increase the number of those jobs and create a career path for them but to increase their social status, which also means increasing the pay of these jobs.
  • There are also issues related to poorer countries who have relied on either following the old China model of low-wage manufacturing jobs or of India’s call centers. AI will replace those jobs that were created by outsourcing from the West. They will be the first to go in the next 10 years. So, underdeveloped countries will also have to look to jobs for creators and in services.
  • I am opposed to the idea of universal basic income because it provides money both to those who don’t need it as well as those who do. And it doesn’t stimulate people’s desire to work. It puts them into a kind of “useless class” category with the terrible consequence of a resentful class without dignity or status.
  • To reinvigorate people’s desire to work with dignity, some subsidy can help offset the costs of critical needs that only humans can provide. That would be a much better use of the distribution of income than giving it to every person whether they need it or not. A far better idea would be for workers of the future to have an equity share in owning the robots — universal basic capital instead of universal basic income.
Javier E

Opinion | I Used to Work for Google. I Am a Conscientious Objector. - The New York Times - 0 views

  • “We can forgive your politics and focus on your technical contributions as long as you don’t do something unforgivable, like speaking to the press.”
  • This was the parting advice given to me during my exit interview from Google after spending a month internally arguing, resignation letter in hand, for the company to clarify its ethical red lines around Project Dragonfly, the effort to modify Search to meet the censorship and surveillance demands of the Chinese Communist Party.
  • When a prototype circulated internally of a system that would ostensibly allow the Chinese government to surveil Chinese users’ queries by their phone numbers, Google executives argued that it was within existing norms
  • the time has passed when tech companies can simply build tools, write algorithms and amass data without regard to who uses the technology and for what purpose.
  • Nearly a decade ago, Cisco Systems was sued in federal court on behalf of 11 members of the Falun Gong organization, who claimed that the company built a nationwide video surveillance and “forced conversion” profiling system for the Chinese government that was tailored to help Beijing crack down on the group
  • According to Cisco’s own marketing materials, the video analyzer — which would now be marketed as artificial intelligence — was the “only product capable of recognizing over 90 percent of Falun Gong pictorial information.”
  • The failure to punish Cisco set a precedent for American companies to build artificial intelligence for foreign governments to use for political oppression
  • Thermo Fisher sold DNA analyzers to aid in the current large-scale domestic surveillance and internment of hundreds of thousands of Uighurs, a predominantly Muslim ethnic group, in the region of Xinjiang.
  • Mr. Yang defended Yahoo’s human rights commitments and emphasized the importance of the Chinese market. Google used a similar defense for Dragonfly last year.
  • Tech companies are spending record amounts on lobbying and quietly fighting to limit employees’ legal protections for organizing. North American legislators would be wise to answer the call from human rights organizations and research institutions by guaranteeing explicit whistle-blower protections similar to those recently passed by the European Union
  • Ideally, they would vocally support an instrument that legally binds businesses — via international human rights law — to uphold human rights.
Javier E

Stanford launches artificial intelligence institute to put humans and ethics at the cen... - 0 views

  • “The correct answer to pretty much everything in AI is more of it,” said Schmidt, the former Google chairman. “This generation is much more socially conscious than we were, and more broadly concerned about the impact of everything they do, so you’ll see a combination of both optimism and realism.”
  • Researchers and journalists have shown how AI technologies, largely designed by white and Asian men, tend to reproduce and amplify social biases in dangerous ways. Computer vision technologies built into cameras have trouble recognizing the faces of people of color. Voice recognition struggles to pick up English accents that aren’t mainstream. Algorithms built to predict the likelihood of parole violations are rife with racial bias.
Javier E

Google Chief Economist Hal Varian Argues Automation Is Essential - CIO Journal. - WSJ - 0 views

  • Hal Varian, chief economist at Alphabet Inc.-owned Google, is optimistic about the overall impact of automation on the worldwide economy.
  • Automating routine, predictable tasks will help mitigate the effects of a tight labor market over the next decade
  • “Automation, in my view, is coming along just in time to address this coming period of labor shortages,”
  • Free tutorial videos on platforms like Google-owned YouTube will also mitigate negative effects of the tight labor market,
  • “We never had a technology before that could educate such a broad group of people any time on an as-needed basis for free,”
  • Jobs where one person is doing the same, routine task over and over again, in an automobile-manufacturing assembly line, for example, will be the first to be automated, he said. “If you look at environments that don’t have those characteristics, then the possibilities of automation become much more problematic,”
  • Analysts at Forrester Research Inc. predict that over the next five years, about 4 million jobs will be lost in the U.S. as a result of artificial intelligence and related technology, including software robots.
  • Gartner Inc. has said that artificial intelligence will create 2.3 million jobs in 2020, while eliminating 1.8 million.
  • “If you want to see what the future looks like in an extreme case, go to Japan, where there are vending machines that are providing so many things because of the shortage of labor,” he said.
  • “The line in Silicon Valley is, ‘We always overestimate the amount of change that can occur in a year and we underestimate what can occur in a decade,’” he said. “So I think that’s a very good principle to keep in mind.”
Javier E

'Fiction is outperforming reality': how YouTube's algorithm distorts truth | Technology... - 0 views

  • There are 1.5 billion YouTube users in the world, which is more than the number of households that own televisions. What they watch is shaped by this algorithm, which skims and ranks billions of videos to identify 20 “up next” clips that are both relevant to a previous video and most likely, statistically speaking, to keep a person hooked on their screen.
  • Company insiders tell me the algorithm is the single most important engine of YouTube’s growth
  • YouTube engineers describe it as one of the “largest scale and most sophisticated industrial recommendation systems in existence”
  • Lately, it has also become one of the most controversial. The algorithm has been found to be promoting conspiracy theories about the Las Vegas mass shooting and incentivising, through recommendations, a thriving subculture that targets children with disturbing content
  • One YouTube creator who was banned from making advertising revenues from his strange videos – which featured his children receiving flu shots, removing earwax, and crying over dead pets – told a reporter he had only been responding to the demands of Google’s algorithm. “That’s what got us out there and popular,” he said. “We learned to fuel it and do whatever it took to please the algorithm.”
  • academics have speculated that YouTube’s algorithms may have been instrumental in fuelling disinformation during the 2016 presidential election. “YouTube is the most overlooked story of 2016,” Zeynep Tufekci, a widely respected sociologist and technology critic, tweeted back in October. “Its search and recommender algorithms are misinformation engines.”
  • Those are not easy questions to answer. Like all big tech companies, YouTube does not allow us to see the algorithms that shape our lives. They are secret formulas, proprietary software, and only select engineers are entrusted to work on the algorithm
  • Guillaume Chaslot, a 36-year-old French computer programmer with a PhD in artificial intelligence, was one of those engineers.
  • The experience led him to conclude that the priorities YouTube gives its algorithms are dangerously skewed.
  • Chaslot said none of his proposed fixes were taken up by his managers. “There are many ways YouTube can change its algorithms to suppress fake news and improve the quality and diversity of videos people see,” he says. “I tried to change YouTube from the inside but it didn’t work.”
  • Chaslot explains that the algorithm never stays the same. It is constantly changing the weight it gives to different signals: the viewing patterns of a user, for example, or the length of time a video is watched before someone clicks away.
  • The engineers he worked with were responsible for continuously experimenting with new formulas that would increase advertising revenues by extending the amount of time people watched videos. “Watch time was the priority,” he recalls. “Everything else was considered a distraction.”
  • Chaslot was fired by Google in 2013, ostensibly over performance issues. He insists he was let go after agitating for change within the company, using his personal time to team up with like-minded engineers to propose changes that could diversify the content people see.
  • He was especially worried about the distortions that might result from a simplistic focus on showing people videos they found irresistible, creating filter bubbles, for example, that only show people content that reinforces their existing view of the world.
  • “YouTube is something that looks like reality, but it is distorted to make you spend more time online,” he tells me when we meet in Berkeley, California. “The recommendation algorithm is not optimising for what is truthful, or balanced, or healthy for democracy.”
  • YouTube told me that its recommendation system had evolved since Chaslot worked at the company and now “goes beyond optimising for watchtime”.
  • It did not say why Google, which acquired YouTube in 2006, waited over a decade to make those changes
  • Chaslot believes such changes are mostly cosmetic, and have failed to fundamentally alter some disturbing biases that have evolved in the algorithm
  • Chaslot’s program finds videos through a word search, selecting a “seed” video to begin with, and recording several layers of videos that YouTube recommends in the “up next” column. It does so with no viewing history, ensuring the videos being detected are YouTube’s generic recommendations, rather than videos personalised to a user. And it repeats the process thousands of times, accumulating layers of data about YouTube recommendations to build up a picture of the algorithm’s preferences. (A minimal sketch of this crawl appears after this list.)
  • Each study finds something different, but the research suggests YouTube systematically amplifies videos that are divisive, sensational and conspiratorial.
  • When his program found a seed video by searching the query “who is Michelle Obama?” and then followed the chain of “up next” suggestions, for example, most of the recommended videos said she “is a man”.
  • He believes one of the most shocking examples was detected by his program in the run-up to the 2016 presidential election. As he observed in a short, largely unnoticed blogpost published after Donald Trump was elected, the impact of YouTube’s recommendation algorithm was not neutral during the presidential race: it was pushing videos that were, in the main, helpful to Trump and damaging to Hillary Clinton.
  • “It was strange,” he explains to me. “Wherever you started, whether it was from a Trump search or a Clinton search, the recommendation algorithm was much more likely to push you in a pro-Trump direction.”
  • Trump won the electoral college as a result of 80,000 votes spread across three swing states. There were more than 150 million YouTube users in the US. The videos contained in Chaslot’s database of YouTube-recommended election videos were watched, in total, more than 3bn times before the vote in November 2016.
  • “Algorithms that shape the content we see can have a lot of impact, particularly on people who have not made up their mind … Gentle, implicit, quiet nudging can over time edge us toward choices we might not have otherwise made.”
  • But what was most compelling was how often Chaslot’s software detected anti-Clinton conspiracy videos appearing “up next” beside other videos.
  • I spent weeks watching, sorting and categorising the trove of videos with Erin McCormick, an investigative reporter and expert in database analysis. From the start, we were stunned by how many extreme and conspiratorial videos had been recommended, and the fact that almost all of them appeared to be directed against Clinton.
  • “This research captured the apparent direction of YouTube’s political ecosystem,” he says. “That has not been done before.”
  • There were too many videos in the database for us to watch them all, so we focused on 1,000 of the top-recommended videos. We sifted through them one by one to determine whether the content was likely to have benefited Trump or Clinton. Just over a third of the videos were either unrelated to the election or contained content that was broadly neutral or even-handed. Of the remaining 643 videos, 551 favoured Trump, while only 92 favoured the Clinton campaign.
  • The sample we had looked at suggested Chaslot’s conclusion was correct: YouTube was six times more likely (551 videos to 92) to recommend videos that aided Trump than videos that aided his adversary.
  • A YouTube spokesperson added: “Our search and recommendation systems reflect what people search for, the number of videos available, and the videos people choose to watch on YouTube. That’s not a bias towards any particular candidate; that is a reflection of viewer interest.”
  • YouTube seemed to be saying that its algorithm was a neutral mirror of the desires of the people who use it – if we don’t like what it does, we have ourselves to blame. How does YouTube interpret “viewer interest” – and aren’t “the videos people choose to watch” influenced by what the company shows them?
  • Offered the choice, we may instinctively click on a video of a dead man in a Japanese forest, or a fake news clip claiming Bill Clinton raped a 13-year-old. But are those in-the-moment impulses really a reflection of the content we want to be fed?
  • YouTube’s recommendation system has probably figured out that edgy and hateful content is engaging. “This is a bit like an autopilot cafeteria in a school that has figured out children have sweet teeth, and also like fatty and salty foods,” Tufekci says. “So you make a line offering such food, automatically loading the next plate as soon as the bag of chips or candy in front of the young person has been consumed.”
  • Once that gets normalised, however, what is fractionally more edgy or bizarre becomes, Tufekci says, novel and interesting. “So the food gets higher and higher in sugar, fat and salt – natural human cravings – while the videos recommended and auto-played by YouTube get more and more bizarre or hateful.”
  • “This is important research because it seems to be the first systematic look into how YouTube may have been manipulated,” he says, raising the possibility that the algorithm was gamed as part of the same propaganda campaigns that flourished on Twitter and Facebook.
  • Kent Walker, Google’s general counsel, said: “We believe that the activity we found was limited because of various safeguards that we had in place in advance of the 2016 election, and the fact that Google’s products didn’t lend themselves to the kind of micro-targeting or viral dissemination that these actors seemed to prefer.”
  • Senator Mark Warner, the ranking Democrat on the intelligence committee, later wrote to the company about the algorithm, which he said seemed “particularly susceptible to foreign influence”. The senator demanded to know what the company was specifically doing to prevent a “malign incursion” of YouTube’s recommendation system. Walker, in his written reply, offered few specifics.
  • Tristan Harris, a former Google insider turned tech whistleblower, likes to describe Facebook as a “living, breathing crime scene for what happened in the 2016 election” that federal investigators have no access to. The same might be said of YouTube. About half the videos Chaslot’s program detected being recommended during the election have now vanished from YouTube – many of them taken down by their creators. Chaslot has always thought this suspicious. These were videos with titles such as “Must Watch!! Hillary Clinton tried to ban this video”, watched millions of times before they disappeared. “Why would someone take down a video that has been viewed millions of times?” he asks
  • I shared the entire database of 8,000 YouTube-recommended videos with John Kelly, the chief executive of the commercial analytics firm Graphika, which has been tracking political disinformation campaigns. He ran the list against his own database of Twitter accounts active during the election, and concluded many of the videos appeared to have been pushed by networks of Twitter sock puppets and bots controlled by pro-Trump digital consultants with “a presumably unsolicited assist” from Russia.
  • “I don’t have smoking-gun proof of who logged in to control those accounts,” he says. “But judging from the history of what we’ve seen those accounts doing before, and the characteristics of how they tweet and interconnect, they are assembled and controlled by someone – someone whose job was to elect Trump.”
  • After the Senate’s correspondence with Google over possible Russian interference with YouTube’s recommendation algorithm was made public last week, YouTube sent me a new statement. It emphasised changes it made in 2017 to discourage the recommendation system from promoting some types of problematic content. “We appreciate the Guardian’s work to shine a spotlight on this challenging issue,” it added. “We know there is more to do here and we’re looking forward to making more announcements in the months ahead.”
  • In the months leading up to the election, the Next News Network, a YouTube channel run by Gary Franchi, turned into a factory of anti-Clinton news and opinion, producing dozens of videos a day and reaching an audience comparable to that of MSNBC’s YouTube channel. Chaslot’s research indicated Franchi’s success could largely be credited to YouTube’s algorithms, which consistently amplified his videos to be played “up next”. YouTube had sharply dismissed Chaslot’s research.
  • I contacted Franchi to see who was right. He sent me screen grabs of the private data given to people who upload YouTube videos, including a breakdown of how their audiences found their clips. The largest source of traffic to the Bill Clinton rape video, which was viewed 2.4m times in the month leading up to the election, was YouTube recommendations.
  • The same was true of all but one of the videos Franchi sent me data for. A typical example was a Next News Network video entitled “WHOA! HILLARY THINKS CAMERA’S OFF… SENDS SHOCK MESSAGE TO TRUMP” in which Franchi, pointing to a tiny movement of Clinton’s lips during a TV debate, claims she says “fuck you” to her presidential rival. The data Franchi shared revealed that in the month leading up to the election, 73% of the traffic to the video – amounting to 1.2m of its views – was due to YouTube recommendations. External traffic accounted for only 3% of the views.
  • Many of the other creators of anti-Clinton videos I spoke to were amateur sleuths or part-time conspiracy theorists. Typically, they might receive a few hundred views on their videos, so they were shocked when their anti-Clinton videos started to receive millions of views, as if they were being pushed by an invisible force.
  • In every case, the largest source of traffic – the invisible force – came from the clips appearing in the “up next” column. William Ramsey, an occult investigator from southern California who made “Irrefutable Proof: Hillary Clinton Has a Seizure Disorder!”, shared screen grabs that showed the recommendation algorithm pushed his video even after YouTube had emailed him to say it violated its guidelines. Ramsey’s data showed the video was watched 2.4m times by US-based users before election day. “For a nobody like me, that’s a lot,” he says. “Enough to sway the election, right?”
  • Daniel Alexander Cannon, a conspiracy theorist from South Carolina, tells me: “Every video I put out about the Clintons, YouTube would push it through the roof.” His best-performing clip was a video titled “Hillary and Bill Clinton ‘The 10 Photos You Must See’”, essentially a slideshow of appalling (and seemingly doctored) images of the Clintons with a voiceover in which Cannon speculates on their health. It has been seen 3.7m times on YouTube, and 2.9m of those views, Cannon said, came from “up next” recommendations.
  • Chaslot’s research also does something more important: revealing how thoroughly our lives are now mediated by artificial intelligence.
  • Less than a generation ago, the way voters viewed their politicians was largely shaped by tens of thousands of newspaper editors, journalists and TV executives. Today, the invisible codes behind the big technology platforms have become the new kingmakers.
  • They pluck from obscurity people like Dave Todeschini, a retired IBM engineer who “let off steam” during the election by recording himself opining on Clinton’s supposed involvement in paedophilia, child sacrifice and cannibalism. “It was crazy, it was nuts,” he said of the avalanche of traffic to his YouTube channel, which by election day had more than 2m views.
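The signal weighting Chaslot describes can be made concrete with a toy ranking function. This is a minimal sketch under stated assumptions, not YouTube's actual code: the signal names and weights are hypothetical, and the real system is proprietary and vastly more complex. The point is only to show how a dominant watch-time weight lets engagement swamp every other consideration.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_seconds: float   # a model's estimate for this user
    predicted_click_probability: float

# Hypothetical weights. If "watch time was the priority", its weight
# dwarfs the rest, so whatever keeps people watching floats to the
# top, regardless of whether it is truthful, balanced or healthy.
WEIGHTS = {"watch_time": 1.0, "click_prob": 0.1}

def score(v: Video) -> float:
    return (WEIGHTS["watch_time"] * v.predicted_watch_seconds
            + WEIGHTS["click_prob"] * v.predicted_click_probability)

def rank_up_next(candidates: list[Video]) -> list[Video]:
    # The highest-scoring candidate fills the "up next" slot first.
    return sorted(candidates, key=score, reverse=True)
```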
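Chaslot's crawl itself is simple to outline. The sketch below follows his published description (seed search, layered "up next" capture, no viewing history, thousands of repetitions); the `fetch_up_next` helper is a hypothetical stand-in, since his actual scraping code and YouTube's page structure are not reproduced here.

```python
from collections import Counter

def fetch_up_next(video_id: str, limit: int = 5) -> list[str]:
    """Hypothetical helper: return the video IDs YouTube shows in the
    'up next' column for this video, fetched with no login or watch
    history so the results are generic rather than personalised."""
    raise NotImplementedError("scraping/API details omitted")

def crawl_recommendations(seed_id: str, depth: int = 4) -> Counter:
    """Follow 'up next' chains for `depth` layers from one seed video,
    counting how often each video is recommended along the way."""
    frontier, counts = [seed_id], Counter()
    for _ in range(depth):
        next_frontier = []
        for vid in frontier:
            for rec in fetch_up_next(vid):
                counts[rec] += 1
                next_frontier.append(rec)
        frontier = next_frontier
    return counts

# Repeating this over thousands of seed queries and merging the
# counts builds the kind of aggregate picture of the algorithm's
# preferences that Chaslot's database captured.
```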
nrashkind

A.I. Versus the Coronavirus - The New York Times - 0 views

  • A new consortium of top scientists will be able to use some of the world’s most advanced supercomputers to look for solutions.
  • Advanced computers have defeated chess masters and learned how to pick through mountains of data to recognize faces and voices.
  • Now, a billionaire developer of software and artificial intelligence is teaming up with top universities and companies to see if A.I. can help curb the current and future pandemics.
  • ...10 more annotations...
  • Condoleezza Rice, a former U.S. secretary of state who serves on the C3.ai board and was recently named the next director of the Hoover Institution
  • Known as the C3.ai Digital Transformation Institute, the new research consortium includes commitments from Princeton, Carnegie Mellon, the Massachusetts Institute of Technology, the University of California, the University of Illinois and the University of Chicago, as well as C3.ai and Microsoft.
  • Thomas M. Siebel, founder and chief executive of C3.ai, an artificial intelligence company in Redwood City, Calif., said the public-private consortium would spend $367 million in its initial five years, aiming its first awards at finding ways to slow the new coronavirus that is sweeping the globe.
  • The new institute plans to award up to 26 grants annually, each featuring up to $500,000 in research funds in addition to computing resources.
  • The institute’s co-directors are S. Shankar Sastry of the University of California, Berkeley, and Rayadurgam Srikant of the University of Illinois, Urbana-Champaign.
  • Successful A.I. can be extremely hard to deliver, especially for thorny real-world problems such as self-driving cars.
  • In recent decades, many rich Americans have sought to reinvent themselves as patrons of social progress through science research.
  • Forbes puts Mr. Siebel’s current net worth at $3.6 billion. His First Virtual Group is a diversified holding company that includes philanthropic ventures.
  • The first part of the company’s name, Mr. Siebel said in an email, stands for the convergence of three digital trends: big data, cloud computing and the internet of things, with A.I. amplifying their power. Last year, he laid out his thesis in a book.
  • “In no way am I suggesting that A.I. is all sweetness and light,” Mr. Siebel said. But the new institute, he added, is “a place where it can be a force for good.”
Javier E

iHeartMedia laid off hundreds of radio DJs. Is AI to blame? - The Washington Post - 0 views

  • When iHeartMedia announced this month it would fire hundreds of workers across the country, the radio conglomerate said the restructuring was critical to take advantage of its “significant investments … in technology and artificial intelligence.” In a companywide email, chief executive Bob Pittman said the “employee dislocation” was “the unfortunate price we pay to modernize the company.”
  • But laid-off employees like D’Edwin “Big Kosh” Walton, who made $12 an hour as an on-air personality for the Columbus, Ohio, hip-hop station 106.7 the Beat, don’t buy it. Walton doesn’t blame the cuts on a computer; he blames them on the company’s top executives, whose “coldblooded, calculated move” cost people their jobs.
  • It “ripped my [expletive] heart out,” Walton said. “The people at the top don’t know who we are at the bottom. They don’t understand the relationships and the connections we had with the communities. And that’s the worst part: They don’t care.”
  • ...25 more annotations...
  • The dominant player in U.S. radio, which owns the online music service iHeartRadio and more than 850 local stations across the United States, has called AI the muscle it needs to fend off rivals, recapture listeners and emerge from bankruptcy.
  • The company, which now uses software to schedule music, analyze research and mix songs, plans to consolidate offices around what executives call “AI-enabled Centers of Excellence.”
  • The company’s shift seems in line with a corporate America that is increasingly embracing automation, using technological advances to take over tasks once done by people, boosting profits and cutting costs.
  • While the job cuts may sound “inhumane,” she added, they made sense from a Wall Street perspective, given the company’s need to trim costs and improve its profit margins.
  • “This is a typical example of a dying industry that is blaming technology for something that is just absolutely a reduction in force,” said Jerry Del Colliano, an industry analyst.
  • iHeartRadio spokeswoman Wendy Goldberg declined to make executives available for comment or provide a total layoff count, saying only that the job cuts were “relatively small” compared with the company’s overall workforce of 12,500 employees.
  • Del Colliano estimated that more than 1,000 people would lose their jobs nationwide.
  • iHeartMedia was shifting “jobs to the future from the past,” adding data scientists, podcast producers and other digital teams to help transform the radio broadcaster into a “multiplatform” creator and “America’s #1 audio company.”
  • The long-running medium remains a huge business. In November, iHeartMedia reported it took in more than $1.6 billion in broadcast-radio revenue during the first nine months of 2019, and company filings claim that a quarter of a billion listeners still tune in every month to discover new music, catch up on the news or hear from their local DJs.
  • Executives at the Texas-based company have long touted human DJs as their biggest competitive strength, saying in federal securities filings last year that the company was in the “companionship” business because listeners build a “trusted bond and strong relationship” with the on-air personalities they hear every day.
  • The system, built by a company called Super Hi-Fi, can transition in real time between songs by layering in music, sound effects, voice-over snippets and ads, delivering the style of smooth, seamless playback that has long been the human DJ’s trade.
  • Super Hi-Fi says its “computational music presentation” AI can help erase the seconds-long gaps between songs that can lead to “a loss of energy, lack of continuity and disquieting sterility.”
  • One song wove cleanly into the other through an automated mix of booming sound effects, background music, interview sound bites and station-branding shout-outs (“Super Hi-Fi: Recommended by God”). The smooth transition might have taken a DJ a few minutes to prepare; the computer completed it in a matter of seconds. [A toy crossfade illustrating the core of such transitions appears after these annotations.]
  • Much of the initial training for these delicate transitions comes from humans, who prerecord voice-overs, select songs, edit audio clips, and classify music by genre, style and mood. Zalon said the machine-learning system has been further refined by iHeartMedia’s human DJs, who have helped identify clumsy transitions and room for future improvements.
  • “To have radio DJs across the country that really care about song transitions and are listening to find everything wrong, that was awesome,” Zalon said. “It gave us hundreds of the world’s best ears. … They almost unwittingly became kind of like our QA [quality assurance] team.”
  • Zalon expects that, in a few years, computer-generated voices could automatically read off the news, tee up interviews and introduce songs, potentially supplanting humans even more. The software performed 315 million musical transitions for listeners in January alone.
  • The company’s chief product officer, Chris Williams, said last year in an interview with the industry news site RadioWorld that “virtual DJs” that could seamlessly interweave chatter, music and ads were “absolutely” coming, and “something we are always thinking about.”
  • That has allowed the company, she said, to free up programming people for more creative pursuits, “embedding our radio stations into the communities and lives of our listeners better and deeper than they have been before.”
  • In 2008, to gain control of the radio and billboard titan then known as Clear Channel, the private-equity firms Bain Capital and Thomas H. Lee Partners staged a leveraged buyout, weighing the company down with a mountain of borrowed cash they needed to seal the deal.
  • The audacious move left the radio giant saddled with more than $20 billion in debt, just as the Great Recession kicked off and radio’s strengths began to rust. The debt would kneecap the company for the next decade, forcing it to pay more toward interest payments some years than it earned in revenue.
  • In the year the company filed for bankruptcy, Pittman, the company’s chief and a former head of MTV and AOL, was paid roughly $13 million in salary and bonus pay, nearly three times what he made in 2016
  • The company’s push to shrink and subsume local stations was also made possible by deregulation. In 2017, the Federal Communications Commission ditched a rule requiring radio stations to maintain a studio near where they were broadcasting. Local DJs have since been further replaced by prerecorded substitutes, sometimes from hundreds of miles away.
  • Ashley “Z” Elzinga, a former on-air personality for 95.6 KISS FM in Cleveland, said she was upbeat about the future but frustrated that the company had said the layoffs touched only a “relatively small” slice of its workforce. “I gave my life to this,” she said. “I moved my life, moved my family.”
  • Since the layoffs, the former DJs have been inundated with messages from listeners who said they couldn’t imagine their daily lives without them. They said they don’t expect a computer-generated system will satisfy listeners or fill that void.
  • “It was something I was really looking forward to making a future out of. And in the blink of an eye, all of that stopped for me,” he said. “That’s the painful part. They just killed what I thought was the future for me.”
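At its core, the "seamless playback" described above comes down to overlapping the tail of one track with the head of the next. The snippet below is a toy linear crossfade over raw audio samples, assuming NumPy is available; Super Hi-Fi's production system additionally layers sound effects, voice-overs and ads into the transition, and its actual algorithms are proprietary.

```python
import numpy as np

def crossfade(a: np.ndarray, b: np.ndarray, overlap: int) -> np.ndarray:
    """Fade track `a` out while track `b` fades in across `overlap`
    samples - the bare skeleton of a DJ-style transition."""
    fade_out = np.linspace(1.0, 0.0, overlap)
    fade_in = 1.0 - fade_out
    mixed = a[-overlap:] * fade_out + b[:overlap] * fade_in
    return np.concatenate([a[:-overlap], mixed, b[overlap:]])

# Example: two sine-wave "songs" joined with a two-second overlap
# at a 44.1 kHz sample rate.
rate = 44100
t = np.linspace(0.0, 5.0, 5 * rate, endpoint=False)
song_a = 0.5 * np.sin(2 * np.pi * 440.0 * t)    # five seconds of A4
song_b = 0.5 * np.sin(2 * np.pi * 523.25 * t)   # five seconds of C5
out = crossfade(song_a, song_b, overlap=2 * rate)
```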
carolinehayter

Researchers Demand That Google Rehire And Promote Timnit Gebru After Firing : NPR - 0 views

  • Members of a prestigious research unit at Google have sent a letter to the company's chief executive demanding that ousted artificial intelligence researcher Timnit Gebru be reinstated.
  • Gebru, who studies the ethics of AI and was one of the only Black research scientists at Google, says she was unexpectedly fired after a dispute over an academic paper and months of speaking out about the need for more women and people of color at the tech giant.
  • “Offering Timnit her position back at a higher level would go a long way to help re-establish trust and rebuild our team environment,” the letter says.
  • ...13 more annotations...
  • "The removal of Timnit has had a demoralizing effect on the whole of our team."
  • Since Gebru's termination earlier this month, more than 2,600 Googlers have signed an open letter expressing dismay over the way Gebru exited the company and asking executives for a full explanation of what prompted her dismissal.
  • Gebru's firing happened "without warning rather than engaging in dialogue."
  • Google has maintained that Gebru resigned, though Gebru herself says she never voluntarily agreed to leave the company.
  • They say Jeff Dean, senior vice president of Google Research, and other executives involved in Gebru's firing need to be held accountable.
  • She also was the co-author of pioneering research into facial recognition technology that demonstrated how people of color and women are misidentified far more often than white faces. The study helped persuade IBM, Amazon and Microsoft to stop selling the technology to law enforcement.
  • At Google, Gebru's former team wrote in the Wednesday letter that studying ways to reduce the harm of AI on marginalized groups is key to their mission.
  • Last month, Google abruptly asked Gebru to retract a research paper focused on the potential biases baked into an AI system that attempts to mimic human speech. The technology helps power Google's search engine. Google claims that the paper did not meet its bar for publication and that Gebru did not follow the company's internal review protocol.
  • However, Gebru and her supporters counter that she was being targeted because of how outspoken she was about diversity issues, a theme that was underscored in the letter.
  • The letter says Google's top brass have committed to advancing diversity, equity and inclusion among its research units, but unless more concrete and immediate action is taken, those promises are "virtue signaling; they are damaging, evasive, defensive and demonstrate leadership's inability to understand how our organization is part of the problem," according to the letter.
  • Gebru helped establish Black in AI, a group that supports Black researchers in the field of artificial intelligence.
  • The letter says such “gaslighting” has caused harm to Gebru and the Black community at Google.
  • Google has a history of striking back against employees who agitate internally for change. Organizers of the worldwide walkouts at Google in 2018 over sexual harassment and other issues were fired by the company. And more recently, the National Labor Relation Board accused Google of illegally firing workers who were involved in union organizing.
Javier E

Elon Musk Ramps Up A.I. Efforts, Even as He Warns of Dangers - The New York Times - 0 views

  • At a 2014 aerospace event at the Massachusetts Institute of Technology, Mr. Musk indicated that he was hesitant to build A.I. himself. “I think we need to be very careful about artificial intelligence,” he said while answering audience questions. “With artificial intelligence, we are summoning the demon.”
  • That winter, the Future of Life Institute, which explores existential risks to humanity, organized a private conference in Puerto Rico focused on the future of A.I. Mr. Musk gave a speech there, arguing that A.I. could cross into dangerous territory without anyone realizing it and announced that he would help fund the institute. He gave $10 million.
  • OpenAI was set up as a nonprofit, with Mr. Musk and others pledging $1 billion in donations. The lab vowed to “open source” all its research, meaning it would share its underlying software code with the world. Mr. Musk and Mr. Altman argued that the threat of harmful A.I. would be mitigated if everyone, rather than just tech giants like Google and Facebook, had access to the technology.
  • ...4 more annotations...
  • As OpenAI began building the technology that would result in ChatGPT, many at the lab realized that openly sharing its software could be dangerous. Using A.I., individuals and organizations can potentially generate and distribute false information more quickly and efficiently than they otherwise could. Many OpenAI employees said the lab should keep some of its ideas and code from the public.
  • Mr. Musk renewed his complaints that A.I. was dangerous and accelerated his own efforts to build it. At a Tesla investor event last month, he called for regulators to protect society from A.I., even though his car company has used A.I. systems to push the boundaries of self-driving technologies that have been involved in fatal crashes.
  • During the interview last week with Mr. Carlson, Mr. Musk said OpenAI was no longer serving as a check on the power of tech giants. He wanted to build TruthGPT, he said, “a maximum-truth-seeking A.I. that tries to understand the nature of the universe.”
  • Experts who have discussed A.I. with Mr. Musk believe he is sincere in his worries about the technology’s dangers, even as he builds it himself. Others said his stance was influenced by other motivations, most notably his efforts to promote and profit from his companies.