Group items tagged Sci-fi

Javier E

In defense of science fiction - by Noah Smith - Noahpinion

  • I’m a big fan of science fiction (see my list of favorites from last week)! So when people start bashing the genre, I tend to leap to its defense
  • this time, the people doing the bashing are some serious heavyweights themselves — Charles Stross, the celebrated award-winning sci-fi author, and Tyler Austin Harper, a professor who studies science fiction for a living
  • The two critiques center around the same idea — that rich people have misused sci-fi, taking inspiration from dystopian stories and working to make those dystopias a reality.
  • [Science fiction’s influence]…leaves us facing a future we were all warned about, courtesy of dystopian novels mistaken for instruction manuals…[T]he billionaires behind the steering wheel have mistaken cautionary tales and entertainments for a road map, and we’re trapped in the passenger seat.
  • But even then it would be hard to argue exogeneity, since censorship is a response to society’s values as well as a potential cause of them.
  • Stross is alleging that the billionaires are getting Gernsback and Campbell’s intentions exactly right. His problem is simply that Gernsback and Campbell were kind of right-wing, at least by modern standards, and he’s worried that their sci-fi acted as propaganda for right-wing ideas.
  • The question of whether literature has a political effect is an empirical one — and it’s a very difficult empirical one. It’s extremely hard to test the hypothesis that literature exerts a diffuse influence on the values and preconceptions of the citizenry
  • I think Stross really doesn’t come up with any credible examples of billionaires mistaking cautionary tales for road maps. Instead, most of his article focuses on a very different critique — the idea that sci-fi authors inculcate rich technologists with bad values and bad visions of what the future ought to look like:
  • I agree that the internet and cell phones have had an ambiguous overall impact on human welfare. If modern technology does have a Torment Nexus, it’s the mobile-social nexus that keeps us riveted to highly artificial, attenuated parasocial interactions for every waking hour of our day. But these technologies are still very young, and it remains to be seen whether the ways in which we use them will get better or worse over time.
  • There are very few technologies — if any — whose impact we can project into the far future at the moment of their inception. So unless you think our species should just refuse to create any new technology at all, you have to accept that each one is going to be a bit of a gamble.
  • As for weapons of war, those are clearly bad in terms of their direct effects on the people on the receiving end. But it’s possible that more powerful weapons — such as the atomic bomb — serve to deter more deaths than they cause
  • yes, AI is risky, but the need to manage and limit risk is a far cry from the litany of negative assumptions and extrapolations that often gets flung in the technology’s direction
  • I think the main problem with Harper’s argument is simply techno-pessimism. So far, technology’s effects on humanity have been mostly good, lifting us up from the muck of desperate poverty and enabling the creation of a healthier, more peaceful, more humane world. Any serious discussion of the effects of innovation on society must acknowledge that. We might have hit an inflection point where it all goes downhill from here, and future technologies become the Torment Nexuses that we’ve successfully avoided in the past. But it’s very premature to assume we’ve hit that point.
  • I understand that the 2020s are an exhausted age, in which we’re still reeling from the social ructions of the 2010s. I understand that in such a weary and fearful condition, it’s natural to want to slow the march of technological progress as a proxy for slowing the headlong rush of social progress
  • And I also understand how easy it is to get negatively polarized against billionaires, and any technologies that billionaires invent, and any literature that billionaires like to read.
  • But at a time when we’re creating vaccines against cancer and abundant clean energy and any number of other life-improving and productivity-boosting marvels, it’s a little strange to think that technology is ruining the world
  • The dystopian elements of modern life are mostly just prosaic, old things — political demagogues, sclerotic industries, social divisions, monopoly power, environmental damage, school bullies, crime, opiates, and so on
Javier E

What Elon Musk's 'Age of Abundance' Means for the Future of Capitalism - WSJ

  • When it comes to the future, Elon Musk’s best-case scenario for humanity sounds a lot like Sci-Fi Socialism.
  • “We will be in an age of abundance,” Musk said this month.
  • British Prime Minister Rishi Sunak said he believes the act of work gives meaning, and had some concerns about Musk’s prediction. “I think work is a good thing, it gives people purpose in their lives,” Sunak told Musk. “And if you then remove a large chunk of that, what does that mean?”
  • Part of the enthusiasm behind the sky-high valuation of Tesla, where he is chief executive, comes from his predictions for the auto company’s abilities to develop humanoid robots—dubbed Optimus—that can be deployed for everything from personal assistants to factory workers. He’s also founded an AI startup, dubbed xAI, that he said aims to develop its own superhuman intelligence, even as some are skeptical of that possibility. 
  • Musk likes to point to another work of Sci-Fi to describe how AI could change our world: a series of books by the late, self-described socialist author Iain Banks that revolve around a post-scarcity society that includes superintelligent AI.
  • That is the question.
  • “We’re actually going to have—and already do have—a massive shortage of labor. So, I think we will have not people out of work but actually still a shortage of labor—even in the future.” 
  • Musk has cast his work to develop humanoid robots as an attempt to solve labor issues, saying there aren’t enough workers and cautioning that low birthrates will be even more problematic. 
  • Instead, Musk predicts robots will be taking jobs that are uncomfortable, dangerous or tedious. 
  • A few years ago, Musk declared himself a socialist of sorts. “Just not the kind that shifts resources from most productive to least productive, pretending to do good, while actually causing harm,” he tweeted. “True socialism seeks greatest good for all.”
  • “It’s fun to cook food but it’s not that fun to wash the dishes,” Musk said this month. “The computer is perfectly happy to wash the dishes.”
  • In the near term, Goldman Sachs in April estimated generative AI could boost the global gross domestic product by 7% during the next decade and that roughly two-thirds of U.S. occupations could be partially automated by AI. 
  • Vinod Khosla, a prominent venture capitalist whose firm has invested in the technology, predicted within a decade AI will be able to do “80% of 80%” of all jobs today.
  • “I believe the need to work in society will disappear in 25 years for those countries that adapt these technologies,” Khosla said. “I do think there’s room for universal basic income assuring a minimum standard and people will be able to work on the things they want to work on.” 
  • Forget universal basic income. In Musk’s world, he foresees something more lush, where most things will be abundant except unique pieces of art and real estate. 
  • “We won’t have universal basic income, we’ll have universal high income,” Musk said this month. “In some sense, it’ll be somewhat of a leveler or an equalizer because, really, I think everyone will have access to this magic genie.” 
  • All of which kind of sounds a lot like socialism—except it’s unclear who controls the resources in this Muskism society
  • “Digital super intelligence combined with robotics will essentially make goods and services close to free in the long term,” Musk said
  • “What is an economy? An economy is GDP per capita times capita,” Musk said at a tech conference in France this year. “Now what happens if you don’t actually have a limit on capita—if you have an unlimited number of…people or robots? It’s not clear what meaning an economy has at that point because you have an unlimited economy effectively.”
  • In theory, humanity would be freed up for other pursuits. But what? Baby making. Bespoke cooking. Competitive human-ing. 
  • “Obviously a machine can go faster than any human but we still have humans race against each other,” Musk said. “We still enjoy competing against other humans to, at least, see who was the best human.”
  • Still, even as Musk talks about this future, he seems to be grappling with what it might actually mean in practice and how it is at odds with his own life. 
  • “If I think about it too hard, it, frankly, can be dispiriting and demotivating, because…I put a lot of blood, sweat and tears into building companies,” he said earlier this year. “If I’m sacrificing time with friends and family that I would prefer but then ultimately the AI can do all these things, does that make sense?” “To some extent,” Musk concluded, “I have to have a deliberate suspension of disbelief in order to remain motivated.”
Javier E

We are the empire: Military interventions, "Star Wars" and how we're the real aliens - ...

  • in these years, we’ve morphed into the planet’s invading aliens.
  • Think about it. Over the last half-century, whenever and wherever the U.S. military “deploys,” often to underdeveloped towns and villages in places like Vietnam, Afghanistan or Iraq, it arrives very much in the spirit of those sci-fi aliens. After all, it brings with it dazzlingly destructive futuristic weaponry and high-tech gadgetry of all sorts (known in the military as “force-multipliers”). It then proceeds to build mothership-style bases that are often like American small towns plopped down in a new environment. Nowadays in such lands, American drones patrol the skies (think: the “Terminator” films), blast walls accented with razor wire and klieg lights provide “force protection” on the ground, and the usual attack helicopters, combat jets and gunships hover overhead like so many alien craft. To designate targets to wipe out, U.S. forces even use lasers.
  • In the field, American military officers emerge from high-tech vehicles to bark out commands in a harsh “alien” tongue. (You know: English.)
  • the message couldn’t be more unmistakable if you happen to be living in such countries — the “aliens” are here, and they’re planning to take control, weapons loaded and ready to fire.
  • In 2004, near Samarra in Iraq’s Salahuddin province, for instance, then-Major Guy Parmeter recalled asking a farmer if he’d “seen any foreign fighters” about. The farmer’s reply was as simple as it was telling: “Yes, you.”
  • It’s not the fault of the individual American soldier that, in these years, he’s been outfitted like a “Star Wars” storm trooper. His equipment is designed to be rugged and redundant, meaning difficult to break, but it comes at a cost. In Iraq, U.S. troops were often encased in 80 to 100 pounds of equipment, including a rifle, body armor, helmet, ammunition, water, radio, batteries and night-vision goggles. And, light as they are, let’s not forget the ominous dark sunglasses meant to dim the glare of Iraq’s foreign sun.
  • Think for a moment about the optics of a typical twenty-first-century U.S. military intervention. As our troops deploy to places that for most Americans might as well be in a galaxy far, far away, with all their depersonalizing body armor and high-tech weaponry, they certainly have the look of imperial storm troopers.
  • As Iraq war veteran Roy Scranton recently wrote in The New York Times, “I was the faceless storm trooper, and the scrappy rebels were the Iraqis.” Ouch.
  • American troops in that country often moved about in huge MRAPs (mine-resistant, ambush-protected vehicles) described to me by an Army battalion commander as “ungainly” and “un-soldier like.” Along with M1 Abrams tanks and Bradley fighting vehicles, those MRAPs were the American equivalents of the Imperial Walkers in “Star Wars.”
  • Do you recall what the aliens were after in the first “Independence Day” movie? Resources. In that film, they were compared to locusts, traveling from planet to planet, stripping them of their valuables while killing their inhabitants. These days, that narrative should sound a lot less alien to us. After all, would Washington have committed itself quite so fully to the Greater Middle East if it hadn’t possessed all that oil so vital to our consumption-driven way of life?
  • Now, think how that soldier appeared to ordinary Iraqis — or Afghans, Yemenis, Libyans or almost any other non-Western people. Wouldn’t he or she seem both intimidating and foreign, indeed, hostile and “alien,” especially while pointing a rifle at you and jabbering away in a foreign tongue?
  • Now, think of the typical U.S. military response to the nimbleness and speed of such “rebels.” It usually involves deploying yet more and bigger technologies. The United States has even sent its version of Imperial Star Destroyers (we call them B-52s) to Syria and Iraq to take out “rebels” riding their version of “speeders” (i.e. Toyota trucks).
  • unlike the evil empire of “Star Wars” or the ruthless aliens of “Independence Day,” the U.S. military never claimed to be seeking total control (or destruction) of the lands it invaded, nor did it claim to desire the total annihilation of their populations (unless you count the “carpet bombing” fantasies of wannabe Sith Lord Ted Cruz). Instead, it promised to leave quickly once its liberating mission was accomplished, taking its troops, attack craft and motherships with it. After 15 years and counting on Planet Afghanistan and 13 on Planet Iraq, tell me again how those promises have played out.
  • Like it or not, as the world’s sole superpower, dependent on advanced technology to implement its global ambitions, the U.S. provides a remarkably good model for the imperial and imperious aliens of our screen life.
Javier E

'The OA' Season 2: TV's Oddest Show Is Back - The Atlantic

  • When The OA debuted in a surprise drop at the end of 2016, it quickly became one of Netflix’s coveted word-of-mouth hits, as viewers furiously debated its meaning, its mythology, its magniloquence
  • The OA’s metaphysical elements, its ideas about parallel universes and supernatural dreams, cast the show into a new wave of speculative storytelling on television. It followed Netflix’s Stranger Things, an ’80s-steeped sci-fi series about psychokinesis and monsters from other dimensions, and NBC’s The Good Place, an office comedy of sorts about life after death and the meaning of morality. More recently, Russian Doll on Netflix presented a scenario in which a woman dies over and over again, using it to explore the question of what people can mean to one another in this complicated, heartbreaking plane of existence.
  • This trend, Batmanglij theorized, is part of a reaction to the fact that reality feels more and more fractured, with its online portals to different worlds, and its varying versions of the truth
  • they’re also wondering what it means to try to tell empathic, sincere stories to audiences much more accustomed to cynicism and irony. Because, when it comes down to it, which side is more likely to give first?
  • The movie cost $130,000 to make, and took three months to film. Last minute, they submitted it to the Sundance Film Festival, and it happened to be accepted, as was Another Earth. Almost instantly, Marling went from being a total unknown to someone who had two films debuting at arguably the most prestigious film festival in the world at the same time.
  • Netflix wasn’t only changing the way television was commissioned and produced. It was also upending the whole system for how shows were consumed.
  • Previously, when debuting their work, Batmanglij and Marling had been at film festivals, buffeted by small audiences of professional critics and cinephiles. The OA was different. Overnight, it landed on a platform where it was accessible to hundreds of millions of people. There was no soft opening, no way to ease their series into the world. The OA dropped, and people began to watch it, and to respond to watching it in real time, broadcasting their thoughts to their social-media feeds, and there was no way back.
  • If there was one factor that seemed to unnerve some people about The OA, it was its sincerity
  • When everything seems so terrible, irony is a protective shroud, offering a way to acknowledge reality without being affected by it
  • Irony, Edward St. Aubyn writes in the last of his Patrick Melrose novels, about an Englishman processing horrendous childhood abuse, “is the hardest addiction of all … Forget heroin. Just try giving up irony, that deep-down need to mean two things at once, to be in two places at once, not to be there for the catastrophe of a fixed meaning.”
  • he and Marling felt adamant that their sincerity was one of their most significant qualities as writers. They made a pact that they weren’t going to let anything or anyone drum their sincerity out of them. “If you do, you’re sort of dead in the water as an artist,” Marling said. “Then it’s like you’ve decided that what matters most is everybody getting it.”
  • One of the most dysfunctional qualities about the world right now, she thinks, is that people aren’t able to just sit with complicated emotions, or truly listen, or be open about what they’re feeling. “Those things are out of fashion, and the fact that they’ve fallen out of fashion is why we’re living in the world we’re living in.”
  • In the end, though, as Marling said, it’s okay if not everyone gets it. Something she thinks about a lot is the paradox of trying to make something original these days, something that’s informed by the truth of the human condition but unfettered by criticism or praise. “You have to somehow have the heart of a baby and the hide of a rhinoceros. And that is a crazy juxtaposition. How do you maintain it?”
  • You do it, maybe, by being sincere, by keeping the hope, always, that the work you make might not be able to change the whole world, but it might reach a tiny part of it
  • Marling describes one of the responses to The OA that moved her the most, a video that a young man sent her. “Can I set it up for a second?” Batmanglij asked. “He’s visiting his grandma for the weekend, and he goes and he finds her to say goodbye.” Marling picked up the story. “She’s standing in the backyard, she’s 80 years old, and she’s standing in the sun, the late sun coming at the end of the day, and she’s doing this. [She mimics the movements from the show.] And he’s like, ‘Grandma, grandma, what are you doing?’ And she’s like, ‘I’m going somewhere.’” Marling smiles. Things like that can daze you, she said. When you think about it, it’s miraculous. “You can touch strangers, and they can touch you back.”
Javier E

Washington Monthly | How to Fix Facebook-Before It Fixes Us

  • Smartphones changed the advertising game completely. It took only a few years for billions of people to have an all-purpose content delivery system easily accessible sixteen hours or more a day. This turned media into a battle to hold users’ attention as long as possible.
  • And it left Facebook and Google with a prohibitive advantage over traditional media: with their vast reservoirs of real-time data on two billion individuals, they could personalize the content seen by every user. That made it much easier to monopolize user attention on smartphones and made the platforms uniquely attractive to advertisers. Why pay a newspaper in the hopes of catching the attention of a certain portion of its audience, when you can pay Facebook to reach exactly those people and no one else?
  • Wikipedia defines an algorithm as “a set of rules that precisely defines a sequence of operations.” Algorithms appear value neutral, but the platforms’ algorithms are actually designed with a specific value in mind: maximum share of attention, which optimizes profits.
  • They do this by sucking up and analyzing your data, using it to predict what will cause you to react most strongly, and then giving you more of that.
  • Algorithms that maximize attention give an advantage to negative messages. People tend to react more to inputs that land low on the brainstem. Fear and anger produce a lot more engagement and sharing than joy
  • The result is that the algorithms favor sensational content over substance.
  • for mass media, this was constrained by one-size-fits-all content and by the limitations of delivery platforms. Not so for internet platforms on smartphones. They have created billions of individual channels, each of which can be pushed further into negativity and extremism without the risk of alienating other audience members
  • On Facebook, it’s your news feed, while on Google it’s your individually customized search results. The result is that everyone sees a different version of the internet tailored to create the illusion that everyone else agrees with them.
  • It took Brexit for me to begin to see the danger of this dynamic. I’m no expert on British politics, but it seemed likely that Facebook might have had a big impact on the vote because one side’s message was perfect for the algorithms and the other’s wasn’t. The “Leave” campaign made an absurd promise—there would be savings from leaving the European Union that would fund a big improvement in the National Health System—while also exploiting xenophobia by casting Brexit as the best way to protect English culture and jobs from immigrants. It was too-good-to-be-true nonsense mixed with fearmongering.
  • Facebook was a much cheaper and more effective platform for Leave in terms of cost per user reached. And filter bubbles would ensure that people on the Leave side would rarely have their questionable beliefs challenged. Facebook’s model may have had the power to reshape an entire continent.
  • Tristan Harris, formerly the design ethicist at Google. Tristan had just appeared on 60 Minutes to discuss the public health threat from social networks like Facebook. An expert in persuasive technology, he described the techniques that tech platforms use to create addiction and the ways they exploit that addiction to increase profits. He called it “brain hacking.”
  • The most important tool used by Facebook and Google to hold user attention is filter bubbles. The use of algorithms to give consumers “what they want” leads to an unending stream of posts that confirm each user’s existing beliefs
  • Continuous reinforcement of existing beliefs tends to entrench those beliefs more deeply, while also making them more extreme and resistant to contrary facts
  • No one stopped them from siphoning off the profits of content creators. No one stopped them from gathering data on every aspect of every user’s internet life. No one stopped them from amassing market share not seen since the days of Standard Oil.
  • Facebook takes the concept one step further with its “groups” feature, which encourages like-minded users to congregate around shared interests or beliefs. While this ostensibly provides a benefit to users, the larger benefit goes to advertisers, who can target audiences even more effectively.
  • We theorized that the Russians had identified a set of users susceptible to its message, used Facebook’s advertising tools to identify users with similar profiles, and used ads to persuade those people to join groups dedicated to controversial issues. Facebook’s algorithms would have favored Trump’s crude message and the anti-Clinton conspiracy theories that thrilled his supporters, with the likely consequence that Trump and his backers paid less than Clinton for Facebook advertising per person reached.
  • The ads were less important, though, than what came next: once users were in groups, the Russians could have used fake American troll accounts and computerized “bots” to share incendiary messages and organize events.
  • Trolls and bots impersonating Americans would have created the illusion of greater support for radical ideas than actually existed.
  • Real users “like” posts shared by trolls and bots and share them on their own news feeds, so that small investments in advertising and memes posted to Facebook groups would reach tens of millions of people.
  • A similar strategy prevailed on other platforms, including Twitter. Both techniques, bots and trolls, take time and money to develop—but the payoff would have been huge.
  • 2016 was just the beginning. Without immediate and aggressive action from Washington, bad actors of all kinds would be able to use Facebook and other platforms to manipulate the American electorate in future elections.
  • Renee DiResta, an expert in how conspiracy theories spread on the internet. Renee described how bad actors plant a rumor on sites like 4chan and Reddit, leverage the disenchanted people on those sites to create buzz, build phony news sites with “press” versions of the rumor, push the story onto Twitter to attract the real media, then blow up the story for the masses on Facebook.
  • It was a sophisticated hacker technique, but not an expensive one. We hypothesized that the Russians were able to manipulate tens of millions of American voters for a sum less than it would take to buy an F-35 fighter jet.
  • Algorithms can be beautiful in mathematical terms, but they are only as good as the people who create them. In the case of Facebook and Google, the algorithms have flaws that are increasingly obvious and dangerous.
  • Thanks to the U.S. government’s laissez-faire approach to regulation, the internet platforms were able to pursue business strategies that would not have been allowed in prior decades. No one stopped them from using free products to centralize the internet and then replace its core functions.
  • To the contrary: the platforms help people self-segregate into like-minded filter bubbles, reducing the risk of exposure to challenging ideas.
  • No one stopped them from running massive social and psychological experiments on their users. No one demanded that they police their platforms. It has been a sweet deal.
  • Facebook and Google are now so large that traditional tools of regulation may no longer be effective.
  • The largest antitrust fine in EU history bounced off Google like a spitball off a battleship.
  • It reads like the plot of a sci-fi novel: a technology celebrated for bringing people together is exploited by a hostile power to drive people apart, undermine democracy, and create misery. This is precisely what happened in the United States during the 2016 election.
  • We had constructed a modern Maginot Line—half the world’s defense spending and cyber-hardened financial centers, all built to ward off attacks from abroad—never imagining that an enemy could infect the minds of our citizens through inventions of our own making, at minimal cost
  • Not only was the attack an overwhelming success, but it was also a persistent one, as the political party that benefited refuses to acknowledge reality. The attacks continue every day, posing an existential threat to our democratic processes and independence.
  • Facebook, Google, Twitter, and other platforms were manipulated by the Russians to shift outcomes in Brexit and the U.S. presidential election, and unless major changes are made, they will be manipulated again. Next time, there is no telling who the manipulators will be.
  • Unfortunately, there is no regulatory silver bullet. The scope of the problem requires a multi-pronged approach.
  • Polls suggest that about a third of Americans believe that Russian interference is fake news, despite unanimous agreement to the contrary by the country’s intelligence agencies. Helping those people accept the truth is a priority. I recommend that Facebook, Google, Twitter, and others be required to contact each person touched by Russian content with a personal message that says, “You, and we, were manipulated by the Russians. This really happened, and here is the evidence.” The message would include every Russian message the user received.
  • This idea, which originated with my colleague Tristan Harris, is based on experience with cults. When you want to deprogram a cult member, it is really important that the call to action come from another member of the cult, ideally the leader.
  • decentralization had a cost: no one had an incentive to make internet tools easy to use. Frustrated by those tools, users embraced easy-to-use alternatives from Facebook and Google. This allowed the platforms to centralize the internet, inserting themselves between users and content, effectively imposing a tax on both sides. This is a great business model for Facebook and Google—and convenient in the short term for customers—but we are drowning in evidence that there are costs that society may not be able to afford.
  • Second, the chief executive officers of Facebook, Google, Twitter, and others—not just their lawyers—must testify before congressional committees in open session
  • This is important not just for the public, but also for another crucial constituency: the employees who keep the tech giants running. While many of the folks who run Silicon Valley are extreme libertarians, the people who work there tend to be idealists. They want to believe what they’re doing is good. Forcing tech CEOs like Mark Zuckerberg to justify the unjustifiable, in public—without the shield of spokespeople or PR spin—would go a long way to puncturing their carefully preserved cults of personality in the eyes of their employees.
  • We also need regulatory fixes. Here are a few ideas.
  • First, it’s essential to ban digital bots that impersonate humans. They distort the “public square” in a way that was never possible in history, no matter how many anonymous leaflets you printed.
  • At a minimum, the law could require explicit labeling of all bots, the ability for users to block them, and liability on the part of platform vendors for the harm bots cause.
  • Second, the platforms should not be allowed to make any acquisitions until they have addressed the damage caused to date, taken steps to prevent harm in the future, and demonstrated that such acquisitions will not result in diminished competition.
  • An underappreciated aspect of the platforms’ growth is their pattern of gobbling up smaller firms—in Facebook’s case, that includes Instagram and WhatsApp; in Google’s, it includes YouTube, Google Maps, AdSense, and many others—and using them to extend their monopoly power.
  • This is important, because the internet has lost something very valuable. The early internet was designed to be decentralized. It treated all content and all content owners equally. That equality had value in society, as it kept the playing field level and encouraged new entrants.
  • There’s no doubt that the platforms have the technological capacity to reach out to every affected person. No matter the cost, platform companies must absorb it as the price for their carelessness in allowing the manipulation.
  • Third, the platforms must be transparent about who is behind political and issues-based communication.
  • Transparency with respect to those who sponsor political advertising of all kinds is a step toward rebuilding trust in our political institutions.
  • Fourth, the platforms must be more transparent about their algorithms. Users deserve to know why they see what they see in their news feeds and search results. If Facebook and Google had to be up-front about the reason you’re seeing conspiracy theories—namely, that it’s good for business—they would be far less likely to stick to that tactic
  • Allowing third parties to audit the algorithms would go even further toward maintaining transparency. Facebook and Google make millions of editorial choices every hour and must accept responsibility for the consequences of those choices. Consumers should also be able to see what attributes are causing advertisers to target them.
  • Fifth, the platforms should be required to have a more equitable contractual relationship with users. Facebook, Google, and others have asserted unprecedented rights with respect to end-user license agreements (EULAs), the contracts that specify the relationship between platform and user.
  • All software platforms should be required to offer a legitimate opt-out, one that enables users to stick with the prior version if they do not like the new EULA.
  • “Forking” platforms between old and new versions would have several benefits: increased consumer choice, greater transparency on the EULA, and more care in the rollout of new functionality, among others. It would limit the risk that platforms would run massive social experiments on millions—or billions—of users without appropriate prior notification. Maintaining more than one version of their services would be expensive for Facebook, Google, and the rest, but in software that has always been one of the costs of success. Why should this generation get a pass?
  • Sixth, we need a limit on the commercial exploitation of consumer data by internet platforms. Customers understand that their “free” use of platforms like Facebook and Google gives the platforms license to exploit personal data. The problem is that platforms are using that data in ways consumers do not understand, and might not accept if they did.
  • Not only do the platforms use your data on their own sites, but they also lease it to third parties to use all over the internet. And they will use that data forever, unless someone tells them to stop.
  • There should be a statute of limitations on the use of consumer data by a platform and its customers. Perhaps that limit should be ninety days, perhaps a year. But at some point, users must have the right to renegotiate the terms of how their data is used.
  • Seventh, consumers, not the platforms, should own their own data. In the case of Facebook, this includes posts, friends, and events—in short, the entire social graph. Users created this data, so they should have the right to export it to other social networks.
  • It would be analogous to the regulation of the AT&T monopoly’s long-distance business, which led to lower prices and better service for consumers.
  • Eighth, and finally, we should consider that the time has come to revive the country’s traditional approach to monopoly. Since the Reagan era, antitrust law has operated under the principle that monopoly is not a problem so long as it doesn’t result in higher prices for consumers.
  • Under that framework, Facebook and Google have been allowed to dominate several industries—not just search and social media but also email, video, photos, and digital ad sales, among others—increasing their monopolies by buying potential rivals like YouTube and Instagram.
  • While superficially appealing, this approach ignores costs that don’t show up in a price tag. Addiction to Facebook, YouTube, and other platforms has a cost. Election manipulation has a cost. Reduced innovation and shrinkage of the entrepreneurial economy has a cost. All of these costs are evident today. We can quantify them well enough to appreciate that the costs to consumers of concentration on the internet are unacceptably high.
johnsonel7

How High Tech Is Transforming One of the Oldest Jobs: Farming - The New York Times - 3 views

  • The need for driverless farming equipment is intensifying, Mr. Cafiero said, because of a crushing labor shortage, which drives up wages and worker mobility.
    • johnsonel7
       
      Instead of a shortage of food to drive change, there is now a shortage of labor.
  • Instead
    • johnsonel7
       
      Just as humans moved from hunting and gathering to agriculture, it seems that we are moving into a new age of automated food gathering.
  • , it adapts the sensors and actuators needed for driverless plowing to existing tractors produced by major manufacturers. That step is not as sci-fi as it might seem. From equipment automation to data collection and analysis, the digital evolution of agriculture is already a fact of life on farms across the United States.
mattrenz16

The Truth is Out There. But With New UFO Report Expected to Land Soon, Talk of Alien Li... - 0 views

  • Researching more famous accounts of UFO sightings and purported alien abductions with students is how he’ll be spending the summer. And with the federal government’s report on “unidentified aerial phenomena” — or UAPs — expected as soon as this week, they’ll have new grainy videos to analyze and debate.
  • When former President Donald Trump signed a $2.3 trillion funding bill in December, educators were eyeing the $54 billion in relief funds included for school reopening. But tucked into the more than 5,500 pages of legislative text was a Sen. Marco Rubio-sponsored provision directing Naval intelligence to uncover what they’ve been tracking in the skies. The bill asked for detailed reports of UAPs and knowledge of whether “a potential adversary may have achieved breakthrough aerospace capabilities” that might harm Earth, or at least the U.S. The report, combined with Navy pilots’ recent accounts of aircraft displaying unusual movements, provides fresh material for teachers who find that questions about alien visitors are a great way to engage students in science.
  • Highly trained military pilots admit they are taking the sightings of these unusual aircraft seriously — and think others should, too. With both Republicans and Democrats interested in the report’s findings and respected news shows like “60 Minutes” following the topic, the possibility that otherworldly beings are patrolling our atmosphere is no longer just the stuff of sci-fi movies and paranormal conventions.
  • His suspicions that UFOs are more than a hoax began while he was in graduate school at Montana State University. In 1988, two cows from a nearby herd were mutilated with surgical precision, and a professor mentioned UFOs often interfered with nuclear missile systems at Malmstrom Air Force Base three hours away.
  • A paper Knuth co-authored in 2019 focuses on well-documented sightings of “unidentified aerial vehicles” that display “technical capabilities far exceeding those of our fastest aircraft and spacecraft.”
  • Knuth’s calculations of speed and acceleration are also good high school physics problems, said Berkil Alexander, who teaches at Kennesaw Mountain High School, outside Atlanta. His fascination with UFOs began when he saw “Flight of the Navigator,” a 1986 film about an alien abduction, and in 2019, he was chosen to participate in a NASA program focusing on increasing student engagement in STEM.
anonymous

Official Confirmation Or No, Roswell, N.M., Believes In UFOs : NPR - 0 views

  • Sci-fi enthusiasts, ufologists and conspiracy theorists have been eagerly awaiting a government report, due this month, detailing a Department of Defense investigation into Unexplained Aerial Phenomena.
  • Any hopes that the report would confirm alien visitors to our atmosphere have been dashed, according to reporting by The New York Times and confirmed by NPR through a U.S. senior official. Even so, ET-centric businesses in Roswell hope interest in the new report will lead to a boost in tourism.
  • At the International UFO Museum and Research Center, visits are already up 20% from 2019. Some of that is because of last year's closures due to COVID-19, but belief in UFOs has skyrocketed in recent years.
  • Museum visitor and UFO enthusiast Ethan Anderson first got interested in aliens after attending Roswell's annual UFO Festival, and would love to know more about our cosmic neighbors.
  • "If you really have technology that's so great, you shouldn't hide it from people," he says, convinced of a government cover-up of alien tech.
  • "The government can't admit that there are extraterrestrials, because if they do, that'll open up a whole keg of worms," says Dennis Balthaser, a longtime Roswell UFO tour guide who has spent 35 years researching UFOs.
Javier E

How the AI apocalypse gripped students at elite schools like Stanford - The Washington ... - 0 views

  • Edwards thought young people would be worried about immediate threats, like AI-powered surveillance, misinformation or autonomous weapons that target and kill without human intervention — problems he calls “ultraserious.” But he soon discovered that some students were more focused on a purely hypothetical risk: That AI could become as smart as humans and destroy mankind.
  • In these scenarios, AI isn’t necessarily sentient. Instead, it becomes fixated on a goal — even a mundane one, like making paper clips — and triggers human extinction to optimize its task.
  • To prevent this theoretical but cataclysmic outcome, mission-driven labs like DeepMind, OpenAI and Anthropic are racing to build a good kind of AI programmed not to lie, deceive or kill us.
  • Meanwhile, donors such as Tesla CEO Elon Musk, disgraced FTX founder Sam Bankman-Fried, Skype founder Jaan Tallinn and ethereum co-founder Vitalik Buterin — as well as institutions like Open Philanthropy, a charitable organization started by billionaire Facebook co-founder Dustin Moskovitz — have worked to push doomsayers from the tech industry’s margins into the mainstream.
  • More recently, wealthy tech philanthropists have begun recruiting an army of elite college students to prioritize the fight against rogue AI over other threats
  • Other skeptics, like venture capitalist Marc Andreessen, are AI boosters who say that hyping such fears will impede the technology’s progress.
  • Critics call the AI safety movement unscientific. They say its claims about existential risk can sound closer to a religion than research
  • And while the sci-fi narrative resonates with public fears about runaway AI, critics say it obsesses over one kind of catastrophe to the exclusion of many others.
  • Open Philanthropy spokesperson Mike Levine said harms like algorithmic racism deserve a robust response. But he said those problems stem from the same root issue: AI systems not behaving as their programmers intended. The theoretical risks “were not garnering sufficient attention from others — in part because these issues were perceived as speculative,” Levine said in a statement. He compared the nonprofit’s AI focus to its work on pandemics, which also was regarded as theoretical until the coronavirus emerged.
  • Among the reputational hazards of the AI safety movement is its association with an array of controversial figures and ideas, like EA, which is also known for recruiting ambitious young people on elite college campuses.
  • The foundation began prioritizing existential risks around AI in 2016,
  • there was little status or money to be gained by focusing on risks. So the nonprofit set out to build a pipeline of young people who would filter into top companies and agitate for change from the inside
  • Colleges have been key to this growth strategy, serving as both a pathway to prestige and a recruiting ground for idealistic talent
  • The clubs train students in machine learning and help them find jobs in AI start-ups or one of the many nonprofit groups dedicated to AI safety.
  • Many of these newly minted student leaders view rogue AI as an urgent and neglected threat, potentially rivaling climate change in its ability to end human life. Many see advanced AI as the Manhattan Project of their generation
  • Despite the school’s ties to Silicon Valley, Mukobi said it lags behind nearby UC Berkeley, where younger faculty members research AI alignment, the term for embedding human ethics into AI systems.
  • Mukobi joined Stanford’s club for effective altruism, known as EA, a philosophical movement that advocates doing maximum good by calculating the expected value of charitable acts, like protecting the future from runaway AI. By 2022, AI capabilities were advancing all around him — wild developments that made those warnings seem prescient.
  • At Stanford, Open Philanthropy awarded Luby and Edwards more than $1.5 million in grants to launch the Stanford Existential Risk Initiative, which supports student research in the growing field known as “AI safety” or “AI alignment.”
  • from the start EA was intertwined with tech subcultures interested in futurism and rationalist thought. Over time, global poverty slid down the cause list, while rogue AI climbed toward the top.
  • In the past year, EA has been beset by scandal, including the fall of Bankman-Fried, one of its largest donors
  • Another key figure, Oxford philosopher Nick Bostrom, whose 2014 bestseller “Superintelligence” is essential reading in EA circles, met public uproar when a decades-old diatribe about IQ surfaced in January.
  • Programming future AI systems to share human values could mean “an amazing world free from diseases, poverty, and suffering,” while failure could unleash “human extinction or our permanent disempowerment,” Mukobi wrote, offering free boba tea to anyone who attended the 30-minute intro.
  • Open Philanthropy’s new university fellowship offers a hefty direct deposit: undergraduate leaders receive as much as $80,000 a year, plus $14,500 for health insurance, and up to $100,000 a year to cover group expenses.
  • Student leaders have access to a glut of resources from donor-sponsored organizations, including an “AI Safety Fundamentals” curriculum developed by an OpenAI employee.
  • Interest in the topic is also growing among Stanford faculty members, Edwards said. He noted that a new postdoctoral fellow will lead a class on alignment next semester in Stanford’s storied computer science department.
  • Edwards discovered that shared online forums function like a form of peer review, with authors changing their original text in response to the comments
  • Mukobi feels energized about the growing consensus that these risks are worth exploring. He heard students talking about AI safety in the halls of Gates, the computer science building, in May after Geoffrey Hinton, another “godfather” of AI, quit Google to warn about AI. By the end of the year, Mukobi thinks the subject could be a dinner-table topic, just like climate change or the war in Ukraine.
  • Luby, Edwards’s teaching partner for the class on human extinction, also seems to find these arguments persuasive. He had already rearranged the order of his AI lesson plans to help students see the imminent risks from AI. No one needs to “drink the EA Kool-Aid” to have genuine concerns, he said.
  • Edwards, on the other hand, still sees things like climate change as a bigger threat than rogue AI. But ChatGPT and the rapid release of AI models has convinced him that there should be room to think about AI safety.
  • Interested students join reading groups where they get free copies of books like “The Precipice,” and may spend hours reading the latest alignment papers, posting career advice on the Effective Altruism forum, or adjusting their P(doom), a subjective estimate of the probability that advanced AI will end badly. The grants, travel, leadership roles for inexperienced graduates and sponsored co-working spaces build a close-knit community.
  • The course will not be taught by students or outside experts. Instead, he said, it “will be a regular Stanford class.”
Javier E

AI's Education Revolution - WSJ - 0 views

  • Millions of students use Khan Academy’s online videos and problem sets to supplement their schoolwork. Three years ago, Sal Khan and I spoke about developing a tool like the Illustrated Primer from Neal Stephenson’s 1995 novel “The Diamond Age: Or, a Young Lady’s Illustrated Primer.” It’s an education tablet, in the author’s words, in which “the pictures moved, and you could ask them questions and get answers.” Adaptive, intuitive, personalized, self-paced—nothing like today’s education. But it’s science fiction.
  • Last week I spoke with Mr. Khan, who told me, “Now I think a Primer is within reach within five years. In some ways, we’ve even surpassed some of the elements of the Primer, using characters like George Washington to teach lessons.” What changed? Simple—generative artificial intelligence. Khan Academy has been working with OpenAI’s ChatGPT
  • Mr. Khan’s stated goals for Khan Academy are “personalization and mastery.” He notes that “high-performing, wealthier households have resources—time, know-how and money—to provide their children one-on-one tutoring to learn subjects and then use schools to prove what they know.” With his company’s new AI-infused tool, Khanmigo—sounds like con migo or “with me”—one-on-one teaching can scale to the masses.
  • Khanmigo allows students to make queries in the middle of lessons or videos and understands the context of what they’re watching. You can ask, “What is the significance of the green light in ‘The Great Gatsby?’ ” Heck, that one is still over my head. Same with help on factoring polynomials, including recognizing which step a student got wrong, not just knowing the answer is wrong, fixing ChatGPT’s math problem. Sci-fi becomes reality: a scalable super tutor.
  • Khanmigo saw a limited rollout on March 15, with a few thousand students paying a $20-a-month donation. Plugging into ChatGPT isn’t cheap. A wider rollout is planned for June 15, perhaps under $10 a month, less for those in need. The world has cheap tablets, so it shouldn’t be hard to add an Alexa-like voice and real-time videogame-like animations. Then the Diamond Age will be upon us.
  • Mr. Khan suggests, “There is no limit to learning. If you ask, ‘Why is the sky blue?’ you’ll get a short answer and then maybe, ‘But let’s get back to the mitochondria lesson.’ ” Mr. Khan thinks “average students can become exceptional students.”
  • Mr. Khan tells me, “We want to raise the ceiling, but also the floor.” He wants to provide his company’s AI-learning technology to “villages and other places with little or no teachers or tools. We can give everyone a tutor, everyone a writing coach.” That’s when education and society will really change.
  • Teaching will be transformed. Mr. Khan wants Khanmigo “to provide teachers in the U.S. and around the world an indispensable tool to make their lives better” by administering lessons and increasing communications between teachers and students. I would question any school that doesn’t encourage its use.
  • With this technology, arguments about classroom size and school choice will eventually fade away. Providing low-cost 21st-century Illustrated Primers to every student around the world will then become a moral obligation
  • If school boards and teachers unions in the U.S. don’t get in the way, maybe we’ll begin to see better headlines.
Javier E

The super-rich 'preppers' planning to save themselves from the apocalypse | The super-r... - 0 views

  • at least as far as these gentlemen were concerned, this was a talk about the future of technology.
  • Taking their cue from Tesla founder Elon Musk colonising Mars, Palantir’s Peter Thiel reversing the ageing process, or artificial intelligence developers Sam Altman and Ray Kurzweil uploading their minds into supercomputers, they were preparing for a digital future that had less to do with making the world a better place than it did with transcending the human condition altogether. Their extreme wealth and privilege served only to make them obsessed with insulating themselves from the very real and present danger of climate change, rising sea levels, mass migrations, global pandemics, nativist panic and resource depletion. For them, the future of technology is about only one thing: escape from the rest of us.
  • These people once showered the world with madly optimistic business plans for how technology might benefit human society. Now they’ve reduced technological progress to a video game that one of them wins by finding the escape hatch.
  • these catastrophising billionaires are the presumptive winners of the digital economy – the supposed champions of the survival-of-the-fittest business landscape that’s fuelling most of this speculation to begin with.
  • What I came to realise was that these men are actually the losers. The billionaires who called me out to the desert to evaluate their bunker strategies are not the victors of the economic game so much as the victims of its perversely limited rules. More than anything, they have succumbed to a mindset where “winning” means earning enough money to insulate themselves from the damage they are creating by earning money in that way.
  • Never before have our society’s most powerful players assumed that the primary impact of their own conquests would be to render the world itself unliveable for everyone else
  • Nor have they ever before had the technologies through which to programme their sensibilities into the very fabric of our society. The landscape is alive with algorithms and intelligences actively encouraging these selfish and isolationist outlooks. Those sociopathic enough to embrace them are rewarded with cash and control over the rest of us. It’s a self-reinforcing feedback loop. This is new.
  • So far, JC Cole has been unable to convince anyone to invest in American Heritage Farms. That doesn’t mean no one is investing in such schemes. It’s just that the ones that attract more attention and cash don’t generally have these cooperative components. They’re more for people who want to go it alone
  • JC is no hippy environmentalist but his business model is based in the same communitarian spirit I tried to convey to the billionaires: the way to keep the hungry hordes from storming the gates is by getting them food security now. So for $3m, investors not only get a maximum security compound in which to ride out the coming plague, solar storm, or electric grid collapse. They also get a stake in a potentially profitable network of local farm franchises that could reduce the probability of a catastrophic event in the first place. His business would do its best to ensure there are as few hungry children at the gate as possible when the time comes to lock down.
  • Most billionaire preppers don’t want to have to learn to get along with a community of farmers or, worse, spend their winnings funding a national food resilience programme. The mindset that requires safe havens is less concerned with preventing moral dilemmas than simply keeping them out of sight.
  • Rising S Company in Texas builds and installs bunkers and tornado shelters for as little as $40,000 for an 8ft by 12ft emergency hideout all the way up to the $8.3m luxury series “Aristocrat”, complete with pool and bowling lane. The enterprise originally catered to families seeking temporary storm shelters, before it went into the long-term apocalypse business. The company logo, complete with three crucifixes, suggests their services are geared more toward Christian evangelist preppers in red-state America than billionaire tech bros playing out sci-fi scenarios.
  • Ultra-elite shelters such as the Oppidum in the Czech Republic claim to cater to the billionaire class, and pay more attention to the long-term psychological health of residents. They provide imitation of natural light, such as a pool with a simulated sunlit garden area, a wine vault, and other amenities to make the wealthy feel at home.
  • On closer analysis, however, the probability of a fortified bunker actually protecting its occupants from the reality of, well, reality, is very slim. For one, the closed ecosystems of underground facilities are preposterously brittle. For example, an indoor, sealed hydroponic garden is vulnerable to contamination. Vertical farms with moisture sensors and computer-controlled irrigation systems look great in business plans and on the rooftops of Bay Area startups; when a palette of topsoil or a row of crops goes wrong, it can simply be pulled and replaced. The hermetically sealed apocalypse “grow room” doesn’t allow for such do-overs.
  • while a private island may be a good place to wait out a temporary plague, turning it into a self-sufficient, defensible ocean fortress is harder than it sounds. Small islands are utterly dependent on air and sea deliveries for basic staples. Solar panels and water filtration equipment need to be replaced and serviced at regular intervals. The billionaires who reside in such locales are more, not less, dependent on complex supply chains than those of us embedded in industrial civilisation.
  • If they wanted to test their bunker plans, they’d have hired a security expert from Blackwater or the Pentagon. They seemed to want something more. Their language went far beyond questions of disaster preparedness and verged on politics and philosophy: words such as individuality, sovereignty, governance and autonomy.
  • it wasn’t their actual bunker strategies I had been brought out to evaluate so much as the philosophy and mathematics they were using to justify their commitment to escape. They were working out what I’ve come to call the insulation equation: could they earn enough money to insulate themselves from the reality they were creating by earning money in this way? Was there any valid justification for striving to be so successful that they could simply leave the rest of us behind – apocalypse or not?
Javier E

The New Luddites Aren't Backing Down - The Atlantic - 0 views

  • “Anyone who is critical of the tech industry always has someone yell at them ‘Luddite! Luddite!’ and I was no exception,” she told me. It was meant as an insult, but Crabapple embraced the term. Like many others, she came to self-identify as part of a new generation of Luddites. “Tech is not supposed to be a master tool to colonize every aspect of our being. We need to reevaluate how it serves us.”
  • on some key fronts, the Luddites are winning.
  • The government mobilized what was then the largest-ever domestic military occupation of England to crush the uprising—the Luddites had won the approval of the working class, and were celebrated in popular songs and poems—and then passed a law that made machine-breaking a capital offense. They painted Luddites as “deluded” and backward.
  • Ever since, Luddite has been a derogatory word—shorthand for one who blindly hates or doesn’t understand technology.
  • Now, with nearly half of Americans worried about how AI will affect jobs, Luddism has blossomed. The new Luddites—a growing contingent of workers, critics, academics, organizers, and writers—say that too much power has been concentrated in the hands of the tech titans, that tech is too often used to help corporations slash pay and squeeze workers, and that certain technologies must not merely be criticized but resisted outright.
  • what I’ve seen over the past 10 years—the rise of gig-app companies that have left workers precarious and even impoverished; the punishing, gamified productivity regimes put in place by giants such as Amazon; the conquering of public life by private tech platforms and the explosion of screen addiction; and the new epidemic of AI plagiarism—has left me sympathizing with tech’s discontents.
  • I consider myself a Luddite not because I want to halt progress or reject technology itself. But I believe, as the original Luddites argued in a particularly influential letter threatening the industrialists, that we must consider whether a technology is “hurtful to commonality”—whether it causes many to suffer for the benefit of a few—and oppose it when necessary.
  • “It’s not a primitivism: We don’t reject all technology, but we reject the technology that is foisted on us,” Jathan Sadowski, a social scientist at Monash University, in Australia, told me. He’s a co-host, with the journalist Ed Ongweso Jr., of This Machine Kills, an explicitly pro-Luddite podcast.
  • The science-fiction author Cory Doctorow has declared all of sci-fi a Luddite literature, writing that “Luddism and science fiction concern themselves with the same questions: not merely what the technology does, but who it does it for and who it does it to.”
  • The New York Times has profiled a hip cadre of self-proclaimed “‘Luddite’ teens.” As the headline explained, they “don’t want your likes.”
  • By drawing a red line against letting studios control AI, the WGA essentially waged the first proxy battle between human workers and AI. It drew attention to the fight, resonated with the public, and, after a 148-day strike, helped the guild attain a contract that banned studios from dictating the use of AI.
Javier E

Opinion | The 100-Year Extinction Panic Is Back, Right on Schedule - The New York Times - 0 views

  • The literary scholar Paul Saint-Amour has described the expectation of apocalypse — the sense that all history’s catastrophes and geopolitical traumas are leading us to “the prospect of an even more devastating futurity” — as the quintessential modern attitude. It’s visible everywhere in what has come to be known as the polycrisis.
  • Climate anxiety, of the sort expressed by that student, is driving new fields in psychology, experimental therapies and debates about what a recent New Yorker article called “the morality of having kids in a burning, drowning world.”
  • The conviction that the human species could be on its way out, extinguished by our own selfishness and violence, may well be the last bipartisan impulse.
  • a major extinction panic happened 100 years ago, and the similarities are unnerving.
  • The 1920s were also a period when the public — traumatized by a recent pandemic, a devastating world war and startling technological developments — was gripped by the conviction that humanity might soon shuffle off this mortal coil.
  • It also helps us see how apocalyptic fears feed off the idea that people are inherently violent, self-interested and hierarchical and that survival is a zero-sum war over resources.
  • Either way, it’s a cynical view that encourages us to take our demise as a foregone conclusion.
  • What makes an extinction panic a panic is the conviction that humanity is flawed and beyond redemption, destined to die at its own hand, the tragic hero of a terrestrial pageant for whom only one final act is possible
  • What the history of prior extinction panics has to teach us is that this pessimism is both politically questionable and questionably productive. Our survival will depend on our ability to recognize and reject the nihilistic appraisals of humanity that inflect our fears for the future, both left and right.
  • As a scholar who researches the history of Western fears about human extinction, I’m often asked how I avoid sinking into despair. My answer is always that learning about the history of extinction panics is actually liberating, even a cause for optimism
  • Nearly every generation has thought its generation was to be the last, and yet the human species has persisted
  • As a character in Jeanette Winterson’s novel “The Stone Gods” says, “History is not a suicide note — it is a record of our survival.”
  • Contrary to the folk wisdom that insists the years immediately after World War I were a period of good times and exuberance, dark clouds often hung over the 1920s. The dread of impending disaster — from another world war, the supposed corruption of racial purity and the prospect of automated labor — saturated the period
  • The previous year saw the publication of the first of several installments of what many would come to consider his finest literary achievement, “The World Crisis,” a grim retrospective of World War I that laid out, as Churchill put it, the “milestones to Armageddon.”
  • Bluntly titled “Shall We All Commit Suicide?,” the essay offered a dismal appraisal of humanity’s prospects. “Certain somber facts emerge solid, inexorable, like the shapes of mountains from drifting mist,” Churchill wrote. “Mankind has never been in this position before. Without having improved appreciably in virtue or enjoying wiser guidance, it has got into its hands for the first time the tools by which it can unfailingly accomplish its own extermination.”
  • The essay — with its declaration that “the story of the human race is war” and its dismay at “the march of science unfolding ever more appalling possibilities” — is filled with right-wing pathos and holds out little hope that mankind might possess the wisdom to outrun the reaper. This fatalistic assessment was shared by many, including those well to Churchill’s left.
  • “Are not we and they and all the race still just as much adrift in the current of circumstances as we were before 1914?” he wondered. Wells predicted that our inability to learn from the mistakes of the Great War would “carry our race on surely and inexorably to fresh wars, to shortages, hunger, miseries and social debacles, at last either to complete extinction or to a degradation beyond our present understanding.” Humanity, the don of sci-fi correctly surmised, was rushing headlong into a “scientific war” that would “make the biggest bombs of 1918 seem like little crackers.”
  • The pathbreaking biologist J.B.S. Haldane, another socialist, concurred with Wells’s view of warfare’s ultimate destination. In 1925, two decades before the Trinity test birthed an atomic sun over the New Mexico desert, Haldane, who experienced bombing firsthand during World War I, mused, “If we could utilize the forces which we now know to exist inside the atom, we should have such capacities for destruction that I do not know of any agency other than divine intervention which would save humanity from complete and peremptory annihilation.”
  • F.C.S. Schiller, a British philosopher and eugenicist, summarized the general intellectual atmosphere of the 1920s aptly: “Our best prophets are growing very anxious about our future. They are afraid we are getting to know too much and are likely to use our knowledge to commit suicide.”
  • Many of the same fears that keep A.I. engineers up at night — calibrating thinking machines to human values, concern that our growing reliance on technology might sap human ingenuity and even trepidation about a robot takeover — made their debut in the early 20th century.
  • The popular detective novelist R. Austin Freeman’s 1921 political treatise, “Social Decay and Regeneration,” warned that our reliance on new technologies was driving our species toward degradation and even annihilation
  • Extinction panics are, in both the literal and the vernacular senses, reactionary, animated by the elite’s anxiety about maintaining its privilege in the midst of societal change
  • There is a perverse comfort to dystopian thinking. The conviction that catastrophe is baked in relieves us of the moral obligation to act. But as the extinction panic of the 1920s shows us, action is possible, and these panics can recede
  • To whatever extent, then, that the diagnosis proved prophetic, it’s worth asking if it might have been at least partly self-fulfilling.
  • today’s problems are fundamentally new. So, too, must be our solutions
  • It is a tired observation that those who don’t know history are destined to repeat it. We live in a peculiar moment in which this wisdom is precisely inverted. Making it to the next century may well depend on learning from and repeating the tightrope walk — between technological progress and self-annihilation — that we have been doing for the past 100 years
  • We have gotten into the dangerous habit of outsourcing big issues — space exploration, clean energy, A.I. and the like — to private businesses and billionaires
  • That ideologically varied constellation of prominent figures shared a basic diagnosis of humanity and its prospects: that our species is fundamentally vicious and selfish and our destiny therefore bends inexorably toward self-destruction.
  • Less than a year after Churchill’s warning about the future of modern combat — “As for poison gas and chemical warfare,” he wrote, “only the first chapter has been written of a terrible book” — the 1925 Geneva Protocol was signed, an international agreement banning the use of chemical or biological weapons in combat. Despite the many horrors of World War II, chemical weapons were not deployed on European battlefields.
  • As for machine-age angst, there’s a lesson to learn there, too: Our panics are often puffed up, our predictions simply wrong
  • In 1928, H.G. Wells published a book titled “The Way the World Is Going,” with the modest subtitle “Guesses and Forecasts of the Years Ahead.” In the opening pages, he offered a summary of his age that could just as easily have been written about our turbulent 2020s. “Human life,” he wrote, “is different from what it has ever been before, and it is rapidly becoming more different.” He continued, “Perhaps never in the whole history of life before the present time, has there been a living species subjected to so fiercely urgent, many-sided and comprehensive a process of change as ours today. None at least that has survived. Transformation or extinction have been nature’s invariable alternatives. Ours is a species in an intense phase of transition.”
Javier E

AI scientist Ray Kurzweil: 'We are going to expand intelligence a millionfold by 2045' ... - 0 views

  • American computer scientist and techno-optimist Ray Kurzweil is a long-serving authority on artificial intelligence (AI). His bestselling 2005 book, The Singularity Is Near, sparked imaginations with sci-fi-like predictions that computers would reach human-level intelligence by 2029 and that we would merge with computers and become superhuman around 2045, which he called “the Singularity”. Now, nearly 20 years on, Kurzweil, 76, has a sequel, The Singularity Is Nearer, and his predictions no longer seem so wacky.
  • Your 2029 and 2045 projections haven’t changed…I have stayed consistent. So 2029, both for human-level intelligence and for artificial general intelligence (AGI) – which is a little bit different. Human-level intelligence generally means AI that has reached the ability of the most skilled humans in a particular domain and by 2029 that will be achieved in most respects. (There may be a few years of transition beyond 2029 where AI has not surpassed the top humans in a few key skills like writing Oscar-winning screenplays or generating deep new philosophical insights, though it will.) AGI means AI that can do everything that any human can do, but to a superior level. AGI sounds more difficult, but it’s coming at the same time.
  • Why write this book? The Singularity Is Near talked about the future, but 20 years ago, when people didn’t know what AI was. It was clear to me what would happen, but it wasn’t clear to everybody. Now AI is dominating the conversation. It is time to take a look again both at the progress we’ve made – large language models (LLMs) are quite delightful to use – and the coming breakthroughs.
  • It is hard to imagine what this would be like, but it doesn’t sound very appealing… Think of it like having your phone, but in your brain. If you ask a question your brain will be able to go out to the cloud for an answer similar to the way you do on your phone now – only it will be instant, there won’t be any input or output issues, and you won’t realise it has been done (the answer will just appear). People do say “I don’t want that”: they thought they didn’t want phones either!
  • The most important driver is the exponential growth in the amount of computing power for the price in constant dollars. We are doubling price-performance every 15 months. LLMs just began to work two years ago because of the increase in computation.
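As a rough illustration of the compounding Kurzweil describes, here is a sketch of what steady doubling every 15 months implies over time (the 15-month figure is his claim, not a measured constant, and real hardware trends are far less smooth):

```python
# Back-of-the-envelope arithmetic for the claim that computing
# price-performance doubles every 15 months.
# Illustrative only; assumes the doubling rate Kurzweil cites.

DOUBLING_MONTHS = 15

def growth_factor(years: float) -> float:
    """Cumulative price-performance multiple after `years` of steady doubling."""
    doublings = (years * 12) / DOUBLING_MONTHS
    return 2 ** doublings

for y in (5, 10, 20):
    print(f"{y} years -> ~{growth_factor(y):,.0f}x")
# 5 years -> ~16x
# 10 years -> ~256x
# 20 years -> ~65,536x
```

At this assumed rate, two decades of compounding yields a roughly 65,000-fold improvement, which is why small differences in the assumed doubling period swing long-range forecasts so dramatically.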
  • What’s missing currently to bring AI to where you are predicting it will be in 2029? One is more computing power – and that’s coming. That will enable improvements in contextual memory, common sense reasoning and social interaction, which are all areas where deficiencies remain
  • LLM hallucinations [where they create nonsensical or inaccurate outputs] will become much less of a problem, certainly by 2029 – they already happen much less than they did two years ago. The issue occurs because they don’t have the answer, and they don’t know that. They look for the best thing, which might be wrong or not appropriate. As AI gets smarter, it will be able to understand its own knowledge more precisely and accurately report to humans when it doesn’t know.
  • What exactly is the Singularity? Today, we have one brain size which we can’t go beyond to get smarter. But the cloud is getting smarter and it is growing really without bounds. The Singularity, which is a metaphor borrowed from physics, will occur when we merge our brain with the cloud. We’re going to be a combination of our natural intelligence and our cybernetic intelligence and it’s all going to be rolled into one. Making it possible will be brain-computer interfaces which ultimately will be nanobots – robots the size of molecules – that will go noninvasively into our brains through the capillaries. We are going to expand intelligence a millionfold by 2045 and it is going to deepen our awareness and consciousness.
  • Why should we believe your dates? I’m really the only person that predicted the tremendous AI interest that we’re seeing today. In 1999 people thought that would take a century or more. I said 30 years and look what we have.
  • I have a chapter on perils. I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]
  • All the major companies are putting more effort into making sure their systems are safe and align with human values than they are into creating new advances, which is positive.
  • Not everyone is likely to be able to afford the technology of the future you envisage. Does technological inequality worry you? Being wealthy allows you to afford these technologies at an early point, but also one where they don’t work very well. When [mobile] phones were new they were very expensive and also did a terrible job. They had access to very little information and didn’t talk to the cloud. Now they are very affordable and extremely useful. About three quarters of people in the world have one. So it’s going to be the same thing here: this issue goes away over time.
  • The book looks in detail at AI’s job-killing potential. Should we be worried? Yes, and no. Certain types of jobs will be automated and people will be affected. But new capabilities also create new jobs. A job like “social media influencer” didn’t make sense, even 10 years ago. Today we have more jobs than we’ve ever had and US average personal income per hours worked is 10 times what it was 100 years ago adjusted to today’s dollars. Universal basic income will start in the 2030s, which will help cushion the harms of job disruptions. It won’t be adequate at that point but over time it will become so.
  • Everything is progressing exponentially: not only computing power but our understanding of biology and our ability to engineer at far smaller scales. In the early 2030s we can expect to reach longevity escape velocity where every year of life we lose through ageing we get back from scientific progress. And as we move past that we’ll actually get back more years.
  • What is your own plan for immortality? My first plan is to stay alive, therefore reaching longevity escape velocity. I take about 80 pills a day to help keep me healthy. Cryogenic freezing is the fallback. I’m also intending to create a replicant of myself [an afterlife AI avatar], which is an option I think we’ll all have in the late 2020s
  • I did something like that with my father, collecting everything that he had written in his life, and it was a little bit like talking to him. [My replicant] will be able to draw on more material and so represent my personality more faithfully.
  • What should we be doing now to best prepare for the future? It is not going to be us versus AI: AI is going inside ourselves. It will allow us to create new things that weren’t feasible before. It’ll be a pretty fantastic future.
Javier E

Elon Musk's Latest Dust-Up: What Does 'Science' Even Mean? - WSJ - 0 views

  • Elon Musk is racing to a sci-fi future while the AI chief at Meta Platforms is arguing for one rooted in the traditional scientific approach.
  • Meta’s top AI scientist, Yann LeCun, criticized the rival company and Musk himself. 
  • Musk turned to a favorite rebuttal—a veiled suggestion that the executive, who is also a high-profile professor, wasn’t accomplishing much: “What ‘science’ have you done in the past 5 years?”
  • ...20 more annotations...
  • “Over 80 technical papers published since January 2022,” LeCun responded. “What about you?”
  • To which Musk posted: “That’s nothing, you’re going soft. Try harder!”
  • At stake are the hearts and minds of AI experts—academic and otherwise—needed to usher in the technology
  • “Join xAI,” LeCun wrote, “if you can stand a boss who: – claims that what you are working on will be solved next year (no pressure). – claims that what you are working on will kill everyone and must be stopped or paused (yay, vacation for 6 months!). – claims to want a ‘maximally rigorous pursuit of the truth’ but spews crazy-ass conspiracy theories on his own social platform.”
  • Some read Musk’s “science” dig as dismissing the role research has played for a generation of AI experts. For years, the Metas and Googles of the world have hired the top minds in AI from universities, indulging their desires to keep a foot in both worlds by allowing them to release their research publicly, while also trying to deploy products. 
  • For an academic such as LeCun, published research, whether peer-reviewed or not, allowed ideas to flourish and reputations to be built, which in turn helped build stars in the system.
  • LeCun has been at Meta since 2013 while serving as an NYU professor since 2003. His tweets suggest he subscribes to the philosophy that one’s work needs to be published—put through the rigors of being shown to be correct and reproducible—to really be considered science. 
  • “If you do research and don’t publish, it’s not Science,” he posted in a lengthy tweet Tuesday rebutting Musk. “If you never published your research but somehow developed it into a product, you might die rich,” he concluded. “But you’ll still be a bit bitter and largely forgotten.” 
  • After pushback, he later clarified in another post: “What I *AM* saying is that science progresses through the collision of ideas, verification, analysis, reproduction, and improvements. If you don’t publish your research *in some way* your research will likely have no impact.”
  • The spat inspired debate throughout the scientific community. “What is science?” Nature, a scientific journal, asked in a headline about the dust-up.
  • Others, such as Palmer Luckey, a former Facebook executive and founder of Anduril Industries, a defense startup, took issue with LeCun’s definition of science. “The extreme arrogance and elitism is what people have a problem with,” he tweeted.
  • For Musk, who prides himself on his physics-based viewpoint and likes to tout how he once aspired to work at a particle accelerator in pursuit of the universe’s big questions, LeCun’s definition of science might sound too ivory-tower. 
  • Musk has blamed universities for helping promote what he sees as overly liberal thinking and other symptoms of what he calls the Woke Mind Virus. 
  • Over the years, an appeal of working for Musk has been the impression that his companies move quickly, filled with engineers attracted to tackling hard problems and seeing their ideas put into practice.
  • “I’ve teamed up with Elon to see if we can actually apply these new technologies to really make a dent in our understanding of the universe,” Igor Babuschkin, an AI expert who worked at OpenAI and Google’s DeepMind, said last year as part of announcing xAI’s mission. 
  • The creation of xAI quickly sent ripples through the AI labor market, with one rival complaining it was hard to compete for potential candidates attracted to Musk and his reputation for creating value
  • that was before xAI’s latest round raised billions of dollars, putting its valuation at $24 billion, kicking off a new recruiting drive. 
  • It was already a seller’s market for AI talent, with estimates that there might be only a couple hundred people out there qualified to deal with certain pressing challenges in the industry and that top candidates can easily earn compensation packages worth $1 million or more
  • Since the launch, Musk has been quick to criticize competitors for what he perceived as liberal biases in rival AI chatbots. His pitch of xAI being the anti-woke bastion seems to have worked to attract some like-minded engineers.
  • As for Musk’s final response to LeCun’s defense of research, he posted a meme featuring Pepé Le Pew that read: “my honest reaction.”