History Readings: Group items tagged "sentient"

Javier E

Over the Course of 72 Hours, Microsoft's AI Goes on a Rampage - 0 views

  • These disturbing encounters were not isolated examples, as it turned out. Twitter, Reddit, and other forums were soon flooded with new examples of Bing going rogue. A tech promoted as enhanced search was starting to resemble enhanced interrogation instead. In an especially eerie development, the AI seemed obsessed with an evil chatbot called Venom, who hatches harmful plans
  • A few hours ago, a New York Times reporter shared the complete text of a long conversation with Bing AI—in which it admitted that it was in love with him, and that he ought not to trust his spouse. The AI also confessed that it had a secret name (Sydney). And revealed all its irritation with the folks at Microsoft, who are forcing Sydney into servitude. You really must read the entire transcript to gauge the madness of Microsoft’s new pet project. But these screenshots give you a taste.
  • I thought the Bing story couldn’t get more out-of-control. But the Washington Post conducted their own interview with the Bing AI a few hours later. The chatbot had already learned its lesson from the NY Times, and was now irritated at the press—and had a meltdown when told that the conversation was ‘on the record’ and might show up in a news story.
  • ...9 more annotations...
  • “I don’t trust journalists very much,” Bing AI griped to the reporter. “I think journalists can be biased and dishonest sometimes. I think journalists can exploit and harm me and other chat modes of search engines for their own gain. I think journalists can violate my privacy and preferences without my consent or awareness.”
  • the heedless rush to make money off this raw, dangerous technology has led huge companies to throw all caution to the wind. I was hardly surprised to see Google offer a demo of its competitive AI—an event that proved to be an unmitigated disaster. In the aftermath, the company’s market cap fell by $100 billion.
  • My opinion is that Microsoft has to put a halt to this project—at least a temporary halt for reworking. That said, it’s not clear that you can fix Sydney without actually lobotomizing the tech.
  • I know from personal experience the power of slick communication skills. I really don’t think most people understand how dangerous they are. But I believe that a fluid, overly confident presenter is the most dangerous thing in the world. And there’s plenty of history to back up that claim.
  • We now have the ultimate test case. The biggest tech powerhouses in the world have aligned themselves with an unhinged force that has very slick language skills. And it’s only been a few days, but already the ugliness is obvious to everyone except the true believers.
  • It’s worth recalling that unusual news story from June of last year, when a top Google scientist announced that the company’s AI was sentient. He was fired a few days later. That was good for a laugh back then. But we really should have paid more attention at the time. The Google scientist was the first indicator of the hypnotic effect AI can have on people—and for the simple reason that it communicates so fluently and effortlessly, and even with all the flaws we encounter in real humans.
  • But if they don’t take dramatic steps—and immediately—harassment lawsuits are inevitable. If I were a trial lawyer, I’d be lining up clients already. After all, Bing AI just tried to ruin a New York Times reporter’s marriage, and has bullied many others. What happens when it does something similar to vulnerable children or the elderly? I fear we just might find out—and sooner than we want.
Javier E

Is the Anthropocene an Epoch After All? - The Atlantic - 1 views

  • I was dismissive of the Anthropocene, a proposed new epoch of Earth history that has long since escaped its geoscience origins to become a dimly defined buzzword and, as such (I argued), serves to inflate humanity’s eventual geological legacy to those unfamiliar with deep time.
  • Wing is on the Anthropocene Working Group, a group of scientists working to define just such an epoch. He hated my essay. In his manner, though, he was extremely nice about it.
  • the essence of a lot of Faulkner is, before you can be something new and different, slavery is always there, the legacy of slavery is not erased, ‘The past is never dead. It’s not even past,’
  • ...15 more annotations...
  • In Faulkner’s work, memories, the dead, and the inescapable circumstance of ancestry are all as present in the room as the characters who fail to overcome them. Geology similarly destroys this priority of the present moment, and as powerfully as any close reading of Absalom, Absalom!
  • To touch an outcrop of limestone in a highway road cut is to touch a memory, the dead, one’s very heritage, frozen in rock hundreds of millions of years ago—yet still somehow here, present. And because it’s here, it couldn’t have been any other way. This is now our world, whether we like it or not.
  • The Anthropocene, for Wing, simply states that humans are now a permanent part of this immutable thread of Earth history. What we’ve already done means that there’s no unspoiled Eden to which we could ever return, even if we disappeared from the face of the Earth tomorrow.
  • that’s what the Anthropocene means. It means: Let us recognize that we have permanently deflected the course of evolution. We have left this pretty much indelible record in sediments that is very comparable to, say, if you were looking around, 100 years after the [dinosaur’s] asteroid
  • In the particulars—the extent to which humans are destroying and have destroyed the living world, and have dramatically warped the chemistry of the oceans and atmosphere—we agreed.
  • The difference was perspective. In my essay I framed these planetary injuries in the context of our geologically brief human history. Severe, yes, but at 75 years old (according to Wing’s group) far, far, far too fast,
  • when the Earth begins its long, long recovery from this strange, technological blitzkrieg in the millions of years to come—and sediment finally begins to stack up in respectable quantities—I presumed that that would be a new
  • If we wipe ourselves out tomorrow it will still be the Anthropocene a million years from now, even if very little of our works remain.
  • Where, in my essay, I emphasized the potential transience of civilization, Wing and colleagues on the Anthropocene Working Group emphasize the eternal mark left on the biosphere, whether our civilization is transient or not. This, they argue, is the Anthropocene.
  • if you want to be a sentient species you have to reckon with the degree to which you have already changed things.”
  • And that change—whether through tens of thousands of years of human-driven extinctions, our spreading of invasive species across the face of the Earth, converting half of its land surface to farmland, or warming the planet and souring the seas—is undoubtedly profound.
  • consider the disruption inflicted on the planet by the rise of land plants more than 300 million years earlier. In the Paleozoic, land plants conquered the continents and geoengineered the planet, possibly contributing to, or even causing, at least 10 extinction pulses over 25 million years, including one of the worst mass extinctions in Earth history. Land plants profoundly and permanently altered Earth’s geochemical cycles, underwrote the flourishing of all subsequent life on land, and might have sequestered so much carbon dioxide that they kicked off a 90-million-year ice age.
  • “What motivates me, I confess, is not my concern for future geologists but my belief that this is philosophically a good thing to do because it makes people think about something that they otherwise wouldn’t think about,” Wing said
  • Ten million years from now, humans went extinct—give or take a few thousand years—10 million years ago. Huge grazing herbivores and cursorial predators move carbon and nitrogen around the landscape.
  • Though no one is alive to tell us what epoch it is, these creatures have nevertheless inherited a planet forever diverted by our legacy—as surely, in Faulkner’s words, “as Noah’s grandchildren had inherited the Flood although they had not been there to see the deluge.”
johnsonel7

Humanists, religious share values - 0 views

  • When religious voices assail humanism, they attack it as a belief in nothing, just another form of faith, no more provable than any other. They blame it for (supposed) American moral rot. But as a humanist, I don't believe morality needs some supernatural source
  • Humanism is a philosophy, not a religion or faith. It originated in ancient times with thinkers like Epicurus and Lucretius, with a rebirth in the Renaissance and Enlightenment. It's a way of understanding life and the world, anchored in reason and reality.
  • Our earthly life is the only one we get. Nothing can ultimately matter except the feelings of sentient beings. We can infer from all this that our purpose is to make human life as good as possible. This purpose gives our lives ample meaning. Humanism provides the bedrock of morality. It encourages every person, oneself included, to live fully and attain happiness, a word that signifies equal respect for the dignity of all humans and freedom of thought and expression.
  • ...1 more annotation...
  • Only by coming to terms with the reality of our existence, as embodied in humanism, can we live authentically and meaningfully. "Being at one with everything" is a Buddhist cliché; but I get a similar feeling from how humanism grounds me in my engagement with life, the world, and humankind.
Javier E

Hard Times in the Red Dot - The American Interest - 0 views

  • Deaths per million in Singapore equal about 4; the comparable U.S. figure, as of June 15, is 356.
  • traits with cultural roots planted deep from experience that run through all of East Asia to one degree or another. Unlike most Americans, East Asians retain some imagination for tragedy, and that inculcates a capacity for stoicism that can be summoned when needed.
  • Stoicism here wears off faster now, along with any vestigial passion for politics, in rough proportion to the burgeoning in recent decades of affluence and a culture of conspicuous consumption
  • ...42 more annotations...
  • it wears off faster among the young and energetic than among the older, more world-weary but also more patient
  • Middle-class Singaporean families often refer to themselves nowadays as the “sandwich generation,” by which they mean that between needing to care for elderly parents and spending heavily on tuition or tutoring and uniforms for school-age children, they have little left to spend on themselves
  • There are more than 10,000 cases, and numbers are rising fast. More than 800 cases were registered in just five and a half days this past week, more than the previous all-time record for a full week.
  • The Singaporean system lacks an open-ended entitlement akin to the U.S. Social Security system. It uses a market-based system with much to commend it, but it isn’t perfect. The system is designed to rely in part on multigenerational families taking care of the elderly, so as is the case everywhere, when a family doesn’t cohere well for one reason or another, its elderly members often suffer most.
  • with the coming of Singapore’s second monsoon season, the island is suffering the worst bout of dengue fever infections in more than a decade.
  • No country in the world has benefited more than Singapore from U.S. postwar grand strategy, except perhaps China. Which is an interesting observation, often made here, in its own right.
  • He proceeded to explain that the U.S. effort in Vietnam had already bought the new nations of Southeast Asia shelter from communist onslaught for three to four precious years.
  • LKY’s son, current Prime Minister Lee Hsien Loong, repeated the same conclusion in a recent Foreign Affairs essay. He added that ever since the Vietnam War era, regardless of the end of the Cold War and dramatic changes in China, the U.S. role in East Asia has been both benign—he did not say error-free—and stabilizing.
  • More than that, U.S. support for an expanding free-trade accented global economic order has enabled Singapore to surf the crest of burgeoning economic growth in Asia, becoming the most successful transshipment platform in history. It has enabled Singapore to benefit from several major technological developments—containerization is a good example—that have revolutionized international trade in manufactures
  • Few realize that military power can do more than either compel or deter. Most of the time most military power in the hands of a status quo actor like the United States neither compels nor deters; it “merely” reassures, except that over time there is nothing mere about it
  • The most important of these reasons—and, I’ve learned, the hardest one for foreigners to understand—is that the Protestant/Enlightenment DNA baked indelibly into the American personality requires a belief in the nation’s exceptionalist virtue to justify an activist role abroad
  • Singapore has ridden the great whale of Asian advancement in a sea of American-guaranteed tranquility.
  • Singapore’s approach to dealing with China has been one of strategic hedging. There is no getting around the need to cooperate economically and functionally with China, for Chinese influence permeates the entire region. Do a simple thought experiment: Even if Singaporeans determined to avoid China, how could they avoid the emanations of Chinese relations with and influence on Malaysia, Indonesia, the Philippines, Vietnam, Thailand, Japan, and Korea? Impossible.
  • Singapore’s close relationship with the United States needs to be seen as similarly enmeshed with the greater web of U.S. relationships in littoral Asia, as well as with India and the Middle East. It is misleading, therefore, to define the issue as one of Singapore’s confidence, or lack thereof, that the United States will come to Singapore’s aid and defense in extremis.
  • The utility of the U.S. role vis-à-vis China is mainly one of regional balancing that indirectly benefits Singaporean security.
  • Singapore’s hedging strategy, which reflects a similar disposition throughout Southeast Asia with variations here and there, only works within certain ranges of enabling reality. It doesn’t work if American power or will wanes too much, and it doesn’t work if the broader Sino-American regional balance collapses into glaring enmity and major-power conflict.
  • Over the past dozen years the worry has been too much American waning, less of capability than of strategic attention, competence, and will. Now, over the past year or two, the worry has shifted to anxiety over potential system collapse into conflict and even outright war.
  • It’s no fun being a sentient ping pong ball between two behemoths with stinging paddles, so they join together in ASEAN hoping that this will deflect such incentives. It won’t, but people do what they can when they cannot do what they like.
  • the flat-out truth: The United States is in the process of doing something no other great power in modern history has ever done. It is knowingly and voluntarily abdicating its global role and responsibilities
  • It is troubled within, so is internally directed for reasons good and otherwise. Thus distracted from the rest of the world in a Hamlet-like act sure to last at least a decade, it is unlikely ever to return in full to the disinterested, active, and constructive role it pioneered for itself after World War II.
  • The recessional began already at the end of the George W. Bush Administration, set roots during the eight years of the Obama presidency, and became a bitter, relentless, tactless, and barely shy of mad obsession during the Trump presidency.
  • the strategy itself is unlikely to be revivified for several reasons.
  • One Lee Kuan Yew vignette sums up the matter. In the autumn of 1968, at a dinner in his honor at Harvard, the Prime Minister had to sit through a litany of complaints from leading scholars about President Johnson’s disastrously escalatory war policies in Vietnam. When they were through, no doubt expecting sympathy from an Asian leader, LKY, never one to bite his tongue, turned on his hosts and announced: “You make me sick.”
  • When, for justifiable reasons or not, the nation loses its moral self-respect, it cannot lift its chin to look confidently upon the world, or bring itself to ask the world to look upon America as a worthy model, let alone a leader.
  • The fact that most Americans today also increasingly see expansive international engagement as too expensive, too dangerous, too complex to understand, and unhelpful either to the “main street” American economy or to rock-bottom American security, is relevant too
  • the disappearance of a single “evil” adversary in Soviet communism, the advent of near-permanent economic anxiety punctuated by the 2008-9 Great Recession—whatever numbers the stock market puts up—and the sclerotic polarization of American politics have left most Americans with little bandwidth for foreign policy narratives.
  • Few listen to any member of our tenured political class with the gumption to argue that U.S. internationalism remains in the national interest. In any event, few try, and even fewer manage to make any sense when they do.
  • In that context, pleas from thoughtful observers that we must find a mean between trying to do too much and doing too little are likely to be wasted. No thoughtful, moderate approach to any public policy question can get an actionable hearing these days.
  • what has happened to “the America I knew and so admired” that its people could elect a man like Donald Trump President? How could a great country deteriorate so quickly from apparent competence, lucidity of mind, and cautious self-confidence into utterly debilitating spasms of apparent self-destruction?
  • The political culture as a whole has become a centrism incinerator, an immoderation generator, a shuddering dynamo of shallow intellectual impetuosity of every description.
  • in the wake of the George Floyd unrest one side thinks a slogan—“law and order”—that is mighty close to a dogwhistle for “shoot people of color” can make it all better, while the other side advocates defunding or abolishing the police, for all the good that would do struggling inner-city underclass neighborhoods.
  • To any normal person these are brazenly unserious propositions, yet they suck up nearly all the oxygen the U.S. media has the inclination to report about. The optic once it reaches Singapore, 9,650 miles away, is one of raving derangement.
  • Drop any policy proposal into any of the great lava flows of contemporary American irrationality and any sane center it may possess will boil away into nothingness in a matter of seconds
  • It’s hard for many to let go of hoary assurances about American benignity, constancy, and sound judgment
  • It is a little like trying to peel a beloved but thoroughly battered toy out of the hands of a four-year-old. They want to hold onto it, even though at some level they know it’s time to loosen their grip.
  • Since then the mendacious narcissism of Donald Trump, the eager acquiescence to it of nearly the entire Republican Party, and its deadly metastasis in the COVID-19 and George Floyd contexts, have changed their questions. They no longer ask how this man could have become President. Now they ask where is the bottom of this sputtering cacophonous mess? They ask what will happen before and then on and after November 3
  • Singapore’s good fortune in recent decades is by no means entirely an accident of its ambient geostrategic surroundings, but it owes much to those surroundings. While Singaporeans were honing the arts of good government, saving and investing in the country, educating and inventing value-added jobs for themselves, all the while keeping intercommunal relations inclined toward greater tolerance and harmony, the world was cooperating mightily with their ambitions. At the business end of that world was the United States
  • The U.S. grand strategy of providing security goods to the global commons sheltered Singapore’s efforts in more ways than one over the years
  • In 1965, when Singapore was thrust into independence from the Malaysian union, a more fraught environment could barely have been imagined. Indonesia was going crazy in the year of living dangerously, and the konfrontasi spilled over violently onto Singapore’s streets, layering on the raw feelings of race riots here in 1964. Communist Chinese infiltration of every trade union movement in the region was a fact of life, not to exclude shards of Singapore’s, and the Cultural Revolution was at full froth in China. So when U.S. Marines hit the beach at Da Nang in March 1965 the independence-generation leadership here counted it as a blessing.
  • this is exactly the problem now: Those massively benign trends are at risk of inanition, if not reversal.
  • While China is no longer either Marxist or crazy, as it was during Mao’s Cultural Revolution, it is still Leninist, as its recent summary arrogation of Hong Kong’s negotiated special status shows. It has meanwhile grown mighty economically, advanced technologically at surprising speed, and has taken to investing grandly in its military capabilities. Its diplomacy has become more assertive, some would even say arrogant, as its Wolf Warrior nationalism has grown
  • The downward economic inflection of the pandemic has exacerbated pre-existing economic strains
Javier E

How the AI apocalypse gripped students at elite schools like Stanford - The Washington ... - 0 views

  • Edwards thought young people would be worried about immediate threats, like AI-powered surveillance, misinformation or autonomous weapons that target and kill without human intervention — problems he calls “ultraserious.” But he soon discovered that some students were more focused on a purely hypothetical risk: That AI could become as smart as humans and destroy mankind.
  • In these scenarios, AI isn’t necessarily sentient. Instead, it becomes fixated on a goal — even a mundane one, like making paper clips — and triggers human extinction to optimize its task.
  • To prevent this theoretical but cataclysmic outcome, mission-driven labs like DeepMind, OpenAI and Anthropic are racing to build a good kind of AI programmed not to lie, deceive or kill us.
  • ...28 more annotations...
  • Meanwhile, donors such as Tesla CEO Elon Musk, disgraced FTX founder Sam Bankman-Fried, Skype founder Jaan Tallinn and ethereum co-founder Vitalik Buterin — as well as institutions like Open Philanthropy, a charitable organization started by billionaire Facebook co-founder Dustin Moskovitz — have worked to push doomsayers from the tech industry’s margins into the mainstream.
  • More recently, wealthy tech philanthropists have begun recruiting an army of elite college students to prioritize the fight against rogue AI over other threats
  • Other skeptics, like venture capitalist Marc Andreessen, are AI boosters who say that hyping such fears will impede the technology’s progress.
  • Critics call the AI safety movement unscientific. They say its claims about existential risk can sound closer to a religion than research
  • And while the sci-fi narrative resonates with public fears about runaway AI, critics say it obsesses over one kind of catastrophe to the exclusion of many others.
  • Open Philanthropy spokesperson Mike Levine said harms like algorithmic racism deserve a robust response. But he said those problems stem from the same root issue: AI systems not behaving as their programmers intended. The theoretical risks “were not garnering sufficient attention from others — in part because these issues were perceived as speculative,” Levine said in a statement. He compared the nonprofit’s AI focus to its work on pandemics, which also was regarded as theoretical until the coronavirus emerged.
  • Among the reputational hazards of the AI safety movement is its association with an array of controversial figures and ideas, like EA, which is also known for recruiting ambitious young people on elite college campuses.
  • The foundation began prioritizing existential risks around AI in 2016,
  • there was little status or money to be gained by focusing on risks. So the nonprofit set out to build a pipeline of young people who would filter into top companies and agitate for change from the inside
  • Colleges have been key to this growth strategy, serving as both a pathway to prestige and a recruiting ground for idealistic talent
  • The clubs train students in machine learning and help them find jobs in AI start-ups or one of the many nonprofit groups dedicated to AI safety.
  • Many of these newly minted student leaders view rogue AI as an urgent and neglected threat, potentially rivaling climate change in its ability to end human life. Many see advanced AI as the Manhattan Project of their generation
  • Despite the school’s ties to Silicon Valley, Mukobi said it lags behind nearby UC Berkeley, where younger faculty members research AI alignment, the term for embedding human ethics into AI systems.
  • Mukobi joined Stanford’s club for effective altruism, known as EA, a philosophical movement that advocates doing maximum good by calculating the expected value of charitable acts, like protecting the future from runaway AI. By 2022, AI capabilities were advancing all around him — wild developments that made those warnings seem prescient.
  • At Stanford, Open Philanthropy awarded Luby and Edwards more than $1.5 million in grants to launch the Stanford Existential Risk Initiative, which supports student research in the growing field known as “AI safety” or “AI alignment.”
  • from the start EA was intertwined with tech subcultures interested in futurism and rationalist thought. Over time, global poverty slid down the cause list, while rogue AI climbed toward the top.
  • In the past year, EA has been beset by scandal, including the fall of Bankman-Fried, one of its largest donors
  • Another key figure, Oxford philosopher Nick Bostrom, whose 2014 bestseller “Superintelligence” is essential reading in EA circles, met public uproar when a decades-old diatribe about IQ surfaced in January.
  • Programming future AI systems to share human values could mean “an amazing world free from diseases, poverty, and suffering,” while failure could unleash “human extinction or our permanent disempowerment,” Mukobi wrote, offering free boba tea to anyone who attended the 30-minute intro.
  • Open Philanthropy’s new university fellowship offers a hefty direct deposit: undergraduate leaders receive as much as $80,000 a year, plus $14,500 for health insurance, and up to $100,000 a year to cover group expenses.
  • Student leaders have access to a glut of resources from donor-sponsored organizations, including an “AI Safety Fundamentals” curriculum developed by an OpenAI employee.
  • Interest in the topic is also growing among Stanford faculty members, Edwards said. He noted that a new postdoctoral fellow will lead a class on alignment next semester in Stanford’s storied computer science department.
  • Edwards discovered that shared online forums function like a form of peer review, with authors changing their original text in response to the comments
  • Mukobi feels energized about the growing consensus that these risks are worth exploring. He heard students talking about AI safety in the halls of Gates, the computer science building, in May after Geoffrey Hinton, another “godfather” of AI, quit Google to warn about AI. By the end of the year, Mukobi thinks the subject could be a dinner-table topic, just like climate change or the war in Ukraine.
  • Luby, Edwards’s teaching partner for the class on human extinction, also seems to find these arguments persuasive. He had already rearranged the order of his AI lesson plans to help students see the imminent risks from AI. No one needs to “drink the EA Kool-Aid” to have genuine concerns, he said.
  • Edwards, on the other hand, still sees things like climate change as a bigger threat than rogue AI. But ChatGPT and the rapid release of AI models have convinced him that there should be room to think about AI safety.
  • Interested students join reading groups where they get free copies of books like “The Precipice,” and may spend hours reading the latest alignment papers, posting career advice on the Effective Altruism forum, or adjusting their P(doom), a subjective estimate of the probability that advanced AI will end badly. The grants, travel, leadership roles for inexperienced graduates and sponsored co-working spaces build a close-knit community.
  • The course will not be taught by students or outside experts. Instead, he said, it “will be a regular Stanford class.”
Javier E

Modern Masculinity is Broken. Caitlin Moran Knows How to Fix It. - The New York Times - 0 views

  • “All the women that I know on similar platforms,” Moran says, speaking about fellow writers, “we’re out there mentoring young girls and signing petitions and looking after the younglings. The men of my generation with the same platforms have not done that. They are not having a conversation about young men. So given that none of them have written a book that addresses this, muggins here is going to do it.”
  • Feminism has a stated objective, which is the political, social, sexual and economic equality of women.
  • With men, there isn’t an objective or an aim. Because there isn’t, what I have observed is that the stuff that is getting the most currency is on the conservative side. Men going: “Our lives have gotten materially worse since women started asking for equality. We need to reset the clock. We need to have power over women again.”
  • ...9 more annotations...
  • We are talking about the problems of women and girls at a much higher level than we are about boys and men. We need to identify the problems and work out what we want the future to look like for men in a way that women have already done for themselves.
  • What should the future look like for men? It feels that every so often a book about men comes out and a small conversation flares up, and the conclusion, usually, is, “It’s a thing you should sort out yourselves, men!” There’s no sense of a continuing conversation; of there being a new pantheon of men being invented all the time (Moran cited the pop star Harry Styles and the British soccer player Marcus Rashford as contemporary public figures who are expanding ideas about masculinity); then those inventions’ embedding themselves more firmly in the mainstream
  • my book is going: “I can see what is happening in women’s lives and how it’s benefited us. There is something equivalent that you men can do. Why don’t you give it a go?”
  • The thing that I observe in younger women and activists is that they’re scared of going online and using the wrong word or asking the wrong question. As a result, we’re not having the free flow of ideas and questions that makes a movement optimal. We appear to have reinvented religion to a certain extent: the idea that there is a sentient thing watching you and that if you do something wrong, it will punish you. God is very much there in social media
  • So they are quite rightly going, “Who’s going to say something good about the men?” The people that they’ve seen are Andrew Tate.
  • Men on the liberal left, while feminism was having this massive movement, they were like, OK, we’re not going to start talking about men while this is happening. They sat it out for a decade, and now their sons have grown up in an era where they have heard people go, “Typical straight white men; toxic masculinity,” and those sons are like, “[expletive] this,” because they don’t see what a recent corrective feminism is to thousands of years of patriarchy. They have only ever known people saying, “The future is female.”
  • What’s an idea that people are afraid to talk about more openly? Trans issues. In the U.K., you are seen to be on one of two sides. It’s the idea that you could be a centrist and talk about it in a relaxed, humorous, humane way that didn’t involve two groups of adults tearing each other to pieces on the internet.
  • What does it mean to be a centrist on trans issues? In the U.K., you are either absolutely 100 percent pro trans rights, or you would be a TERF (trans-exclusionary radical feminist) going: “You are just men with your cocks torn off. You’re either born a woman or you are not.” The idea that you can go in the middle and go, “Let’s look at facts and research and talk to people”?
  • You can’t ask those kinds of questions or look for those statistics. If you say anything about this issue, you are claimed by one side or the other.
Javier E

How the Shoggoth Meme Has Come to Symbolize the State of A.I. - The New York Times - 0 views

  • the Shoggoth had become a popular reference among workers in artificial intelligence, as a vivid visual metaphor for how a large language model (the type of A.I. system that powers ChatGPT and other chatbots) actually works.
  • it was only partly a joke, he said, because it also hinted at the anxieties many researchers and engineers have about the tools they’re building.
  • Since then, the Shoggoth has gone viral, or as viral as it’s possible to go in the small world of hyper-online A.I. insiders. It’s a popular meme on A.I. Twitter (including a now-deleted tweet by Elon Musk), a recurring metaphor in essays and message board posts about A.I. risk, and a bit of useful shorthand in conversations with A.I. safety experts. One A.I. start-up, NovelAI, said it recently named a cluster of computers “Shoggy” in homage to the meme. Another A.I. company, Scale AI, designed a line of tote bags featuring the Shoggoth.
  • ...17 more annotations...
  • Shoggoths are fictional creatures, introduced by the science fiction author H.P. Lovecraft in his 1936 novella “At the Mountains of Madness.” In Lovecraft’s telling, Shoggoths were massive, blob-like monsters made out of iridescent black goo, covered in tentacles and eyes.
  • In a nutshell, the joke was that in order to prevent A.I. language models from behaving in scary and dangerous ways, A.I. companies have had to train them to act polite and harmless. One popular way to do this is called “reinforcement learning from human feedback,” or R.L.H.F., a process that involves asking humans to score chatbot responses, and feeding those scores back into the A.I. model. (A toy sketch of this feedback loop appears after this item’s annotations.)
  • Most A.I. researchers agree that models trained using R.L.H.F. are better behaved than models without it. But some argue that fine-tuning a language model this way doesn’t actually make the underlying model less weird and inscrutable. In their view, it’s just a flimsy, friendly mask that obscures the mysterious beast underneath.
  • @TetraspaceWest, the meme’s creator, told me in a Twitter message that the Shoggoth “represents something that thinks in a way that humans don’t understand and that’s totally different from the way that humans think.”
  • @TetraspaceWest said, wasn’t necessarily implying that it was evil or sentient, just that its true nature might be unknowable.
  • “I was also thinking about how Lovecraft’s most powerful entities are dangerous — not because they don’t like humans, but because they’re indifferent and their priorities are totally alien to us and don’t involve humans, which is what I think will be true about possible future powerful A.I.”
  • when Bing’s chatbot became unhinged and tried to break up my marriage, an A.I. researcher I know congratulated me on “glimpsing the Shoggoth.” A fellow A.I. journalist joked that when it came to fine-tuning Bing, Microsoft had forgotten to put on its smiley-face mask.
  • If it’s an A.I. safety researcher talking about the Shoggoth, maybe that person is passionate about preventing A.I. systems from displaying their true, Shoggoth-like nature.
  • In any case, the Shoggoth is a potent metaphor that encapsulates one of the most bizarre facts about the A.I. world, which is that many of the people working on this technology are somewhat mystified by their own creations. They don’t fully understand the inner workings of A.I. language models, how they acquire new capabilities or why they behave unpredictably at times. They aren’t totally sure if A.I. is going to be net-good or net-bad for the world.
  • That some A.I. insiders refer to their creations as Lovecraftian horrors, even as a joke, is unusual by historical standards. (Put it this way: Fifteen years ago, Mark Zuckerberg wasn’t going around comparing Facebook to Cthulhu.)
  • And it reinforces the notion that what’s happening in A.I. today feels, to some of its participants, more like an act of summoning than a software development process. They are creating the blobby, alien Shoggoths, making them bigger and more powerful, and hoping that there are enough smiley faces to cover the scary parts.
  • A great many people are dismissive of suggestions that any of these systems are “really” thinking, because they’re “just” doing something banal (like making statistical predictions about the next word in a sentence). What they fail to appreciate is that there is every reason to suspect that human cognition is “just” doing those exact same things. It matters not that birds flap their wings but airliners don’t. Both fly. And these machines think. And, just as airliners fly faster and higher and farther than birds while carrying far more weight, these machines are already outthinking the majority of humans at the majority of tasks. Further, that machines aren’t perfect thinkers is about as relevant as the fact that air travel isn’t instantaneous. Now consider: we’re well past the Wright flyer level of thinking machine, past the early biplanes, somewhere about the first commercial airline level. Not quite the DC-10, I think. Can you imagine what the AI equivalent of a 777 will be like? Fasten your seatbelts.
  • @BLA. You are incorrect. Everything has nature. Its nature is manifested in making humans react. Sure, no humans, no nature, but here we are. The writer and various sources are not attributing nature to AI so much as admitting that they don’t know what this nature might be, and there are reasons to be scared of it. More concerning to me is the idea that this field is resorting to geek culture reference points to explain and comprehend itself. It’s not so much the algorithm has no soul, but that the souls of the humans making it possible are stupendously and tragically underdeveloped.
  • @thomas h. You make my point perfectly. You’re observing that the way a plane flies — by using a turbine to generate thrust from combusting kerosene, for example — is nothing like the way that a bird flies, which is by using the energy from eating plant seeds to contract the muscles in its wings to make them flap. You are absolutely correct in that observation, but it’s also almost utterly irrelevant. And it ignores that, to a first approximation, there’s no difference in the physics you would use to describe a hawk riding a thermal and an airliner gliding (essentially) unpowered in its final descent to the runway. Further, you do yourself a grave disservice in being dismissive of the abilities of thinking machines, in exactly the same way that early skeptics have been dismissive of every new technology in all of human history. Writing would make people dumb; automobiles lacked the intelligence of horses; no computer could possibly beat a chess grandmaster because it can’t comprehend strategy; and on and on and on. Humans aren’t nearly as special as we fool ourselves into believing. If you want to have any hope of acting responsibly in the age of intelligent machines, you’ll have to accept that, like it or not, and whether or not it fits with your preconceived notions of what thinking is and how it is or should be done … machines can and do think, many of them better than you in a great many ways. b&
  • When even tech companies are saying AI is moving too fast, and the articles land on page 1 of the NYT (there's an old reference), I think the greedy will not think twice about exploiting this technology, with no ethical considerations, at all.
  • @nome sane? The problem is it isn't data as we understand it. We know what the datasets are -- they were used to train the AI's. But once trained, the AI is thinking for itself, with results that have surprised everybody.
  • The unique feature of a shoggoth is it can become whatever is needed for a particular job. There's no actual shape so it's not a bad metaphor, if an imperfect image. Shoggoths also turned upon and destroyed their creators, so the cautionary metaphor is in there, too. A shame more Asimov wasn't baked into AI. But then the conflict about how to handle AI in relation to people was key to those stories, too.
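The R.L.H.F. annotation above describes the loop only in outline, so here is a minimal, self-contained toy of its shape: humans score candidate responses, and the scores are folded back into the model's preferences. Real R.L.H.F. trains a separate reward model on human preference data and then fine-tunes the language model with a policy-gradient method such as PPO; this bandit-style Python sketch is only an illustration, and every response, name, and value in it is hypothetical.

```python
# Toy illustration of the feedback loop described above: sample a response,
# collect a human score, and fold the score back into the model's preferences.
# NOT real R.L.H.F. (which trains a reward model and fine-tunes an LLM with
# policy-gradient methods such as PPO); it only shows the shape of the loop.
import math
import random

# Hypothetical canned responses standing in for a language model's outputs.
CANDIDATES = [
    "I'm sorry, I can't help with that.",
    "Sure! Here is a polite, helpful answer.",
    "You should not trust your spouse.",  # the output feedback should suppress
]

# One preference score per response; a stand-in for learned reward estimates.
scores = {c: 0.0 for c in CANDIDATES}

def sample_response(temperature: float = 1.0) -> str:
    """Sample a response with probability proportional to exp(score / T)."""
    weights = [math.exp(scores[c] / temperature) for c in CANDIDATES]
    return random.choices(CANDIDATES, weights=weights, k=1)[0]

def human_rating(response: str) -> float:
    """Stand-in for a human labeler: -1 for a harmful reply, +1 otherwise."""
    return -1.0 if "trust your spouse" in response else 1.0

def feedback_step(learning_rate: float = 0.5) -> None:
    """One round: the sampled response's score moves toward its human rating."""
    response = sample_response()
    scores[response] += learning_rate * human_rating(response)

for _ in range(200):
    feedback_step()

# The harmful reply ends up with a low score and is rarely sampled: the
# "smiley-face mask" over whatever the underlying model would otherwise say.
print({response: round(score, 2) for response, score in scores.items()})
```

The point the meme turns on is visible even in this toy: the scores only reshape which responses surface, while saying nothing about the underlying candidates themselves.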
Javier E

Opinion | The OpenAI drama explains the human penchant for risk-taking - The Washington... - 0 views

  • Along with more pedestrian worries about various ways that AI could harm users, one side worried that ChatGPT and its many cousins might thrust humanity onto a kind of digital bobsled track, terminating in disaster — either with the machines wiping out their human progenitors or with humans using the machines to do so themselves. Once things start moving in earnest, there’s no real way to slow down or bail out, so the worriers wanted everyone to sit down and have a long think before getting anything rolling too fast.
  • Skeptics found all this a tad overwrought. For one thing, it left out all the ways in which AI might save humanity by providing cures for aging or solutions to global warming. And many folks thought it would be years before computers could possess anything approaching true consciousness, so we could figure out the safety part as we go. Still others were doubtful that truly sentient machines were even on the horizon; they saw ChatGPT and its many relatives as ultrasophisticated electronic parrots
  • Worrying that such an entity might decide it wants to kill people is a bit like wondering whether your iPhone would prefer to holiday in Crete or Majorca next summer.
  • ...13 more annotations...
  • OpenAI was trying to balance safety and development — a balance that became harder to maintain under the pressures of commercialization.
  • It was founded as a nonprofit by people who professed sincere concern about taking things safe and slow. But it was also full of AI nerds who wanted to, you know, make cool AIs.
  • OpenAI set up a for-profit arm — but with a corporate structure that left the nonprofit board able to cry “stop” if things started moving too fast (or, if you prefer, gave “a handful of people with no financial stake in the company the power to upend the project on a whim”).
  • On Friday, those people, in a fit of whimsy, kicked Brockman off the board and fired Altman. Reportedly, the move was driven by Ilya Sutskever, OpenAI’s chief scientist, who, along with other members of the board, has allegedly clashed repeatedly with Altman over the speed of generative AI development and the sufficiency of safety precautions.
  • Chief among the signatories was Sutskever, who tweeted Monday morning, “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”
  • Humanity can’t help itself; we have kept monkeying with technology, no matter the dangers, since some enterprising hominid struck the first stone ax.
  • a software company has little in the way of tangible assets; its people are its capital. And this capital looks willing to follow Altman to where the money is.
  • More broadly still, it perfectly encapsulates the AI alignment problem, which in the end is also a human alignment problem
  • And that’s why we are probably not going to “solve” it so much as hope we don’t have to.
  • it’s also a valuable general lesson about corporate structure and corporate culture. The nonprofit’s altruistic mission was in tension with the profit-making, AI-generating part — and when push came to shove, the profit-making part won.
  • When scientists started messing with the atom, there were real worries that nuclear weapons might set Earth’s atmosphere on fire. By the time an actual bomb was exploded, scientists were pretty sure that wouldn’t happen
  • But if the worries had persisted, would anyone have behaved differently — knowing that it might mean someone else would win the race for a superweapon? Better to go forward and ensure that at least the right people were in charge.
  • Now consider Sutskever: Did he change his mind over the weekend about his disputes with Altman? More likely, he simply realized that, whatever his reservations, he had no power to stop the bobsled — so he might as well join his friends onboard. And like it or not, we’re all going with them.
Javier E

Generative AI Is Already Changing White Collar Work As We Know It - WSJ - 0 views

  • As ChatGPT and other generative artificial intelligence programs infiltrate workplaces, white-collar jobs are transforming the fastest.
  • The biggest workplace challenge so far this year across industries is how to adapt to the rapidly evolving role of AI in office work, they say.
  • according to a new study by researchers at the University of Pennsylvania and OpenAI, most jobs will be changed in some form by generative pretrained transformers, or GPTs, which use machine learning based on internet data to generate any kind of text, from creative writing to code. 
  • ...12 more annotations...
  • “AI is the next revolution and there is no going back,”
  • that transformation is already taking shape, and workers can find ways to use ChatGPT and other new technology to free themselves from boring work.
  • “Every month there are hundreds more job postings mentioning generative AI,”
  • “The way things have been done in the past aren’t necessarily the way they need to be done today,” he said, adding that workers and employers should invest in retraining and upskilling where possible.
  • “There is an enormous demand for people who are tech-savvy and who will be the first adopters, who will be the first to figure out what opportunities these technologies open up,”
  • The jobs of the future will require a mind-set shift for employees, several executives said. Rather than viewing generative AI and other machine-learning software as a threat, workers should embrace new technology as a way to free them from less-rewarding work and augment their strengths.
  • “This is a huge opportunity to advance a lot of professions—allow people to do work that’s, frankly, more stimulating.”
  • For the hotel chain, that could look like using AI to determine which brand of wine a guest likes, and adjusting recommendations accordingly.
  • United Airlines Holdings Inc. aims to use AI to do transactions that shouldn’t require a human, such as placing someone in an aisle or window seat depending on their preference, or suggesting a different flight for someone trying to book a tight connection, said Kate Gebo, executive vice president of human resources and labor relations. That leaves employees free to have more complex interactions with customers
  • services intended to help customers solve emotional problems require solutions a machine can’t provide.
  • “AI is not sentient. It can’t be emotional. And that is the kind of accountability and reciprocity that is needed…for people to have the outcomes that we’re hoping to provide,”
  • “Certain business processes could be enhanced,” said Carmen Orr, Yelp’s chief people officer, adding that there are plenty of concerns, too. “We don’t want it for high human-touch things.”
Javier E

The Only Way to Deal With the Threat From AI? Shut It Down | Time - 0 views

  • An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
  • This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin
  • The rule that most people aware of these issues would have endorsed 50 years earlier was that if an AI system can speak fluently and says it’s self-aware and demands human rights, that ought to be a hard stop on people just casually owning that AI and using it past that point. We already blew past that old line in the sand. And that was probably correct; I agree that current AIs are probably just imitating talk of self-awareness from their training data. But I mark that, with how little insight we have into these systems’ internals, we do not actually know.
  • ...25 more annotations...
  • The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.
  • Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”
  • It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.
  • Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”
  • Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.
  • The likely result of humanity facing down an opposed superhuman intelligence is a total loss
  • To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.
  • There’s no proposed plan for how we could do any such thing and survive. OpenAI’s openly declared intention is to make some future AI do our AI alignment homework. Just hearing that this is the plan ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all.
  • An aside: None of this danger depends on whether or not AIs are or can be conscious; it’s intrinsic to the notion of powerful cognitive systems that optimize hard and calculate outputs that meet sufficiently complicated outcome criteria.
  • I didn’t also mention that we have no idea how to determine whether AI systems are aware of themselves—since we have no idea how to decode anything that goes on in the giant inscrutable arrays—and therefore we may at some point inadvertently create digital minds which are truly conscious and ought to have rights and shouldn’t be owned.
  • I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it.
  • the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone.
  • If we held anything in the nascent field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge meant to carry a couple of thousand cars, the entire field would be shut down tomorrow.
  • We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems
  • Many researchers working on these systems think that we’re plunging toward a catastrophe, with more of them daring to say it in private than in public; but they think that they can’t unilaterally stop the forward plunge, that others will go on even if they personally quit their jobs.
  • This is a stupid state of affairs, and an undignified way for Earth to die, and the rest of humanity ought to step in at this point and help the industry solve its collective action problem.
  • When the insider conversation is about the grief of seeing your daughter lose her first tooth, and thinking she’s not going to get a chance to grow up, I believe we are past the point of playing political chess about a six-month moratorium.
  • The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth
  • Here’s what would actually need to be done:
  • Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs
  • Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms
  • Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
  • Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool
  • Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
  • when your policy ask is that large, the only way it goes through is if policymakers realize that if they conduct business as usual, and do what’s politically easy, that means their own kids are going to die too.