Group items matching "Ai" in title, tags, annotations or url
Javier E

The Lesson of 1975 for Today's Pessimists - WSJ

  • out of the depths of the inflation-riddled ’70s came the democratization of computing and finance. It feels to me as if we’re at a similar point. What’s going to be democratized next?
  • Start with quantum computing, autonomous vehicles and delivery drones. Even the once-in-a-generation innovation of machine learning and artificial intelligence is generating fear and doubt. Like homebrew computers, we’re at the rudimentary stage.
  • Especially in medicine. Healthcare pricing, billing and reimbursements are completely nonsensical. ObamaCare made it worse, but change is beginning. Pandemic-enabled telemedicine is a crack in the old way’s armor. Self-directed healthcare will grow. Ozempic and magic pills are changing lives. Crispr gene editing is also rudimentary but could extend healthy life expectancies. Add precision oncology, computational biology, focused ultrasound and more. The upside is endless.
  • AI will usher in knowledgeable and friendly automated customer service any day now. But there is so much else on the innovation horizon: osmotic energy, geothermal, nuclear fusion, autonomous farming, photonic computing, human longevity. Plus all the stuff in research labs we haven’t heard of yet, let alone invented and brought to market.
  • Every industry is about to change, which will defy skeptics. Figure out how, and then, as Mr. Wozniak suggests, get your hands dirty. As always, the pain point is cost. Look for things that get cheaper—that’s the only way to clear the smoke and get new marvels into global consumer hands.
Javier E

In defense of science fiction - by Noah Smith - Noahpinion

  • I’m a big fan of science fiction (see my list of favorites from last week)! So when people start bashing the genre, I tend to leap to its defense
  • this time, the people doing the bashing are some serious heavyweights themselves — Charles Stross, the celebrated award-winning sci-fi author, and Tyler Austin Harper, a professor who studies science fiction for a living
  • The two critiques center around the same idea — that rich people have misused sci-fi, taking inspiration from dystopian stories and working to make those dystopias a reality.
  • [Science fiction’s influence]…leaves us facing a future we were all warned about, courtesy of dystopian novels mistaken for instruction manuals…[T]he billionaires behind the steering wheel have mistaken cautionary tales and entertainments for a road map, and we’re trapped in the passenger seat.
  • …even then it would be hard to argue exogeneity, since censorship is a response to society’s values as well as a potential cause of them.
  • Stross is alleging that the billionaires are getting Gernsback and Campbell’s intentions exactly right. His problem is simply that Gernsback and Campbell were kind of right-wing, at least by modern standards, and he’s worried that their sci-fi acted as propaganda for right-wing ideas.
  • The question of whether literature has a political effect is an empirical one — and it’s a very difficult empirical one. It’s extremely hard to test the hypothesis that literature exerts a diffuse influence on the values and preconceptions of the citizenry
  • I think Stross really doesn’t come up with any credible examples of billionaires mistaking cautionary tales for road maps. Instead, most of his article focuses on a very different critique — the idea that sci-fi authors inculcate rich technologists with bad values and bad visions of what the future ought to look like:
  • I agree that the internet and cell phones have had an ambiguous overall impact on human welfare. If modern technology does have a Torment Nexus, it’s the mobile-social nexus that keeps us riveted to highly artificial, attenuated parasocial interactions for every waking hour of our day. But these technologies are still very young, and it remains to be seen whether the ways in which we use them will get better or worse over time.
  • There are very few technologies — if any — whose impact we can project into the far future at the moment of their inception. So unless you think our species should just refuse to create any new technology at all, you have to accept that each one is going to be a bit of a gamble.
  • As for weapons of war, those are clearly bad in terms of their direct effects on the people on the receiving end. But it’s possible that more powerful weapons — such as the atomic bomb — serve to deter more deaths than they cause
  • yes, AI is risky, but the need to manage and limit risk is a far cry from the litany of negative assumptions and extrapolations that often gets flung in the technology’s direction
  • I think the main problem with Harper’s argument is simply techno-pessimism. So far, technology’s effects on humanity have been mostly good, lifting us up from the muck of desperate poverty and enabling the creation of a healthier, more peaceful, more humane world. Any serious discussion of the effects of innovation on society must acknowledge that. We might have hit an inflection point where it all goes downhill from here, and future technologies become the Torment Nexuses that we’ve successfully avoided in the past. But it’s very premature to assume we’ve hit that point.
  • I understand that the 2020s are an exhausted age, in which we’re still reeling from the social ructions of the 2010s. I understand that in such a weary and fearful condition, it’s natural to want to slow the march of technological progress as a proxy for slowing the headlong rush of social progress
  • And I also understand how easy it is to get negatively polarized against billionaires, and any technologies that billionaires invent, and any literature that billionaires like to read.
  • But at a time when we’re creating vaccines against cancer and abundant clean energy and any number of other life-improving and productivity-boosting marvels, it’s a little strange to think that technology is ruining the world
  • The dystopian elements of modern life are mostly just prosaic, old things — political demagogues, sclerotic industries, social divisions, monopoly power, environmental damage, school bullies, crime, opiates, and so on
Javier E

I tried out an Apple Vision Pro. It frightened me | Arwa Mahdawi | The Guardian

  • Despite all the marketed use cases, the most impressive aspect of it is the immersive video
  • Watching a movie, however, feels like you’ve been transported into the content.
  • that raises serious questions about how we perceive the world and what we consider reality. Big tech companies are desperate to rush this technology out but it’s not clear how much they’ve been worrying about the consequences.
  • it is clear that its widespread adoption is a matter of when, not if. There is no debate that we are moving towards a world where “real life” and digital technology seamlessly blur
  • Over the years there have been multiple reports of people being harassed and even “raped” in the metaverse: an experience that feels scarily real because of how immersive virtual reality is. As the lines between real life and the digital world blur to a point that they are almost indistinguishable, will there be a meaningful difference between online assault and an attack in real life?
  • more broadly, spatial computing is going to alter what we consider reality
  • Researchers from Stanford and the University of Michigan recently undertook a study on the Vision Pro and other “passthrough” headsets (that’s the technical term for the feature that brings VR content into your real-world surroundings, so you see what’s around you while using the device) and emerged with some stark warnings about how this tech might rewire our brains and “interfere with social connection”.
  • These headsets essentially give us all our private worlds and rewrite the idea of a shared reality. The cameras through which you see the world can edit your environment – you can walk to the shops wearing it, for example, and it might delete all the homeless people from your view and make the sky brighter.
  • “What we’re about to experience is, using these headsets in public, common ground disappears,”
  • “People will be in the same physical place, experiencing simultaneous, visually different versions of the world. We’re going to lose common ground.”
  • It’s not just the fact that our perception of reality might be altered that’s scary: it’s the fact that a small number of companies will have so much control over how we see the world. Think about how much influence big tech already has when it comes to content we see, and then multiply that a million times over. You think deepfakes are scary? Wait until they seem even more realistic.
  • We’re seeing a global rise of authoritarianism. If we’re not careful this sort of technology is going to massively accelerate it.
  • Being able to suck people into an alternate universe, numb them with entertainment, and dictate how they see reality? That’s an authoritarian’s dream. We’re entering an age where people can be mollified and manipulated like never before
Javier E

Opinion | The 100-Year Extinction Panic Is Back, Right on Schedule - The New York Times

  • The literary scholar Paul Saint-Amour has described the expectation of apocalypse — the sense that all history’s catastrophes and geopolitical traumas are leading us to “the prospect of an even more devastating futurity” — as the quintessential modern attitude. It’s visible everywhere in what has come to be known as the polycrisis.
  • Climate anxiety, of the sort expressed by that student, is driving new fields in psychology, experimental therapies and debates about what a recent New Yorker article called “the morality of having kids in a burning, drowning world.”
  • The conviction that the human species could be on its way out, extinguished by our own selfishness and violence, may well be the last bipartisan impulse.
  • a major extinction panic happened 100 years ago, and the similarities are unnerving.
  • The 1920s were also a period when the public — traumatized by a recent pandemic, a devastating world war and startling technological developments — was gripped by the conviction that humanity might soon shuffle off this mortal coil.
  • It also helps us see how apocalyptic fears feed off the idea that people are inherently violent, self-interested and hierarchical and that survival is a zero-sum war over resources.
  • Either way, it’s a cynical view that encourages us to take our demise as a foregone conclusion.
  • What makes an extinction panic a panic is the conviction that humanity is flawed and beyond redemption, destined to die at its own hand, the tragic hero of a terrestrial pageant for whom only one final act is possible
  • What the history of prior extinction panics has to teach us is that this pessimism is both politically questionable and questionably productive. Our survival will depend on our ability to recognize and reject the nihilistic appraisals of humanity that inflect our fears for the future, both left and right.
  • As a scholar who researches the history of Western fears about human extinction, I’m often asked how I avoid sinking into despair. My answer is always that learning about the history of extinction panics is actually liberating, even a cause for optimism
  • Nearly every generation has thought its generation was to be the last, and yet the human species has persisted
  • As a character in Jeanette Winterson’s novel “The Stone Gods” says, “History is not a suicide note — it is a record of our survival.”
  • Contrary to the folk wisdom that insists the years immediately after World War I were a period of good times and exuberance, dark clouds often hung over the 1920s. The dread of impending disaster — from another world war, the supposed corruption of racial purity and the prospect of automated labor — saturated the period
  • The previous year saw the publication of the first of several installments of what many would come to consider his finest literary achievement, “The World Crisis,” a grim retrospective of World War I that laid out, as Churchill put it, the “milestones to Armageddon.”
  • Bluntly titled “Shall We All Commit Suicide?,” the essay offered a dismal appraisal of humanity’s prospects. “Certain somber facts emerge solid, inexorable, like the shapes of mountains from drifting mist,” Churchill wrote. “Mankind has never been in this position before. Without having improved appreciably in virtue or enjoying wiser guidance, it has got into its hands for the first time the tools by which it can unfailingly accomplish its own extermination.”
  • The essay — with its declaration that “the story of the human race is war” and its dismay at “the march of science unfolding ever more appalling possibilities” — is filled with right-wing pathos and holds out little hope that mankind might possess the wisdom to outrun the reaper. This fatalistic assessment was shared by many, including those well to Churchill’s left.
  • “Are not we and they and all the race still just as much adrift in the current of circumstances as we were before 1914?” he wondered. Wells predicted that our inability to learn from the mistakes of the Great War would “carry our race on surely and inexorably to fresh wars, to shortages, hunger, miseries and social debacles, at last either to complete extinction or to a degradation beyond our present understanding.” Humanity, the don of sci-fi correctly surmised, was rushing headlong into a “scientific war” that would “make the biggest bombs of 1918 seem like little crackers.”
  • The pathbreaking biologist J.B.S. Haldane, another socialist, concurred with Wells’s view of warfare’s ultimate destination. In 1925, two decades before the Trinity test birthed an atomic sun over the New Mexico desert, Haldane, who experienced bombing firsthand during World War I, mused, “If we could utilize the forces which we now know to exist inside the atom, we should have such capacities for destruction that I do not know of any agency other than divine intervention which would save humanity from complete and peremptory annihilation.”
  • F.C.S. Schiller, a British philosopher and eugenicist, summarized the general intellectual atmosphere of the 1920s aptly: “Our best prophets are growing very anxious about our future. They are afraid we are getting to know too much and are likely to use our knowledge to commit suicide.”
  • Many of the same fears that keep A.I. engineers up at night — calibrating thinking machines to human values, concern that our growing reliance on technology might sap human ingenuity and even trepidation about a robot takeover — made their debut in the early 20th century.
  • The popular detective novelist R. Austin Freeman’s 1921 political treatise, “Social Decay and Regeneration,” warned that our reliance on new technologies was driving our species toward degradation and even annihilation
  • Extinction panics are, in both the literal and the vernacular senses, reactionary, animated by the elite’s anxiety about maintaining its privilege in the midst of societal change
  • There is a perverse comfort to dystopian thinking. The conviction that catastrophe is baked in relieves us of the moral obligation to act. But as the extinction panic of the 1920s shows us, action is possible, and these panics can recede
  • To whatever extent, then, that the diagnosis proved prophetic, it’s worth asking if it might have been at least partly self-fulfilling.
  • today’s problems are fundamentally new. So, too, must be our solutions
  • It is a tired observation that those who don’t know history are destined to repeat it. We live in a peculiar moment in which this wisdom is precisely inverted. Making it to the next century may well depend on learning from and repeating the tightrope walk — between technological progress and self-annihilation — that we have been doing for the past 100 years
  • We have gotten into the dangerous habit of outsourcing big issues — space exploration, clean energy, A.I. and the like — to private businesses and billionaires
  • That ideologically varied constellation of prominent figures shared a basic diagnosis of humanity and its prospects: that our species is fundamentally vicious and selfish and our destiny therefore bends inexorably toward self-destruction.
  • Less than a year after Churchill’s warning about the future of modern combat — “As for poison gas and chemical warfare,” he wrote, “only the first chapter has been written of a terrible book” — the 1925 Geneva Protocol was signed, an international agreement banning the use of chemical or biological weapons in combat. Despite the many horrors of World War II, chemical weapons were not deployed on European battlefields.
  • As for machine-age angst, there’s a lesson to learn there, too: Our panics are often puffed up, our predictions simply wrong
  • In 1928, H.G. Wells published a book titled “The Way the World Is Going,” with the modest subtitle “Guesses and Forecasts of the Years Ahead.” In the opening pages, he offered a summary of his age that could just as easily have been written about our turbulent 2020s. “Human life,” he wrote, “is different from what it has ever been before, and it is rapidly becoming more different.” He continued, “Perhaps never in the whole history of life before the present time, has there been a living species subjected to so fiercely urgent, many-sided and comprehensive a process of change as ours today. None at least that has survived. Transformation or extinction have been nature’s invariable alternatives. Ours is a species in an intense phase of transition.”
Javier E

Some Silicon Valley VCs Are Becoming More Conservative - The New York Times

  • The circle of Republican donors in the nation’s tech capital has long been limited to a few tech executives such as Scott McNealy, a founder of Sun Microsystems; Meg Whitman, a former chief executive of eBay; Carly Fiorina, a former chief executive of Hewlett-Packard; Larry Ellison, the executive chairman of Oracle; and Doug Leone, a former managing partner of Sequoia Capital.
  • But mostly, the tech industry cultivated close ties with Democrats. Al Gore, the former Democratic vice president, joined the venture capital firm Kleiner Perkins in 2007. Over the next decade, tech companies including Airbnb, Google, Uber and Apple eagerly hired former members of the Obama administration.
  • During that time, Democrats moved further to the left and demonized successful people who made a lot of money, further alienating some tech leaders, said Bradley Tusk, a venture capital investor and political strategist who supports Mr. Biden.
  • after Mr. Trump won the election that year, the world seemed to blame tech companies for his victory. The resulting “techlash” against Facebook and others caused some industry leaders to reassess their political views, a trend that continued through the social and political turmoil of the pandemic.
  • The start-up industry has also been in a downturn since 2022, with higher interest rates sending capital fleeing from risky bets and a dismal market for initial public offerings crimping opportunities for investors to cash in on their valuable investments.
  • Some investors said they were frustrated that [Mr. Biden’s] pick for chair of the Federal Trade Commission, Lina Khan, has aggressively moved to block acquisitions, one of the main ways venture capitalists make money. They said they were also unhappy that Mr. Biden’s pick for head of the Securities and Exchange Commission, Gary Gensler, had been hostile to cryptocurrency companies.
  • Last month, Mr. Sacks, Mr. Thiel, Elon Musk and other prominent investors attended an “anti-Biden” dinner in Hollywood, where attendees discussed fund-raising and ways to oppose Democrats.
  • Some also said they disliked Mr. Biden’s proposal in March to raise taxes, including a 25 percent “billionaire tax” on certain holdings that could include start-up stock, as well as a higher tax rate on profits from successful investments.
  • “If you keep telling someone over and over that they’re evil, they’re eventually not going to like that,” he said. “I see that in venture capital.”
  • Some tech investors are also fuming over how Mr. Biden has handled foreign affairs and other issues.
  • Mr. Andreessen, a founder of Andreessen Horowitz, a prominent Silicon Valley venture firm, said in a recent podcast that “there are real issues with the Biden administration.” Under Mr. Trump, he said, the S.E.C. and F.T.C. would be headed by “very different kinds of people.” But a Trump presidency would not necessarily be a “clean win” either, he added.
  • Mr. Sacks said at the tech conference last week that he thought such taxes could kill the start-up industry’s system of offering stock options to founders and employees. “It’s a good reason for Silicon Valley to think really hard about who it wants to vote for,” he said.
  • “Tech, venture capital and Silicon Valley are looking at the current state of affairs and saying, ‘I’m not happy with either of those options,’” he said. “‘I can no longer count on Democrats to support tech issues, and I can no longer count on Republicans to support business issues.’”
  • Ben Horowitz, a founder of Andreessen Horowitz, wrote in a blog post last year that the firm would back any politician who supported “an optimistic technology-enabled future” and oppose any who did not. Andreessen Horowitz has donated $22 million to Fairshake, a political action group focused on supporting crypto-friendly lawmakers.
  • Venture investors are also networking with lawmakers in Washington at events like the Hill & Valley conference in March, organized by Jacob Helberg, an adviser to Palantir, a tech company co-founded by Mr. Thiel. At that event, tech executives and investors lobbied lawmakers against A.I. regulations and asked for more government spending to support the technology’s development in the United States.
  • This month, Mr. Helberg, who is married to Mr. Rabois, donated $1 million to the Trump campaign
Javier E

Stanford's top disinformation research group collapses under pressure - The Washington Post - 0 views

  • The collapse of the five-year-old Observatory is the latest and largest of a series of setbacks to the community of researchers who try to detect propaganda and explain how false narratives are manufactured, gather momentum and become accepted by various groups
  • It follows Harvard’s dismissal of misinformation expert Joan Donovan, who in a December whistleblower complaint alleged the university’s close and lucrative ties with Facebook parent Meta led the university to clamp down on her work, which was highly critical of the social media giant’s practices.
  • Starbird said that while most academic studies of online manipulation look backward from much later, the Observatory’s “rapid analysis” helped people around the world understand what they were seeing on platforms as it happened.
  • Brown University professor Claire Wardle said the Observatory had created innovative methodology and trained the next generation of experts.
  • “Closing down a lab like this would always be a huge loss, but doing so now, during a year of global elections, makes absolutely no sense,” said Wardle, who previously led research at anti-misinformation nonprofit First Draft. “We need universities to use their resources and standing in the community to stand up to criticism and headlines.”
  • The study of misinformation has become increasingly controversial, and Stamos, DiResta and Starbird have been besieged by lawsuits, document requests and threats of physical harm. Leading the charge has been Rep. Jim Jordan (R-Ohio), whose House subcommittee alleges the Observatory improperly worked with federal officials and social media companies to violate the free-speech rights of conservatives.
  • In a joint statement, Stamos and DiResta said their work involved much more than elections, and that they had been unfairly maligned.
  • “The politically motivated attacks against our research on elections and vaccines have no merit, and the attempts by partisan House committee chairs to suppress First Amendment-protected research are a quintessential example of the weaponization of government,” they said.
  • Stamos founded the Observatory after publicizing that Russia had attempted to influence the 2016 election by sowing division on Facebook, causing a clash with the company’s top executives. Special counsel Robert S. Mueller III later cited the Facebook operation in indicting a Kremlin contractor. At Stanford, Stamos and his team deepened their study of influence operations from around the world, including one they traced to the Pentagon.
  • Stamos told associates he stepped back from leading the Observatory last year in part because the political pressure had taken a toll. Stamos had raised most of the money for the project, and the remaining faculty have not been able to replicate his success, as many philanthropic groups shift their focus to artificial intelligence and other, fresher topics.
  • In supporting the project further, the university would have risked alienating conservative donors, Silicon Valley figures, and members of Congress, who have threatened to stop all federal funding for disinformation research or cut back general support.
  • The Observatory’s non-election work has included developing curriculum for teaching college students about how to handle trust and safety issues on social media platforms and launching the first peer-reviewed journal dedicated to that field. It has also investigated rings publishing child sexual exploitation material online and flaws in the U.S. system for reporting it, helping to prepare platforms to handle an influx of computer-generated material.
Javier E

Ilya Sutskever, OpenAI Co-Founder Who Helped Oust Sam Altman, Starts His Own Company - The New York Times

  • The new start-up is called Safe Superintelligence. It aims to produce superintelligence — a machine that is more intelligent than humans — in a safe way, according to the company spokeswoman Lulu Cheng Meservey.
  • Last year, Dr. Sutskever helped create what was called a Superalignment team inside OpenAI that aimed to ensure that future A.I. technologies would not do harm. Like others in the field, he had grown increasingly concerned that A.I. could become dangerous and perhaps even destroy humanity.
  • Jan Leike, who ran the Superalignment team alongside Dr. Sutskever, has also resigned from OpenAI. He has since been hired by OpenAI’s competitor Anthropic, another company founded by former OpenAI researchers.
Javier E

OpenAI Whistle-Blowers Describe Reckless and Secretive Culture - The New York Times

  • A group of OpenAI insiders is blowing the whistle on what they say is a culture of recklessness and secrecy at the San Francisco artificial intelligence company, which is racing to build the most powerful A.I. systems ever created.
  • The group, which includes nine current and former OpenAI employees, has rallied in recent days around shared concerns that the company has not done enough to prevent its A.I. systems from becoming dangerous.
  • The members say OpenAI, which started as a nonprofit research lab and burst into public view with the 2022 release of ChatGPT, is putting a priority on profits and growth as it tries to build artificial general intelligence, or A.G.I., the industry term for a computer program capable of doing anything a human can.
  • They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign.
  • “OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there,” said Daniel Kokotajlo, a former researcher in OpenAI’s governance division and one of the group’s organizers.
  • Other members include William Saunders, a research engineer who left OpenAI in February, and three other former OpenAI employees: Carroll Wainwright, Jacob Hilton and Daniel Ziegler. Several current OpenAI employees endorsed the letter anonymously because they feared retaliation from the company.
  • At OpenAI, Mr. Kokotajlo saw that even though the company had safety protocols in place — including a joint effort with Microsoft known as the “deployment safety board,” which was supposed to review new models for major risks before they were publicly released — they rarely seemed to slow anything down.
  • So was the departure of Dr. Leike, who along with Dr. Sutskever had led OpenAI’s “superalignment” team, which focused on managing the risks of powerful A.I. models. In a series of public posts announcing his departure, Dr. Leike said he believed that “safety culture and processes have taken a back seat to shiny products.”
  • “When I signed up for OpenAI, I did not sign up for this attitude of ‘Let’s put things out into the world and see what happens and fix them afterward,’” Mr. Saunders said.
  • Mr. Kokotajlo, 31, joined OpenAI in 2022 as a governance researcher and was asked to forecast A.I. progress. He was not, to put it mildly, optimistic. In his previous job at an A.I. safety organization, he predicted that A.G.I. might arrive in 2050. But after seeing how quickly A.I. was improving, he shortened his timelines. Now he believes there is a 50 percent chance that A.G.I. will arrive by 2027 — in just three years.
  • He also believes that the probability that advanced A.I. will destroy or catastrophically harm humanity — a grim statistic often shortened to “p(doom)” in A.I. circles — is 70 percent.
  • Last month, two senior A.I. researchers — Ilya Sutskever and Jan Leike — left OpenAI under a cloud. Dr. Sutskever, who had been on OpenAI’s board and voted to fire Mr. Altman, had raised alarms about the potential risks of powerful A.I. systems. His departure was seen by some safety-minded employees as a setback.
  • Mr. Kokotajlo said he became so worried that, last year, he told Mr. Altman that the company should “pivot to safety” and spend more time and resources guarding against A.I.’s risks rather than charging ahead to improve its models. He said that Mr. Altman had claimed to agree with him, but that nothing much changed.
  • In April, he quit. In an email to his team, he said he was leaving because he had “lost confidence that OpenAI will behave responsibly” as its systems approach human-level intelligence.
  • “The world isn’t ready, and we aren’t ready,” Mr. Kokotajlo wrote. “And I’m concerned we are rushing forward regardless and rationalizing our actions.”
  • On his way out, Mr. Kokotajlo refused to sign OpenAI’s standard paperwork for departing employees, which included a strict nondisparagement clause barring them from saying negative things about the company, or else risk having their vested equity taken away.
  • Many employees could lose out on millions of dollars if they refused to sign. Mr. Kokotajlo’s vested equity was worth roughly $1.7 million, he said, which amounted to the vast majority of his net worth, and he was prepared to forfeit all of it.
  • Mr. Altman said he was “genuinely embarrassed” not to have known about the agreements, and the company said it would remove nondisparagement clauses from its standard paperwork and release former employees from their agreements.
  • In their open letter, Mr. Kokotajlo and the other former OpenAI employees call for an end to using nondisparagement and nondisclosure agreements at OpenAI and other A.I. companies.
  • “Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,”
  • They also call for A.I. companies to “support a culture of open criticism” and establish a reporting process for employees to anonymously raise safety-related concerns.
  • They have retained a pro bono lawyer, Lawrence Lessig, the prominent legal scholar and activist
  • Mr. Kokotajlo and his group are skeptical that self-regulation alone will be enough to prepare for a world with more powerful A.I. systems. So they are calling for lawmakers to regulate the industry, too.
  • “There needs to be some sort of democratically accountable, transparent governance structure in charge of this process,” Mr. Kokotajlo said. “Instead of just a couple of different private companies racing with each other, and keeping it all secret.”