Some Silicon Valley VCs Are Becoming More Conservative - The New York Times

  • The circle of Republican donors in the nation’s tech capital has long been limited to a few tech executives such as Scott McNealy, a founder of Sun Microsystems; Meg Whitman, a former chief executive of eBay; Carly Fiorina, a former chief executive of Hewlett-Packard; Larry Ellison, the executive chairman of Oracle; and Doug Leone, a former managing partner of Sequoia Capital.
  • But mostly, the tech industry cultivated close ties with Democrats. Al Gore, the former Democratic vice president, joined the venture capital firm Kleiner Perkins in 2007. Over the next decade, tech companies including Airbnb, Google, Uber and Apple eagerly hired former members of the Obama administration.
  • During that time, Democrats moved further to the left and demonized successful people who made a lot of money, further alienating some tech leaders, said Bradley Tusk, a venture capital investor and political strategist who supports Mr. Biden.
  • After Mr. Trump won the 2016 election, the world seemed to blame tech companies for his victory. The resulting “techlash” against Facebook and others caused some industry leaders to reassess their political views, a trend that continued through the social and political turmoil of the pandemic.
  • The start-up industry has also been in a downturn since 2022, with higher interest rates sending capital fleeing from risky bets and a dismal market for initial public offerings crimping opportunities for investors to cash in on their valuable investments.
  • Some investors said they were frustrated that Mr. Biden’s pick for chair of the Federal Trade Commission, Lina Khan, has aggressively moved to block acquisitions, one of the main ways venture capitalists make money. They said they were also unhappy that his pick for head of the Securities and Exchange Commission, Gary Gensler, had been hostile to cryptocurrency companies.
  • Last month, Mr. Sacks, Mr. Thiel, Elon Musk and other prominent investors attended an “anti-Biden” dinner in Hollywood, where attendees discussed fund-raising and ways to oppose Democrats.
  • Some also said they disliked Mr. Biden’s proposal in March to raise taxes, including a 25 percent “billionaire tax” on certain holdings that could include start-up stock, as well as a higher tax rate on profits from successful investments.
  • “If you keep telling someone over and over that they’re evil, they’re eventually not going to like that,” he said. “I see that in venture capital.”
  • Some tech investors are also fuming over how Mr. Biden has handled foreign affairs and other issues.
  • Mr. Andreessen, a founder of Andreessen Horowitz, a prominent Silicon Valley venture firm, said in a recent podcast that “there are real issues with the Biden administration.” Under Mr. Trump, he said, the S.E.C. and F.T.C. would be headed by “very different kinds of people.” But a Trump presidency would not necessarily be a “clean win” either, he added.
  • Mr. Sacks said at the tech conference last week that he thought such taxes could kill the start-up industry’s system of offering stock options to founders and employees. “It’s a good reason for Silicon Valley to think really hard about who it wants to vote for,” he said.
  • “Tech, venture capital and Silicon Valley are looking at the current state of affairs and saying, ‘I’m not happy with either of those options,’” he said. “‘I can no longer count on Democrats to support tech issues, and I can no longer count on Republicans to support business issues.’”
  • Ben Horowitz, a founder of Andreessen Horowitz, wrote in a blog post last year that the firm would back any politician who supported “an optimistic technology-enabled future” and oppose any who did not. Andreessen Horowitz has donated $22 million to Fairshake, a political action group focused on supporting crypto-friendly lawmakers.
  • Venture investors are also networking with lawmakers in Washington at events like the Hill & Valley conference in March, organized by Jacob Helberg, an adviser to Palantir, a tech company co-founded by Mr. Thiel. At that event, tech executives and investors lobbied lawmakers against A.I. regulations and asked for more government spending to support the technology’s development in the United States.
  • This month, Mr. Helberg, who is married to Mr. Rabois, donated $1 million to the Trump campaign.

Stanford's top disinformation research group collapses under pressure - The Washington Post

  • The collapse of the five-year-old Observatory is the latest and largest of a series of setbacks to the community of researchers who try to detect propaganda and explain how false narratives are manufactured, gather momentum and become accepted by various groups.
  • It follows Harvard’s dismissal of misinformation expert Joan Donovan, who in a December whistleblower complaint alleged that the university’s close and lucrative ties with Facebook parent Meta led the university to clamp down on her work, which was highly critical of the social media giant’s practices.
  • Starbird said that while most academic studies of online manipulation look backward from much later, the Observatory’s “rapid analysis” helped people around the world understand what they were seeing on platforms as it happened.
  • Brown University professor Claire Wardle said the Observatory had created innovative methodology and trained the next generation of experts.
  • “Closing down a lab like this would always be a huge loss, but doing so now, during a year of global elections, makes absolutely no sense,” said Wardle, who previously led research at anti-misinformation nonprofit First Draft. “We need universities to use their resources and standing in the community to stand up to criticism and headlines.”
  • The study of misinformation has become increasingly controversial, and Stamos, DiResta and Starbird have been besieged by lawsuits, document requests and threats of physical harm. Leading the charge has been Rep. Jim Jordan (R-Ohio), whose House subcommittee alleges the Observatory improperly worked with federal officials and social media companies to violate the free-speech rights of conservatives.
  • In a joint statement, Stamos and DiResta said their work involved much more than elections, and that they had been unfairly maligned.
  • “The politically motivated attacks against our research on elections and vaccines have no merit, and the attempts by partisan House committee chairs to suppress First Amendment-protected research are a quintessential example of the weaponization of government,” they said.
  • Stamos founded the Observatory after publicizing that Russia had attempted to influence the 2016 election by sowing division on Facebook, causing a clash with the company’s top executives. Special counsel Robert S. Mueller III later cited the Facebook operation in indicting a Kremlin contractor. At Stanford, Stamos and his team deepened the study of influence operations from around the world, including one they traced to the Pentagon.
  • Stamos told associates he stepped back from leading the Observatory last year in part because the political pressure had taken a toll. Stamos had raised most of the money for the project, and the remaining faculty have not been able to replicate his success, as many philanthropic groups shift their focus to artificial intelligence and other, fresher topics.
  • By supporting the project further, the university would have risked alienating conservative donors, Silicon Valley figures, and members of Congress, who have threatened to stop all federal funding for disinformation research or cut back general support.
  • The Observatory’s non-election work has included developing a curriculum for teaching college students how to handle trust and safety issues on social media platforms and launching the first peer-reviewed journal dedicated to that field. It has also investigated rings publishing child sexual exploitation material online and flaws in the U.S. system for reporting it, helping to prepare platforms to handle an influx of computer-generated material.

Ilya Sutskever, OpenAI Co-Founder Who Helped Oust Sam Altman, Starts His Own Company - ...

  • The new start-up is called Safe Superintelligence. It aims to produce superintelligence — a machine that is more intelligent than humans — in a safe way, according to the company spokeswoman Lulu Cheng Meservey.
  • Last year, Dr. Sutskever helped create what was called a Superalignment team inside OpenAI that aimed to ensure that future A.I. technologies would not do harm. Like others in the field, he had grown increasingly concerned that A.I. could become dangerous and perhaps even destroy humanity.
  • Jan Leike, who ran the Superalignment team alongside Dr. Sutskever, has also resigned from OpenAI. He has since been hired by OpenAI’s competitor Anthropic, another company founded by former OpenAI researchers.

OpenAI Whistle-Blowers Describe Reckless and Secretive Culture - The New York Times

  • A group of OpenAI insiders is blowing the whistle on what they say is a culture of recklessness and secrecy at the San Francisco artificial intelligence company, which is racing to build the most powerful A.I. systems ever created.
  • The group, which includes nine current and former OpenAI employees, has rallied in recent days around shared concerns that the company has not done enough to prevent its A.I. systems from becoming dangerous.
  • The members say OpenAI, which started as a nonprofit research lab and burst into public view with the 2022 release of ChatGPT, is putting a priority on profits and growth as it tries to build artificial general intelligence, or A.G.I., the industry term for a computer program capable of doing anything a human can.
  • They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign.
  • “OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there,” said Daniel Kokotajlo, a former researcher in OpenAI’s governance division and one of the group’s organizers.
  • Other members include William Saunders, a research engineer who left OpenAI in February, and three other former OpenAI employees: Carroll Wainwright, Jacob Hilton and Daniel Ziegler. Several current OpenAI employees endorsed the letter anonymously because they feared retaliation from the company.
  • At OpenAI, Mr. Kokotajlo saw that even though the company had safety protocols in place — including a joint effort with Microsoft known as the “deployment safety board,” which was supposed to review new models for major risks before they were publicly released — they rarely seemed to slow anything down.
  • The departure of Dr. Leike, who along with Dr. Sutskever had led OpenAI’s “superalignment” team, which focused on managing the risks of powerful A.I. models, was also seen as a setback. In a series of public posts announcing his departure, Dr. Leike said he believed that “safety culture and processes have taken a back seat to shiny products.”
  • “When I signed up for OpenAI, I did not sign up for this attitude of ‘Let’s put things out into the world and see what happens and fix them afterward,’” Mr. Saunders said.
  • Mr. Kokotajlo, 31, joined OpenAI in 2022 as a governance researcher and was asked to forecast A.I. progress. He was not, to put it mildly, optimistic. In his previous job at an A.I. safety organization, he predicted that A.G.I. might arrive in 2050. But after seeing how quickly A.I. was improving, he shortened his timelines. Now he believes there is a 50 percent chance that A.G.I. will arrive by 2027 — in just three years.
  • He also believes that the probability that advanced A.I. will destroy or catastrophically harm humanity — a grim statistic often shortened to “p(doom)” in A.I. circles — is 70 percent.
  • Last month, two senior A.I. researchers — Ilya Sutskever and Jan Leike — left OpenAI under a cloud. Dr. Sutskever, who had been on OpenAI’s board and voted to fire Mr. Altman, had raised alarms about the potential risks of powerful A.I. systems. His departure was seen by some safety-minded employees as a setback.
  • Mr. Kokotajlo said he became so worried that, last year, he told Mr. Altman that the company should “pivot to safety” and spend more time and resources guarding against A.I.’s risks rather than charging ahead to improve its models. He said that Mr. Altman had claimed to agree with him, but that nothing much changed.
  • In April, he quit. In an email to his team, he said he was leaving because he had “lost confidence that OpenAI will behave responsibly” as its systems approach human-level intelligence.
  • “The world isn’t ready, and we aren’t ready,” Mr. Kokotajlo wrote. “And I’m concerned we are rushing forward regardless and rationalizing our actions.”
  • On his way out, Mr. Kokotajlo refused to sign OpenAI’s standard paperwork for departing employees, which included a strict nondisparagement clause barring them from saying negative things about the company, or else risk having their vested equity taken away.
  • Many employees could lose out on millions of dollars if they refused to sign. Mr. Kokotajlo’s vested equity was worth roughly $1.7 million, he said, which amounted to the vast majority of his net worth, and he was prepared to forfeit all of it.
  • Mr. Altman said he was “genuinely embarrassed” not to have known about the agreements, and the company said it would remove nondisparagement clauses from its standard paperwork and release former employees from their agreements.
  • In their open letter, Mr. Kokotajlo and the other former OpenAI employees call for an end to using nondisparagement and nondisclosure agreements at OpenAI and other A.I. companies.
  • “Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”
  • They also call for A.I. companies to “support a culture of open criticism” and establish a reporting process for employees to anonymously raise safety-related concerns.
  • They have retained a pro bono lawyer, Lawrence Lessig, the prominent legal scholar and activist.
  • Mr. Kokotajlo and his group are skeptical that self-regulation alone will be enough to prepare for a world with more powerful A.I. systems. So they are calling for lawmakers to regulate the industry, too.
  • “There needs to be some sort of democratically accountable, transparent governance structure in charge of this process,” Mr. Kokotajlo said. “Instead of just a couple of different private companies racing with each other, and keeping it all secret.”