History Readings: Group items tagged “confidence”

OpenAI Whistle-Blowers Describe Reckless and Secretive Culture - The New York Times

  • A group of OpenAI insiders is blowing the whistle on what they say is a culture of recklessness and secrecy at the San Francisco artificial intelligence company, which is racing to build the most powerful A.I. systems ever created.
  • The group, which includes nine current and former OpenAI employees, has rallied in recent days around shared concerns that the company has not done enough to prevent its A.I. systems from becoming dangerous.
  • The members say OpenAI, which started as a nonprofit research lab and burst into public view with the 2022 release of ChatGPT, is putting a priority on profits and growth as it tries to build artificial general intelligence, or A.G.I., the industry term for a computer program capable of doing anything a human can.
  • They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign.
  • “OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there,” said Daniel Kokotajlo, a former researcher in OpenAI’s governance division and one of the group’s organizers.
  • Other members include William Saunders, a research engineer who left OpenAI in February, and three other former OpenAI employees: Carroll Wainwright, Jacob Hilton and Daniel Ziegler. Several current OpenAI employees endorsed the letter anonymously because they feared retaliation from the company.
  • At OpenAI, Mr. Kokotajlo saw that even though the company had safety protocols in place — including a joint effort with Microsoft known as the “deployment safety board,” which was supposed to review new models for major risks before they were publicly released — they rarely seemed to slow anything down.
  • Last month, two senior A.I. researchers — Ilya Sutskever and Jan Leike — left OpenAI under a cloud. Dr. Sutskever, who had been on OpenAI’s board and voted to fire Mr. Altman, had raised alarms about the potential risks of powerful A.I. systems. His departure was seen by some safety-minded employees as a setback.
  • “When I signed up for OpenAI, I did not sign up for this attitude of ‘Let’s put things out into the world and see what happens and fix them afterward,’” Mr. Saunders said.
  • Mr. Kokotajlo, 31, joined OpenAI in 2022 as a governance researcher and was asked to forecast A.I. progress. He was not, to put it mildly, optimistic. In his previous job at an A.I. safety organization, he predicted that A.G.I. might arrive in 2050. But after seeing how quickly A.I. was improving, he shortened his timelines. Now he believes there is a 50 percent chance that A.G.I. will arrive by 2027 — in just three years.
  • He also believes that the probability that advanced A.I. will destroy or catastrophically harm humanity — a grim statistic often shortened to “p(doom)” in A.I. circles — is 70 percent.
  • The departure of Dr. Leike, who along with Dr. Sutskever had led OpenAI’s “superalignment” team, which focused on managing the risks of powerful A.I. models, was seen as a setback as well. In a series of public posts announcing his departure, Dr. Leike said he believed that “safety culture and processes have taken a back seat to shiny products.”
  • Mr. Kokotajlo said he became so worried that, last year, he told Mr. Altman that the company should “pivot to safety” and spend more time and resources guarding against A.I.’s risks rather than charging ahead to improve its models. He said that Mr. Altman had claimed to agree with him, but that nothing much changed.
  • In April, he quit. In an email to his team, he said he was leaving because he had “lost confidence that OpenAI will behave responsibly” as its systems approach human-level intelligence.
  • “The world isn’t ready, and we aren’t ready,” Mr. Kokotajlo wrote. “And I’m concerned we are rushing forward regardless and rationalizing our actions.”
  • On his way out, Mr. Kokotajlo refused to sign OpenAI’s standard paperwork for departing employees, which included a strict nondisparagement clause barring them from saying negative things about the company, or else risk having their vested equity taken away.
  • Many employees could lose out on millions of dollars if they refused to sign. Mr. Kokotajlo’s vested equity was worth roughly $1.7 million, he said, which amounted to the vast majority of his net worth, and he was prepared to forfeit all of it.
  • Mr. Altman said he was “genuinely embarrassed” not to have known about the agreements, and the company said it would remove nondisparagement clauses from its standard paperwork and release former employees from their agreements.
  • In their open letter, Mr. Kokotajlo and the other former OpenAI employees call for an end to using nondisparagement and nondisclosure agreements at OpenAI and other A.I. companies.
  • “Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,” the letter says.
  • They also call for A.I. companies to “support a culture of open criticism” and establish a reporting process for employees to anonymously raise safety-related concerns.
  • They have retained a pro bono lawyer, Lawrence Lessig, the prominent legal scholar and activist.
  • Mr. Kokotajlo and his group are skeptical that self-regulation alone will be enough to prepare for a world with more powerful A.I. systems. So they are calling for lawmakers to regulate the industry, too.
  • “There needs to be some sort of democratically accountable, transparent governance structure in charge of this process,” Mr. Kokotajlo said. “Instead of just a couple of different private companies racing with each other, and keeping it all secret.”

Opinion | The Reason People Aren't Telling Joe Biden the Truth - The New York Times

  • They entered with courage and exited as cowards. In the past two weeks, several leaders have told me they arrived at meetings with President Biden planning to have serious discussions about whether he should withdraw from the 2024 election. They all chickened out.
  • There’s a gap between what people say behind the president’s back and what they say to his face. Instead of dissent and debate, they’re falling victim to groupthink.
  • According to the original theory, groupthink happens when people become so cohesive and close-knit that they put harmony above honesty. Extensive evidence has debunked that idea.
  • The root causes of silence are not social solidarity but fear and futility. People bite their tongues when they doubt that it’s safe and worthwhile to speak up. Leaders who want to make informed decisions need to make it clear they value candid input.
  • Mr. Biden has done the opposite, declaring first that only the Lord almighty could change his mind and then saying that he’ll drop out only if polls say there’s no way for him to win. That sends a strong message: if you’re not an immortal being or a time traveler from the future, it’s pointless to share any concerns about the viability of his candidacy.
  • I’ve reminded them that they’re lucky to have a president who doesn’t punish dissenters with an indefinite prison sentence or a trial for treason. But each of them assumes that someone else will speak up. That diffusion of responsibility is a recipe for groupthink: if everyone leaves it to someone else, no one will end up speaking up.
  • Although it can help to assign devil’s advocates, it’s more effective to unearth them. Genuine dissenters argue more convincingly and get taken more seriously.
  • It’s time for Mr. Biden’s team to run an anonymous poll of advisers, governors and lawmakers. The results of the poll could be given to an honest broker — someone with a vested interest in winning the election rather than appeasing the president.
  • To avoid pressure from the top, I might try a fishbowl format, asking Mr. Biden to listen first and speak last.
  • Over the past week, I’ve raised these ideas with several leaders close to the president who reached out for advice. They’ve each made it clear that they’re afraid to put their relationships on the line and that they don’t think Mr. Biden will listen to them.
  • Showing openness can raise people’s confidence, but it’s not always enough to quell their fear. In our research, Constantinos Coutifaris and I found that it helps for leaders to criticize themselves out loud. That way, instead of just claiming that they want the truth, they can show that they can handle the truth.
  • “President Biden, I know you believe that politicians shouldn’t let hubris cloud their judgment. I’m worried that people are telling you what you want to hear, not what you need to hear. We know the good things that could happen if you run and win, but we also need to discuss the good things that could happen if you don’t run. You could be hailed as a hero like George Washington for choosing not to seek another term. Regardless of the result, you could make history through your selfless stewardship of the next generation. Personally, I don’t know if that’s the right decision. I just want to make sure it gets due consideration. Would you be open to hosting a meeting to hear the dissenting views?”