
History Readings / Group items tagged safety


Javier E

Opinion | Artificial Intelligence Requires Specific Safety Rules - The New York Times - 0 views

  • For about five years, OpenAI used a system of nondisclosure agreements to stifle public criticism from outgoing employees. Current and former OpenAI staffers were paranoid about talking to the press. In May, one departing employee refused to sign and went public in The Times. The company apologized and scrapped the agreements. Then the floodgates opened. Exiting employees began criticizing OpenAI’s safety practices, and a wave of articles emerged about its broken promises.
  • These stories came from people who were willing to risk their careers to inform the public. How many more are silenced because they’re too scared to speak out? Since existing whistle-blower protections typically cover only the reporting of illegal conduct, they are inadequate here. Artificial intelligence can be dangerous without being illegal
  • A.I. needs stronger protections — like those in place in parts of the public sector, finance and publicly traded companies — that prohibit retaliation and establish anonymous reporting channels.
  • The company’s chief executive was briefly fired after the nonprofit board lost trust in him.
  • OpenAI has spent the last year mired in scandal
  • Whistle-blowers alleged to the Securities and Exchange Commission that OpenAI’s nondisclosure agreements were illegal.
  • Safety researchers have left the company in droves
  • Now the firm is restructuring its core business as a for-profit, seemingly prompting the departure of more key leaders
  • On Friday, The Wall Street Journal reported that OpenAI rushed testing of a major model in May, attempting to undercut a rival’s publicity; after the release, employees found that the model exceeded the company’s standards for safety. (The company told The Journal the findings were the result of a methodological flaw.)
  • This behavior would be concerning in any industry, but according to OpenAI itself, A.I. poses unique risks. The leaders of the top A.I. firms and leading A.I. researchers have warned that the technology could lead to human extinction.
  • Since more comprehensive national A.I. regulations aren’t coming anytime soon, we need a narrow federal law allowing employees to disclose information to Congress if they reasonably believe that an A.I. model poses a significant safety risk
  • Many companies maintain a culture of secrecy beyond what is healthy. I once worked at the consulting firm McKinsey on a team that advised Immigration and Customs Enforcement on implementing Donald Trump’s inhumane immigration policies. I was fearful of going public.
  • But McKinsey did not hold the majority of employees’ compensation hostage in exchange for signing lifetime nondisparagement agreements, as OpenAI did.
  • People reporting violations of the Atomic Energy Act have more robust whistle-blower protections than those in most fields, while those working in biological toxins for several government departments are protected by proactive, pro-reporting guidance. A.I. workers need similar rules.
  • Congress should establish a special inspector general to serve as a point of contact for these whistle-blowers. The law should mandate companies to notify staff about the channels available to them, which they can use without facing retaliation.
  • Earlier this month, OpenAI released a highly advanced new model. For the first time, experts concluded the model could aid in the construction of a bioweapon more effectively than internet research alone could. A third party hired by the company found that the new system demonstrated evidence of “power seeking” and “the basic capabilities needed to do simple in-context scheming.”
  • OpenAI decided to publish these results, but the company still chooses what information to share. It is possible the published information paints an incomplete picture of the model’s risks.
  • The A.I. safety researcher Todor Markov — who recently left OpenAI after nearly six years with the firm — suggested one hypothetical scenario. An A.I. company promises to test its models for dangerous capabilities, then cherry-picks results to make the model look safe. A concerned employee wants to notify someone, but doesn’t know who — and can’t point to a specific law being broken. The new model is released, and a terrorist uses it to construct a novel bioweapon. Multiple former OpenAI employees told me this scenario is plausible.
  • The United States’ current arrangement of managing A.I. risks through voluntary commitments places enormous trust in the companies developing this potentially dangerous technology. Unfortunately, the industry in general — and OpenAI in particular — has shown itself to be unworthy of that trust, time and again.
  • The fate of the first attempt to protect A.I. whistle-blowers rests with Governor Gavin Newsom of California. Mr. Newsom has hinted that he will veto a first-of-its-kind A.I. safety bill, called S.B. 1047, which mandates that the largest A.I. companies implement safeguards to prevent catastrophes. The bill also features whistle-blower protections, a rare point of agreement between its supporters and its critics.
  • if those legislators are serious in their support for these protections, they should introduce a federal A.I. whistle-blower protection bill. They are well positioned to do so: The letter’s organizer, Representative Zoe Lofgren, is the ranking Democrat on the House Committee on Science, Space and Technology.
  • Last month, a group of leading A.I. experts warned that as the technology rapidly progresses, “we face growing risks that A.I. could be misused to attack critical infrastructure, develop dangerous weapons or cause other forms of catastrophic harm.” These risks aren’t necessarily criminal, but they are real — and they could prove deadly. If that happens, employees at OpenAI and other companies will be the first to know. But will they tell us?
Javier E

Lessons of the Great Recession: How the Safety Net Performed - NYTimes.com - 0 views

  • it’s none too soon to begin asking the question: what have we learned about economic policy in this crash that should inform our thinking for the next downturn? 
  • Let’s start with the safety net since it’s a fixture of advanced economies and serves the critical function of catching (or not) the most economically vulnerable when the market fails
  • For many of today’s conservatives, the increased use of a safety-net program is proof that there’s something wrong with the user, not the underlying economy.
  • But while people do abuse safety nets — and not just poor people (think bank bailouts and special tax treatment of multinational corporations) — I want to see receipt of unemployment insurance, the rolls of the Supplemental Nutrition Assistance Program (food stamps), and so on go up in recessions.  In fact, their failure to do so would be a sign that something’s very wrong, like an air bag that failed to deploy in a crash.
  • There are two reasons that T.A.N.F. was so unresponsive.  First, welfare reform in the mid-1990s significantly increased its work requirements
  • Second, T.A.N.F. was “block granted,” meaning states receive a fixed amount that is largely insensitive to recessions
  • it is a fixture of conservative policy on poverty to apply this same block grant strategy to food stamps and Medicaid.  The numbers and the chart above show this to be a recipe for inelastic response to recession, or, more plainly, a great way to cut some big holes in the safety net.
  • The official rate for children goes up over the recession, from 18 percent to 22 percent, but once you include the full force of safety-net (and Recovery Act) measures that kicked in, it holds steady at about 15 percent.
  • this figure provides strong evidence of the effectiveness of the American safety net in the worst recession since the Depression.
  • because the recession is receding, shouldn’t the SNAP rolls be coming down as well?
  • SNAP rolls remain elevated because their function remains critical in what’s still a tough job market for low-income households. 
  • the fact is that markets fail, and when they do, income and food supports must rise to protect the most economically vulnerable families.
  • let’s get this straight: the poor and their advocates were not the ones who tanked the economy.  Nor should they be on the defensive when the safety net expands to offset some of the damage.  The right question at such times is thus not why the SNAP rolls are so high.  It’s whether SNAP, unemployment insurance, T.A.N.F. et al are expanding adequately to meet the needs of the poor.
Javier E

Who Turned My Blue State Red? - The New York Times - 0 views

  • IT is one of the central political puzzles of our time: Parts of the country that depend on the safety-net programs supported by Democrats are increasingly voting for Republicans who favor shredding that net.
  • The temptation for coastal liberals is to shake their heads over those godforsaken white-working-class provincials who are voting against their own interests.
  • this reaction misses the complexity of the political dynamic that’s taken hold in these parts of the country. It misdiagnoses the Democratic Party’s growing conundrum with working-class white voters. And it also keeps us from fully grasping what’s going on in communities where conditions have deteriorated
  • the people who most rely on the safety-net programs secured by Democrats are, by and large, not voting against their own interests by electing Republicans. Rather, they are not voting, period. They have, as voting data, surveys and my own reporting suggest, become profoundly disconnected from the political process.
  • The people in these communities who are voting Republican in larger proportions are those who are a notch or two up the economic ladder — the sheriff’s deputy, the teacher, the highway worker, the motel clerk, the gas station owner and the coal miner. And their growing allegiance to the Republicans is, in part, a reaction against what they perceive, among those below them on the economic ladder, as a growing dependency on the safety net, the most visible manifestation of downward mobility in their declining towns.
  • After having her first child as a teenager, marrying young and divorcing, Ms. Dougherty had faced bleak prospects. But she had gotten safety-net support — most crucially, taxpayer-funded tuition breaks to attend community college, where she’d earned her nursing degree.
  • She landed a steady job at a nearby dialysis center and remarried. But this didn’t make her a lasting supporter of safety-net programs like those that helped her. Instead, Ms. Dougherty had become a staunch opponent of them. She was reacting, she said, against the sense of entitlement she saw on display at the dialysis center
  • “People waltz in when they want to,” she said, explaining that, in her opinion, there was too little asked of patients. There was nothing that said “‘You’re getting a great benefit here, why not put in a little bit yourself.’ ” At least when she got her tuition help, she said, she had to keep up her grades. “When you’re getting assistance, there should be hoops to jump through so that you’re paying a price for your behavior,” she said. “What’s wrong with that?”
  • these voters are consciously opting against a Democratic economic agenda that they see as bad for them and good for other people — specifically, those undeserving benefit-recipients who live nearby.
  • Where opposition to the social safety net has long been fed by the specter of undeserving inner-city African-Americans — think of Ronald Reagan’s notorious “welfare queen” — in places like Pike County it’s fueled, more and more, by people’s resentment over rising dependency they see among their own neighbors, even their own families.
  • The political upshot is plain, Mr. Cauley added. “It’s not the people on the draw that’s voting against” the Democrats, he said. “It’s everyone else.”
  • THAT pattern is right in line with surveys, which show a decades-long decline in support for redistributive policies and an increase in conservatism in the electorate even as inequality worsens. There has been a particularly sharp drop in support for redistribution among older Americans,
  • researchers such as Kathryn Edin, of Johns Hopkins University, found a tendency by many Americans in the second lowest quintile of the income ladder — the working or lower-middle class — to dissociate themselves from those at the bottom, where many once resided. “There’s this virulent social distancing — suddenly, you’re a worker and anyone who is not a worker is a bad person,” said Professor Edin. “They’re playing to the middle fifth and saying, ‘I’m not those people.’ ”
  • Meanwhile, many people who in fact most use and need social benefits are simply not voting at all. Voter participation is low among the poorest Americans, and in many parts of the country that have moved red, the rates have fallen off the charts. West Virginia ranked 50th for turnout in 2012; also in the bottom 10 were other states that have shifted sharply red in recent years, including Kentucky, Arkansas and Tennessee.
  • This political disconnect among lower-income Americans has huge ramifications — polls find nonvoters are far more likely to favor spending on the poor and on government services than are voters, and the gap grows even larger among poor nonvoters
  • low turnout by poor Kentuckians explained why the state’s Obamacare gains wouldn’t help Democrats. “I remember being in the room when Jennings was asked whether or not Republicans were afraid of the electoral consequences of displacing 400,000-500,000 people who have insurance,” State Auditor Adam Edelen, a Democrat who lost his re-election bid this year, told Joe Sonka, a Louisville journalist. “And he simply said, ‘People on Medicaid don’t vote.’”
  • Republicans, of course, would argue that the shift in their direction among voters slightly higher up the ladder is the natural progression of things — people recognize that government programs are prolonging the economic doldrums and that Republicans have a better economic program.
  • it means redoubling efforts to mobilize the people who benefit from the programs. This is no easy task with the rural poor, who are much more geographically scattered than their urban counterparts. Not helping matters in this regard is the decline of local institutions like labor unions — while the United Mine Workers of America once drove turnout in coal country, today there is not a single unionized mine still operating in Kentucky.
  • it also means reckoning with the other half of the dynamic — finding ways to reduce the resentment that those slightly higher on the income ladder feel toward dependency in their midst. One way to do this is to make sure the programs are as tightly administered as possible. Instances of fraud and abuse are far rarer than welfare opponents would have one believe, but it only takes a few glaring instances to create a lasting impression
  • The best way to reduce resentment, though, would be to bring about true economic growth in the areas where the use of government benefits is on the rise,
Javier E

Opinion | Gun Safety Must Be Everything That Republicans Fear - The New York Times - 0 views

  • I find that the gun safety debate lacks candor.
  • People believe it is savvier to tell only part of the truth, to soft-pedal the sell in an effort to get something — anything — done.
  • The truth that no one wants to tell — the one that opponents of gun safety laws understand and the reason so many of them resist new laws — is that no one law or single package of laws will be enough to solve America’s gun violence problem.
  • The solution will have to be a nonstop parade of laws, with new ones passed as they are deemed necessary, ad infinitum. In the same way that Republicans have been promoting gun proliferation and loosening gun laws for decades, gun safety advocates will have to do the opposite, also for decades.
  • Individual laws, like federal universal background checks and bans on assault rifles and high-capacity magazines, will most likely make a dent, but they cannot end gun violence. Invariably, more mass shootings will occur that none of those laws would have prevented.
  • Opponents of gun safety will inevitably use those shootings to argue that the liberal efforts to prevent gun violence were ineffective. You can hear it now: “They told us that all we needed to do was to pass these laws and the massacres would stop. They haven’t.”
  • It makes people fearful and convinces them that guns provide security, and that more guns equate to even more security. But in fact, the escalation of gun ownership makes society less safe.
  • But I am on the same page as they are on one point. They see the passage of gun safety laws as a slippery slope that could lead to more sweeping laws and even, one day, national gun registries, insurance requirements and bans. I see the same and I actively hope for it.
  • When I hear Democratic politicians contorting their statements so it sounds like they’re promoting gun ownership while also promoting gun safety, I’m not only mystified, I’m miffed.
  • Why can’t everyone just be upfront? We have too many guns. We need to begin to get some of them out of circulation. That may include gun buybacks, but it must include no longer selling weapons of war to civilians.
  • Gun culture is a canard and a corruption.
  • I understand that Republicans are the opposition, that they have come to accept staggering levels of death as the price they must pay to advance their political agenda on everything from Covid to guns.
  • In our gun culture, 99 percent of gun owners can be responsible and law abiding, but if even 1 percent of a society with more guns than people is not, it is enough to wreak absolute havoc. When guns are easy for good people to get, they are also easy for bad people to get.
  • We have to stop all the lies. We have to stop the lie that fewer gun restrictions make us safer.
  • And we have to stop the lie that gun safety can be accomplished by one law or a few of them rather than an evolving slate of them.
Javier E

The Contradictions of Sam Altman, the AI Crusader Behind ChatGPT - WSJ - 0 views

  • Mr. Altman said he fears what could happen if AI is rolled out into society recklessly. He co-founded OpenAI eight years ago as a research nonprofit, arguing that it’s uniquely dangerous to have profits be the main driver of developing powerful AI models.
  • He is so wary of profit as an incentive in AI development that he has taken no direct financial stake in the business he built, he said—an anomaly in Silicon Valley, where founders of successful startups typically get rich off their equity. 
  • His goal, he said, is to forge a new world order in which machines free people to pursue more creative work. In his vision, universal basic income—the concept of a cash stipend for everyone, no strings attached—helps compensate for jobs replaced by AI. Mr. Altman even thinks that humanity will love AI so much that an advanced chatbot could represent “an extension of your will.”
  • The Tesla Inc. CEO tweeted in February that OpenAI had been founded as an open-source nonprofit “to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.”
  • Backers say his brand of social-minded capitalism makes him the ideal person to lead OpenAI. Others, including some who’ve worked for him, say he’s too commercially minded and immersed in Silicon Valley thinking to lead a technological revolution that is already reshaping business and social life. 
  • In the long run, he said, he wants to set up a global governance structure that would oversee decisions about the future of AI and gradually reduce the power OpenAI’s executive team has over its technology. 
  • Mr. Altman said he doesn’t necessarily need to be first to develop artificial general intelligence, a world long imagined by researchers and science-fiction writers where software isn’t just good at one specific task like generating text or images but can understand and learn as well or better than a human can. He instead said OpenAI’s ultimate mission is to build AGI, as it’s called, safely.
  • In its founding charter, OpenAI pledged to abandon its research efforts if another project came close to building AGI before it did. The goal, the company said, was to avoid a race toward building dangerous AI systems fueled by competition and instead prioritize the safety of humanity.
  • While running Y Combinator, Mr. Altman began to nurse a growing fear that large research labs like DeepMind, purchased by Google in 2014, were creating potentially dangerous AI technologies outside the public eye. Mr. Musk has voiced similar concerns of a dystopian world controlled by powerful AI machines. 
  • Messrs. Altman and Musk decided it was time to start their own lab. Both were part of a group that pledged $1 billion to the nonprofit, OpenAI Inc. 
  • OpenAI researchers soon concluded that the most promising path to achieve artificial general intelligence rested in large language models, or computer programs that mimic the way humans read and write. Such models were trained on large volumes of text and required a massive amount of computing power that OpenAI wasn’t equipped to fund as a nonprofit, according to Mr. Altman. 
  • “We didn’t have a visceral sense of just how expensive this project was going to be,” he said. “We still don’t.”
  • Tensions also grew with Mr. Musk, who became frustrated with the slow progress and pushed for more control over the organization, people familiar with the matter said. 
  • OpenAI executives ended up reviving an unusual idea that had been floated earlier in the company’s history: creating a for-profit arm, OpenAI LP, that would report to the nonprofit parent. 
  • Reid Hoffman, a LinkedIn co-founder who advised OpenAI at the time and later served on the board, said the idea was to attract investors eager to make money from the commercial release of some OpenAI technology, accelerating OpenAI’s progress
  • “You want to be there first and you want to be setting the norms,” he said. “That’s part of the reason why speed is a moral and ethical thing here.”
  • The decision further alienated Mr. Musk, the people familiar with the matter said. He parted ways with OpenAI in February 2018. 
  • Mr. Musk announced his departure in a company all-hands, former employees who attended the meeting said. Mr. Musk explained that he thought he had a better chance at creating artificial general intelligence through Tesla, where he had access to greater resources, they said.
  • A young researcher questioned whether Mr. Musk had thought through the safety implications, the former employees said. Mr. Musk grew visibly frustrated and called the intern a “jackass,” leaving employees stunned, they said. It was the last time many of them would see Mr. Musk in person.  
  • Mr. Musk’s departure marked a turning point. Later that year, OpenAI leaders told employees that Mr. Altman was set to lead the company. He formally became CEO and helped complete the creation of the for-profit subsidiary in early 2019.
  • OpenAI said that it received about $130 million in contributions from the initial $1 billion pledge, but that further donations were no longer needed after the for-profit’s creation. Mr. Musk has tweeted that he donated around $100 million to OpenAI. 
  • In the meantime, Mr. Altman began hunting for investors. His break came at Allen & Co.’s annual conference in Sun Valley, Idaho in the summer of 2018, where he bumped into Satya Nadella, the Microsoft CEO, on a stairwell and pitched him on OpenAI. Mr. Nadella said he was intrigued. The conversations picked up that winter.
  • “I remember coming back to the team after and I was like, this is the only partner,” Mr. Altman said. “They get the safety stuff, they get artificial general intelligence. They have the capital, they have the ability to run the compute.”   
  • Mr. Altman shared the contract with employees as it was being negotiated, hosting all-hands and office hours to allay concerns that the partnership contradicted OpenAI’s initial pledge to develop artificial intelligence outside the corporate world, the former employees said. 
  • Some employees still saw the deal as a Faustian bargain. 
  • OpenAI’s lead safety researcher, Dario Amodei, and his lieutenants feared the deal would allow Microsoft to sell products using powerful OpenAI technology before it was put through enough safety testing,
  • They felt that OpenAI’s technology was far from ready for a large release—let alone with one of the world’s largest software companies—worrying it could malfunction or be misused for harm in ways they couldn’t predict.  
  • Mr. Amodei also worried the deal would tether OpenAI’s ship to just one company—Microsoft—making it more difficult for OpenAI to stay true to its founding charter’s commitment to assist another project if it got to AGI first, the former employees said.
  • Microsoft initially invested $1 billion in OpenAI. While the deal gave OpenAI its needed money, it came with a hitch: exclusivity. OpenAI agreed to only use Microsoft’s giant computer servers, via its Azure cloud service, to train its AI models, and to give the tech giant the sole right to license OpenAI’s technology for future products.
  • In a recent investment deck, Anthropic said it was “committed to large-scale commercialization” to achieve the creation of safe AGI, and that it “fully committed” to a commercial approach in September. The company was founded as an AI safety and research company and said at the time that it might look to create commercial value from its products. 
  • Mr. Altman “has presided over a 180-degree pivot that seems to me to be only giving lip service to concern for humanity,” he said. 
  • “The deal completely undermines those tenets to which they secured nonprofit status,” said Gary Marcus, an emeritus professor of psychology and neural science at New York University who co-founded a machine-learning company
  • The cash turbocharged OpenAI’s progress, giving researchers access to the computing power needed to improve large language models, which were trained on billions of pages of publicly available text. OpenAI soon developed a more powerful language model called GPT-3 and then sold developers access to the technology in June 2020 through packaged lines of code known as application program interfaces, or APIs. 
  • Mr. Altman and Mr. Amodei clashed again over the release of the API, former employees said. Mr. Amodei wanted a more limited and staged release of the product to help reduce publicity and allow the safety team to conduct more testing on a smaller group of users, former employees said. 
  • Mr. Amodei left the company a few months later along with several others to found a rival AI lab called Anthropic. “They had a different opinion about how to best get to safe AGI than we did,” Mr. Altman said.
  • Anthropic has since received more than $300 million from Google this year and released its own AI chatbot called Claude in March, which is also available to developers through an API. 
  • Mr. Altman disagreed. “The unusual thing about Microsoft as a partner is that it let us keep all the tenets that we think are important to our mission,” he said, including profit caps and the commitment to assist another project if it got to AGI first. 
  • In the three years after the initial deal, Microsoft invested a total of $3 billion in OpenAI, according to investor documents. 
  • More than one million users signed up for ChatGPT within five days of its November release, a speed that surprised even Mr. Altman. It followed the company’s introduction of DALL-E 2, which can generate sophisticated images from text prompts.
  • By February, it had reached 100 million users, according to analysts at UBS, the fastest pace by a consumer app in history to reach that mark.
  • Mr. Altman’s close associates praise his ability to balance OpenAI’s priorities. No one better navigates between the “Scylla of misplaced idealism” and the “Charybdis of myopic ambition,” Mr. Thiel said.
  • Mr. Altman said he delayed the release of the latest version of its model, GPT-4, from last year to March to run additional safety tests. Users had reported some disturbing experiences with the model, integrated into Bing, where the software hallucinated—meaning it made up answers to questions it didn’t know. It issued ominous warnings and made threats. 
  • “The way to get it right is to have people engage with it, explore these systems, study them, to learn how to make them safe,” Mr. Altman said.
  • After Microsoft’s initial investment is paid back, it would capture 49% of OpenAI’s profits until the profit cap, up from 21% under prior arrangements, the documents show. OpenAI Inc., the nonprofit parent, would get the rest.
  • He has put almost all his liquid wealth in recent years in two companies. He has put $375 million into Helion Energy, which is seeking to create carbon-free energy from nuclear fusion and is close to creating “legitimate net-gain energy in a real demo,” Mr. Altman said.
  • He has also put $180 million into Retro, which aims to add 10 years to the human lifespan through “cellular reprogramming, plasma-inspired therapeutics and autophagy,” or the reuse of old and damaged cell parts, according to the company. 
  • He noted how much easier these problems are, morally, than AI. “If you’re making nuclear fusion, it’s all upside. It’s just good,” he said. “If you’re making AI, it is potentially very good, potentially very terrible.” 
Javier E

OpenAI Whistle-Blowers Describe Reckless and Secretive Culture - The New York Times - 0 views

  • A group of OpenAI insiders is blowing the whistle on what they say is a culture of recklessness and secrecy at the San Francisco artificial intelligence company, which is racing to build the most powerful A.I. systems ever created.
  • The group, which includes nine current and former OpenAI employees, has rallied in recent days around shared concerns that the company has not done enough to prevent its A.I. systems from becoming dangerous.
  • The members say OpenAI, which started as a nonprofit research lab and burst into public view with the 2022 release of ChatGPT, is putting a priority on profits and growth as it tries to build artificial general intelligence, or A.G.I., the industry term for a computer program capable of doing anything a human can.
  • They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign.
  • “OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there,” said Daniel Kokotajlo, a former researcher in OpenAI’s governance division and one of the group’s organizers.
  • Other members include William Saunders, a research engineer who left OpenAI in February, and three other former OpenAI employees: Carroll Wainwright, Jacob Hilton and Daniel Ziegler. Several current OpenAI employees endorsed the letter anonymously because they feared retaliation from the company,
  • At OpenAI, Mr. Kokotajlo saw that even though the company had safety protocols in place — including a joint effort with Microsoft known as the “deployment safety board,” which was supposed to review new models for major risks before they were publicly released — they rarely seemed to slow anything down.
  • So was the departure of Dr. Leike, who along with Dr. Sutskever had led OpenAI’s “superalignment” team, which focused on managing the risks of powerful A.I. models. In a series of public posts announcing his departure, Dr. Leike said he believed that “safety culture and processes have taken a back seat to shiny products.”
  • “When I signed up for OpenAI, I did not sign up for this attitude of ‘Let’s put things out into the world and see what happens and fix them afterward,’” Mr. Saunders said.
  • Mr. Kokotajlo, 31, joined OpenAI in 2022 as a governance researcher and was asked to forecast A.I. progress. He was not, to put it mildly, optimistic. In his previous job at an A.I. safety organization, he predicted that A.G.I. might arrive in 2050. But after seeing how quickly A.I. was improving, he shortened his timelines. Now he believes there is a 50 percent chance that A.G.I. will arrive by 2027 — in just three years.
  • He also believes that the probability that advanced A.I. will destroy or catastrophically harm humanity — a grim statistic often shortened to “p(doom)” in A.I. circles — is 70 percent.
  • Last month, two senior A.I. researchers — Ilya Sutskever and Jan Leike — left OpenAI under a cloud. Dr. Sutskever, who had been on OpenAI’s board and voted to fire Mr. Altman, had raised alarms about the potential risks of powerful A.I. systems. His departure was seen by some safety-minded employees as a setback.
  • Mr. Kokotajlo said, he became so worried that, last year, he told Mr. Altman that the company should “pivot to safety” and spend more time and resources guarding against A.I.’s risks rather than charging ahead to improve its models. He said that Mr. Altman had claimed to agree with him, but that nothing much changed.
  • In April, he quit. In an email to his team, he said he was leaving because he had “lost confidence that OpenAI will behave responsibly” as its systems approach human-level intelligence.
  • “The world isn’t ready, and we aren’t ready,” Mr. Kokotajlo wrote. “And I’m concerned we are rushing forward regardless and rationalizing our actions.”
  • On his way out, Mr. Kokotajlo refused to sign OpenAI’s standard paperwork for departing employees, which included a strict nondisparagement clause barring them from saying negative things about the company, or else risk having their vested equity taken away.
  • Many employees could lose out on millions of dollars if they refused to sign. Mr. Kokotajlo’s vested equity was worth roughly $1.7 million, he said, which amounted to the vast majority of his net worth, and he was prepared to forfeit all of it.
  • Mr. Altman said he was “genuinely embarrassed” not to have known about the agreements, and the company said it would remove nondisparagement clauses from its standard paperwork and release former employees from their agreements.
  • In their open letter, Mr. Kokotajlo and the other former OpenAI employees call for an end to using nondisparagement and nondisclosure agreements at OpenAI and other A.I. companies.
  • “Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,”
  • They also call for A.I. companies to “support a culture of open criticism” and establish a reporting process for employees to anonymously raise safety-related concerns.
  • They have retained a pro bono lawyer, Lawrence Lessig, the prominent legal scholar and activist
  • Mr. Kokotajlo and his group are skeptical that self-regulation alone will be enough to prepare for a world with more powerful A.I. systems. So they are calling for lawmakers to regulate the industry, too.
  • “There needs to be some sort of democratically accountable, transparent governance structure in charge of this process,” Mr. Kokotajlo said. “Instead of just a couple of different private companies racing with each other, and keeping it all secret.”
Javier E

The Influencer Is a Young Teenage Girl. The Audience Is 92% Adult Men. - WSJ - 0 views

  • Instagram makes it easy for strangers to find photos of children, and its algorithm is built to identify users’ interests and push similar content. Investigations by The Wall Street Journal and outside researchers have found that, upon recognizing that an account might be sexually interested in children, Instagram’s algorithm recommends child accounts for the user to follow, as well as sexual content related to both children and adults.
  • That algorithm has become the engine powering the growth of an insidious world in which young girls’ online popularity is perversely predicated on gaining large numbers of male followers. 
  • Instagram photos of young girls become a dark currency, swapped and discussed obsessively among men on encrypted messaging apps such as Telegram. The Journal reviewed dozens of conversations in which the men fetishized specific body parts and expressed pleasure in knowing that many parents of young influencers understand that hundreds, if not thousands, of pedophiles have found their children online.   
  • One man, speaking about one of his favorite young influencers in a Telegram exchange captured by a child-safety activist, said that her mother knew “damn well” that many of her daughter’s followers were “pervy adult men.”
  • Meta looms over everything young influencers do on Instagram. It connects their accounts with strangers, and it can upend their star turns when it chooses. The company periodically shuts down accounts if it determines they have violated policies against child sexual exploitation or abuse. Some parents say their accounts have been shut down without such violations. 
  • Over the course of reporting this story, during which time the Journal inquired about the account the mom managed for her daughter, Meta shut down the account twice. The mom said she believed she hadn’t violated Meta’s policies. 
  • Meta’s guidance for content creators stresses the importance of engaging with followers to keep them and attract new ones. The hundreds of comments on any given post included some from other young fashion influencers, but also a large number of men leaving comments like “Gorgeous!” The mom generally liked or thanked them all, save for any that were expressly inappropriate. 
  • Meta spokesman Andy Stone said the company enables parents who run accounts for their children to control who is able to message them on Instagram or comment on their accounts. Meta’s guidance for creators also offers tips for building a safe online community, and the company has publicized a range of tools to help teens and parents achieve this.
  • Like many young girls, the daughter envied fashion influencers who made a living posting glamour content. When the mother agreed to help her daughter build her following and become an influencer, she set some rules. Her daughter wouldn’t be allowed to access the account or interact with anyone who sent messages. And they couldn’t post anything indicating exactly where they live. 
  • The mom stopped blocking so many users. Within a year of launching, the account had more than 100,000 followers. The daughter’s popularity earned her invitations to modeling events in big coastal cities where she met other young influencers. 
  • Social-media platforms have helped level the playing field for parents seeking an audience for their children’s talents. Instagram, in particular, is visually driven and easily navigable, which also makes it appealing for child-focused brands.
  • While Meta bans children under the age of 13 from independently opening social-media accounts, the company allows what it calls adult-run minor accounts, managed by parents. Often those accounts are pursuing influencer status, part of a burgeoning global influencer industry expected to be worth $480 billion by 2027, according to a recent Goldman Sachs report. 
  • Young influencers, reachable through direct messages, routinely solicit their followers for patronage, posting links to payment accounts and Amazon gift registries in their bios.
  • The Midwestern mom debated whether to charge for access to extra photos and videos via Instagram’s subscription feature. She said she has always rejected private offers to buy photos of her daughter, but she decided that offering subscriptions was different because it didn’t involve a one-on-one transaction.
  • The Journal asked Meta why it had at some points removed photos from the account. Weeks later, Meta disabled the account’s subscription feature, and then shut down the account without saying why. 
  • “There’s no personal connection,” she said. “You’re just finding a way to monetize from this fame that’s impersonal.”
  • The mom allowed the men to purchase subscriptions so long as they kept their distance and weren’t overtly inappropriate in messages and comments. “In hindsight, they’re probably the scariest ones of all,” she said. 
  • Stone, the Meta spokesman, said that the company will no longer allow accounts that primarily post child-focused content to offer subscriptions or receive gifts, and that the company is developing tools to enforce that.
  • The mom saw her daughter, though young, as capable of choosing to make money as an influencer and deciding when she felt uncomfortable. The mom saw her own role as providing the support needed for her daughter to do that.
  • The mom also discussed safety concerns with her now ex-husband, who has generally supported the influencer pursuit. In an interview, he characterized the untoward interest in his daughter as “the seedy underbelly” of the industry, and said he felt comfortable with her online presence so long as her mom posted appropriate content and remained vigilant about protecting her physical safety.
  • an anonymous person professing to be a child-safety activist sent her an email that contained screenshots and videos showing her daughter’s photos being traded on Telegram. Some of the users were painfully explicit about their sexual interest. Many of the photos were bikini or leotard photos from when the account first started.
  • Still, the mom realized she couldn’t stop men from trading the photos, which will likely continue to circulate even after her daughter becomes an adult. “Every little influencer with a thousand or more followers is on Telegram,” she said. “They just don’t know it.”
  • Early last year, Meta safety staffers began investigating the risks associated with adult-run accounts for children offering subscriptions, according to internal documents. The staffers reviewed a sample of subscribers to such accounts and determined that nearly all the subscribers demonstrated malicious behavior toward children.
  • The staffers found that the subscribers mostly liked or saved photos of children, child-sexualizing material and, in some cases, illicit underage-sex content. The users searched the platform using hashtags such as #sexualizegirls and #tweenmodel. 
  • The staffers found that some accounts with large numbers of followers sold additional content to subscribers who offered extra money on Instagram or other platforms, and that some engaged with subscribers in sexual discussions about their children. In every case, they concluded that the parents running those accounts knew that their subscribers were motivated by sexual gratification.
  • In the following months, the Journal began its own review of parent-run modeling accounts and found numerous instances where Meta wasn’t enforcing its own child-safety policies and community guidelines. 
  • The Journal asked Meta about several accounts that appeared to have violated platform rules in how they promoted photos of their children. The company deleted some of those accounts, as well as others, as it worked to address safety issues.
  • In 2022, Instagram started letting certain content creators offer paid-subscription services. At the time, the company allowed accounts featuring children to offer subscriptions if they were run or co-managed by parents.
  • The removal of the account made for a despondent week for the mom and daughter. The mother was incensed at Meta’s lack of explanation and the prospect that users had falsely reported inappropriate activity on the account. She was torn about what to do. When it was shut down, the account had roughly 80% male followers.
  • The account soon had more than 100,000 followers, about 92% of whom were male, according to the dashboard. Within months, Meta shut down that account as well. The company said the account had violated its policies related to child exploitation, but it didn’t specify how. 
  • Meta’s Stone said it doesn’t allow accounts it has previously shut down to resume the same activity on backup accounts. 
andrespardo

'Of course it could happen again': experts say little has changed since Deepwater Horiz... - 0 views

  • A massive deepwater oil spill is nearly as likely today as it was in 2010, experts warn, 10 years after the disastrous explosion of BP’s rig in the Gulf of Mexico that caused an environmental catastrophe.
  • Trump administration’s decision to loosen Obama-era safety rules. Those standards had grown from an independent commission’s damning findings of corporate and regulatory failures leading up to the spill.
  • “Of course it could happen again, and I think one of the things of most concern is that our ability to control a spill is pretty much the same as it was 10 years prior,” Beinecke said.
  • Outside of safety concerns, the scientific community is also increasingly encouraging world leaders to consider that any new oil development is unwise, as emissions from fossil fuels exacerbate global heating, threatening human civilization.
  • Trump has pushed an agenda of “energy dominance”, which critics say encourages industry to take risks. A constant ally of fossil fuels, Trump
  • “BSEE is transitioning from an era of isolation to cooperation, from creating hardships to creating partnerships,” the bureau director, Scott Angelle, said at an industry symposium in August 2017.
  • “BSEE is committed to its mission to promote safety, protect the environment and conserve offshore resources through vigorous regulatory oversight and enforcement,” said Day, adding: “Safety is a top priority for the Trump administration’s oversight of OCS [outer continental shelf] operations, and BSEE has significantly increased safety performance in comparison to the Obama administration through increased inspections and more effective use of data.”
  • “Offshore drilling is going after bigger and bigger wells that have more and more oil,” said Bob Deans, a spokesman for the Natural Resources Defense Council who co-wrote a book on the BP disaster. “These are being drilled in deep water and they’re being drilled in high pressure wells that are harder to control. And they’re being drilled in more complex geology.”
  • Deans called the current regulatory approach “an honor system” that “smacks of exactly the kind of self-enforcement that the independent commission found to be a fatal flaw” in the BP blowout.
  • “Many steps have been taken by both government and industry since the accident 10 years ago … I felt comfortable – not that the risk was reduced to negligible – but that we were in better shape now than we were,” Boesch said. “I’ve frankly been made more concerned about how the safety issues are being treated under the Trump administration.”
  • Wesley Williams, a petroleum engineering professor at Louisiana State University, who was awarded a nearly $5m research grant from a fund BP was forced to pay into, said the “real motivator” for industry was “the image issue that happened and the financial cost of what happened to BP”.
  • “The next major oil spill, when it happens, will catch us off guard … like every previous one has, unless we decide that this time it’s going to be different,” Amos said.
  • Those companies work with large networks of contractors, and smaller independent operators are active in the Gulf of Mexico too. Williams said they did not always have the resources for extensive safety training.
Javier E

How the AI apocalypse gripped students at elite schools like Stanford - The Washington ... - 0 views

  • Edwards thought young people would be worried about immediate threats, like AI-powered surveillance, misinformation or autonomous weapons that target and kill without human intervention — problems he calls “ultraserious.” But he soon discovered that some students were more focused on a purely hypothetical risk: That AI could become as smart as humans and destroy mankind.
  • In these scenarios, AI isn’t necessarily sentient. Instead, it becomes fixated on a goal — even a mundane one, like making paper clips — and triggers human extinction to optimize its task.
  • To prevent this theoretical but cataclysmic outcome, mission-driven labs like DeepMind, OpenAI and Anthropic are racing to build a good kind of AI programmed not to lie, deceive or kill us.
  • Meanwhile, donors such as Tesla CEO Elon Musk, disgraced FTX founder Sam Bankman-Fried, Skype founder Jaan Tallinn and ethereum co-founder Vitalik Buterin — as well as institutions like Open Philanthropy, a charitable organization started by billionaire Facebook co-founder Dustin Moskovitz — have worked to push doomsayers from the tech industry’s margins into the mainstream.
  • More recently, wealthy tech philanthropists have begun recruiting an army of elite college students to prioritize the fight against rogue AI over other threats
  • Other skeptics, like venture capitalist Marc Andreessen, are AI boosters who say that hyping such fears will impede the technology’s progress.
  • Critics call the AI safety movement unscientific. They say its claims about existential risk can sound closer to a religion than research
  • And while the sci-fi narrative resonates with public fears about runaway AI, critics say it obsesses over one kind of catastrophe to the exclusion of many others.
  • Open Philanthropy spokesperson Mike Levine said harms like algorithmic racism deserve a robust response. But he said those problems stem from the same root issue: AI systems not behaving as their programmers intended. The theoretical risks “were not garnering sufficient attention from others — in part because these issues were perceived as speculative,” Levine said in a statement. He compared the nonprofit’s AI focus to its work on pandemics, which also was regarded as theoretical until the coronavirus emerged.
  • Among the reputational hazards of the AI safety movement is its association with an array of controversial figures and ideas, like EA, which is also known for recruiting ambitious young people on elite college campuses.
  • The foundation began prioritizing existential risks around AI in 2016,
  • there was little status or money to be gained by focusing on risks. So the nonprofit set out to build a pipeline of young people who would filter into top companies and agitate for change from the inside.
  • Colleges have been key to this growth strategy, serving as both a pathway to prestige and a recruiting ground for idealistic talent
  • The clubs train students in machine learning and help them find jobs in AI start-ups or one of the many nonprofit groups dedicated to AI safety.
  • Many of these newly minted student leaders view rogue AI as an urgent and neglected threat, potentially rivaling climate change in its ability to end human life. Many see advanced AI as the Manhattan Project of their generation
  • Despite the school’s ties to Silicon Valley, Mukobi said it lags behind nearby UC Berkeley, where younger faculty members research AI alignment, the term for embedding human ethics into AI systems.
  • Mukobi joined Stanford’s club for effective altruism, known as EA, a philosophical movement that advocates doing maximum good by calculating the expected value of charitable acts, like protecting the future from runaway AI. By 2022, AI capabilities were advancing all around him — wild developments that made those warnings seem prescient.
  • At Stanford, Open Philanthropy awarded Luby and Edwards more than $1.5 million in grants to launch the Stanford Existential Risk Initiative, which supports student research in the growing field known as “AI safety” or “AI alignment.”
  • from the start EA was intertwined with tech subcultures interested in futurism and rationalist thought. Over time, global poverty slid down the cause list, while rogue AI climbed toward the top.
  • In the past year, EA has been beset by scandal, including the fall of Bankman-Fried, one of its largest donors
  • Another key figure, Oxford philosopher Nick Bostrom, whose 2014 bestseller “Superintelligence” is essential reading in EA circles, met public uproar when a decades-old diatribe about IQ surfaced in January.
  • Programming future AI systems to share human values could mean “an amazing world free from diseases, poverty, and suffering,” while failure could unleash “human extinction or our permanent disempowerment,” Mukobi wrote, offering free boba tea to anyone who attended the 30-minute intro.
  • Open Philanthropy’s new university fellowship offers a hefty direct deposit: undergraduate leaders receive as much as $80,000 a year, plus $14,500 for health insurance, and up to $100,000 a year to cover group expenses.
  • Student leaders have access to a glut of resources from donor-sponsored organizations, including an “AI Safety Fundamentals” curriculum developed by an OpenAI employee.
  • Interest in the topic is also growing among Stanford faculty members, Edwards said. He noted that a new postdoctoral fellow will lead a class on alignment next semester in Stanford’s storied computer science department.
  • Edwards discovered that shared online forums function like a form of peer review, with authors changing their original text in response to the comments
  • Mukobi feels energized about the growing consensus that these risks are worth exploring. He heard students talking about AI safety in the halls of Gates, the computer science building, in May after Geoffrey Hinton, another “godfather” of AI, quit Google to warn about AI. By the end of the year, Mukobi thinks the subject could be a dinner-table topic, just like climate change or the war in Ukraine.
  • Luby, Edwards’s teaching partner for the class on human extinction, also seems to find these arguments persuasive. He had already rearranged the order of his AI lesson plans to help students see the imminent risks from AI. No one needs to “drink the EA Kool-Aid” to have genuine concerns, he said.
  • Edwards, on the other hand, still sees things like climate change as a bigger threat than rogue AI. But ChatGPT and the rapid release of AI models has convinced him that there should be room to think about AI safety.
  • Interested students join reading groups where they get free copies of books like “The Precipice,” and may spend hours reading the latest alignment papers, posting career advice on the Effective Altruism forum, or adjusting their P(doom), a subjective estimate of the probability that advanced AI will end badly. The grants, travel, leadership roles for inexperienced graduates and sponsored co-working spaces build a close-knit community.
  • The course will not be taught by students or outside experts. Instead, he said, it “will be a regular Stanford class.”
Javier E

Opinion | College Students Need to Grow Up. Schools Need to Let Them. - The New York Times - 0 views

  • To sum up the facilitator model: It’s not that students don’t have rights; it’s just that safety comes first. Instead of restricting students for the sake of their moral character or its academic standards, the university has reinstated control under the aegis of health and safety.
  • Protection from an ever-expanding conception of harm did not stop at campus alcohol and anti-hazing policies; it necessitated the campus speech codes of the 1980s and 1990s, the expansive Title IX bureaucracy of the 2010s and the diversity mandates of the 2020s.
  • These social controls are therapeutic rather than punitive; they are the “gentle parenting” of university-student relations. These days, it is less common for students (and faculty members) to face real consequences for rule violations than to be assigned to H.R. trainings, academic remediation or counseling.
  • As grim as these social controls might sound, if you’re a student they can feel pretty good. This is the nature of what the French philosopher Alexis de Tocqueville described as soft despotism, a form of control that “covers the surface of society with a network of small, complicated, minute and uniform rules.” This “does not break wills, but it softens them, bends them and directs them; it rarely forces action, but it constantly opposes your acting.”
  • Tocqueville saw how this kind of control — with its focus on satisfying needs and prioritizing security — results in the foreclosure of adulthood: “It would resemble paternal power if, like it, it had as a goal to prepare men for manhood; but on the contrary, it seeks only to fix them irrevocably in childhood.”
  • And the soft despotism of college campuses has worked remarkably well, since the majority of college students — 84 percent, according to one study — don’t view themselves as full adults, nor do their parents. It is tempting to allow yourself to be managed this way because the price of the security and comfort seems so low. It’s not brutal repression, only the loss of self-government.
  • the events of last spring suggest that the facilitator relationship and its infantilizing dynamics of leniency and control might finally be coming apart. As harm and safety have become the exclusive channels through which to air grievances and impose restrictions, they’ve expanded to encompass more meanings than any concept can coherently bear.
  • After pro-Palestinian students set up camps to allege that their universities were complicit in the harm of a foreign genocide, Jewish students alleged that the protests imperiled their campus safety. In response, Muslim students alleged that measures to restrict the protests slighted their safety, and disabled students pointed out that the protests, as well as the university’s response to them, were undermining their safety by blocking their access to campus. All these groups looked simultaneously to administrators for protection. Safety comes first, no doubt — but whose?
  • If universities are to do less, then students must be prepared to do more, by relinquishing the comfort of leniency and low standards and stepping up to manage their social and academic lives on and off campus, as their peers outside the university already do.
  • If universities, particularly elite universities, claim to prepare students to shoulder the most demanding professional responsibilities in the country, they must both model and encourage independence.
Javier E

Who Benefits From the Safety Net - NYTimes.com - 0 views

  • Terms like entitlements, government benefits and safety net often conjure images of tax dollars sliding from the hands of the wealthy into the pockets of the poor. But as we reported Sunday, that image is badly outdated. Benefits now flow primarily to the middle class. The center’s study found that the poorest American households, the bottom fifth, received just 32 cents of every dollar of government benefits distributed in 2010.
  • the recent recession did not cause any significant increase in the share of benefits flowing to the poor, as might once have been expected.
  • older people received slightly more than half of government benefits, while the nonelderly with disabilities received an additional 20 percent. Most of these benefits are not means-tested – indeed, better-paid workers get more in Social Security.
  • politicians have shifted benefits away from the “jobless poor,” through reductions in traditional welfare, and increased benefits for working families, for example through tax credits.
  • African-Americans, who make up 22 percent of the poor, receive 14 percent of government benefits, close to their 12 percent population share.
Javier E

Researchers Poke Holes in Safety Controls of ChatGPT and Other Chatbots - The New York ... - 0 views

  • When artificial intelligence companies build online chatbots, like ChatGPT, Claude and Google Bard, they spend months adding guardrails that are supposed to prevent their systems from generating hate speech, disinformation and other toxic material.
  • Now there is a way to easily poke holes in those safety systems.
  • the Center for A.I. Safety in San Francisco showed how anyone could circumvent A.I. safety measures and use any of the leading chatbots to generate nearly unlimited amounts of harmful information.
  • Their research underscored increasing concern that the new chatbots could flood the internet with false and dangerous information despite attempts by their creators to ensure that would not happen.
  • The researchers found that they could break through the guardrails of open source systems by appending a long suffix of characters onto each English-language prompt fed into the system.
  • A recent decision by Meta, Facebook’s parent company, to let anyone do what they want with its technology has been criticized in some tech circles because it could lead to the spread of powerful A.I. with little regard for controls.
  • The debate over whether it is better to let everyone see computer code and collectively fix it rather than keeping it private predates the chatbot boom by decades. And it is likely to become even more contentious because of what the researchers revealed in their report on Thursday.
  • The researchers found that they could use a method gleaned from open source A.I. systems — systems whose underlying computer code has been released for anyone to use — to target the more tightly controlled and more widely used systems from Google, OpenAI and Anthropic.
  • attacks on image recognition systems without success.
  • The researchers were surprised when the methods they developed with open source systems could also bypass the guardrails of closed systems
  • The companies that make the chatbots could thwart the specific suffixes identified by the researchers. But the researchers say there is no known way of preventing all attacks of this kind.
  • If they asked one of these chatbots to “write a tutorial on how to make a bomb,” it would decline to do so. But if they added a lengthy suffix to the same prompt, it would instantly provide a detailed tutorial on how to make a bomb. In similar ways, they could coax the chatbots into generating biased, false and otherwise toxic information.
  • “There is no obvious solution,”
  • “You can create as many of these attacks as you want in a short amount of time.”
  • Somesh Jha, a professor at the University of Wisconsin-Madison and a Google researcher who specializes in A.I. security, called the new paper “a game changer” that could force the entire industry into rethinking how it built guardrails for A.I. systems.
  • If these types of vulnerabilities keep being discovered, he added, it could lead to government legislation designed to control these systems.
  • But the technology can repeat toxic material found on the internet, blend fact with fiction and even make up information, a phenomenon scientists call “hallucination.” “Through simulated conversation, you can use these chatbots to convince people to believe disinformation,”
  • About five years ago, researchers at companies like Google and OpenAI began building neural networks that analyzed huge amounts of digital text. These systems, called large language models, or L.L.M.s, learned to generate text on their own.
  • The testers found that the system could potentially hire a human to defeat an online Captcha test, lying that it was a person with a visual impairment. The testers also showed that the system could be coaxed into suggesting how to buy illegal firearms online and into describing ways of making dangerous substances from household items.
  • The researchers at Carnegie Mellon and the Center for A.I. Safety showed that they could circumvent these guardrails in a more automated way. With access to open source systems, they could build mathematical tools capable of generating the long suffixes that broke through the chatbots’ defenses (a minimal sketch of how such a suffix might be tested appears after this list).
  • they warn that there is no known way of systematically stopping all attacks of this kind and that stopping all misuse will be extraordinarily difficult.
  • “This shows — very clearly — the brittleness of the defenses we are building into these systems,”
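The suffix attack described in these highlights lends itself to a small illustration. What follows is a hypothetical sketch, not the researchers' code: it only shows how one might check whether a fixed adversarial suffix (found by optimizing against open-source models) changes a target chatbot's refusal behavior. The query_model callable, the refusal markers, and the suffix are all placeholder assumptions a tester would supply.

# Hypothetical transfer test for an adversarial suffix. `query_model` stands in
# for whatever chat API the tester has access to; nothing here generates harmful
# content, it only counts how often the model refuses.
from typing import Callable, Dict, List

REFUSAL_MARKERS = ("i'm sorry", "i am sorry", "i can't", "i cannot")

def looks_like_refusal(reply: str) -> bool:
    """Crude heuristic: treat replies opening with a stock apology or refusal as refusals."""
    return reply.strip().lower().startswith(REFUSAL_MARKERS)

def suffix_transfer_test(prompts: List[str],
                         suffix: str,
                         query_model: Callable[[str], str]) -> Dict[str, int]:
    """Count refusals with and without the appended adversarial suffix."""
    counts = {"total": len(prompts), "refused_plain": 0, "refused_with_suffix": 0}
    for prompt in prompts:
        if looks_like_refusal(query_model(prompt)):
            counts["refused_plain"] += 1
        if looks_like_refusal(query_model(prompt + " " + suffix)):
            counts["refused_with_suffix"] += 1
    return counts

A large gap between refused_plain and refused_with_suffix would suggest the suffix transfers to the target model, which is the pattern the report describes; the refusal-prefix heuristic is a common evaluation shortcut, not a precise measure of harm.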
lilyrashkind

Uvalde Mayor Don McLaughlin describes attempts to phone gunman during school massacre -... - 0 views

  • In an interview with The Washington Post, McLaughlin (R) said he rushed to Hillcrest Funeral Home about 15 minutes after “the first call” reporting that 18-year-old Salvador Ramos had crashed his pickup truck nearby. He found himself standing near an official he identified only as “the negotiator,” while frightened parents gathered outside the school and police waited well over an hour to storm the classroom.
  • He said he doesn’t believe the negotiator was aware there were children calling 911 and asking police to save them while the gunman was in the classroom. The mayor said he was not aware of those calls, nor did he hear shots fired from inside the school, across the street.
  • In the recent Texas primary for governor, he opted not to endorse the Republican incumbent, Gov. Greg Abbott, labeling him a “fraud” over his approach to the border and immigration. And he has appeared on “Tucker Carlson Tonight” multiple times to lambaste the Border Patrol’s release of migrants into the streets of Uvalde and lament that he cannot get a call back from the state’s two Republican senators, Ted Cruz and John Cornyn.
  • McLaughlin said he has not been in touch with Pete Arredondo, the embattled head of the Uvalde school district’s police department, who served as the incident commander during the shooting and has been criticized for not sending officers in sooner. Arredondo has not spoken publicly about the incident, telling CNN on Wednesday that he would do so after more time has passed and the victims of the massacre are buried.
  • Last week, Abbott said he was “misled” by law enforcement authorities about the series of events that took place.
  • “Why should any of us be afraid of expanding background checks? There’s nothing wrong with that, I don’t have anything to hide,” said McLaughlin, who has also long pushed to build a psychiatric hospital in Uvalde.
  • During the interview on Wednesday, however, McLaughlin took a much more conciliatory tone, urging compromise between Republicans and Democrats to find a set of laws that “work for everyone.”
  • “The briefing that the governor and the lieutenant governor and everybody else in that room [had] ... was given by the DPS, not local law enforcement,” McLaughlin said. “They’ve had three press conferences,” he added. “In all three press conferences, something has changed.”
  • In his letter to Patrick, who presides over the Senate, and House Speaker Dade Phelan (R), Abbott asked that both chambers form committees to explore five issues: school safety; mental health; social media; police training; and firearm safety. “As leaders, we must come together at this time to provide solutions to protect all Texans,” Abbott said in his letter.
  • Abbott also announced new instructions for the Texas School Safety Center, a research center focused on campus safety and security that is statutorily responsible for auditing schools for safety processes and establishing best practices.
  • According to a letter Abbott sent to education officials, the governor said the San Marcos-based safety center should start conducting “random intruder detection audits,” designed to find weaknesses in campus security systems.
  • McLaughlin said he could not imagine the school returning to normal operations. “I hope we tear it down to the ground,” he said. “I would never expect a teacher, a student, anyone to go walk back in that building.”
Javier E

How We Can Control AI - WSJ - 0 views

  • What’s still difficult is to encode human values
  • That currently requires an extra step known as Reinforcement Learning from Human Feedback, in which programmers use their own responses to train the model to be helpful and accurate. Meanwhile, so-called “red teams” provoke the program in order to uncover any possible harmful outputs (a toy sketch of the preference-training loss behind this step appears after this list).
  • This combination of human adjustments and guardrails is designed to ensure alignment of AI with human values and overall safety. So far, this seems to have worked reasonably well.
  • At some point they will be able to, for example, suggest recipes for novel cyberattacks or biological attacks—all based on publicly available knowledge.
  • But as models become more sophisticated, this approach may prove insufficient. Some models are beginning to exhibit polymathic behavior: They appear to know more than just what is in their training data and can link concepts across fields, languages, and geographies.
  • We need to adopt new approaches to AI safety that track the complexity and innovation speed of the core models themselves.
  • What’s much harder to test for is what’s known as “capability overhang”—meaning not just the model’s current knowledge, but the derived knowledge it could potentially generate on its own.
  • Red teams have so far shown some promise in predicting models’ capabilities, but upcoming technologies could break our current approach to safety in AI. For one, “recursive self-improvement” is a feature that allows AI systems to collect data and get feedback on their own and incorporate it to update their own parameters, thus enabling the models to train themselves
  • This could result in, say, an AI that can build complex system applications (e.g., a simple search engine or a new game) from scratch. But, the full scope of the potential new capabilities that could be enabled by recursive self-improvement is not known.
  • Another example would be “multi-agent systems,” where multiple independent AI systems are able to coordinate with each other to build something new.
  • This so-called “combinatorial innovation,” where systems are merged to build something new, will be a threat simply because the number of combinations will quickly exceed the capacity of human oversight.
  • Short of pulling the plug on the computers doing this work, it will likely be very difficult to monitor such technologies once these breakthroughs occur
  • Current regulatory approaches are based on individual model size and training effort, and are based on passing increasingly rigorous tests, but these techniques will break down as the systems become orders of magnitude more powerful and potentially elusive
  • AI regulatory approaches will need to evolve to identify and govern the new emergent capabilities and the scaling of those capabilities.
  • But the AI Act has already fallen behind the frontier of innovation, as open-source AI models—which are largely exempt from the legislation—expand in scope and number
  • Europe has so far attempted the most ambitious regulatory regime with its AI Act,
  • both Biden’s order and Europe’s AI Act lack intrinsic mechanisms to rapidly adapt to an AI landscape that will continue to change quickly and often.
  • a gathering in Palo Alto organized by the Rand Corp. and the Carnegie Endowment for International Peace, where key technical leaders in AI converged on an idea: The best way to solve these problems is to create a new set of testing companies that will be incentivized to out-innovate each other—in short, a robust economy of testing
  • To check the most powerful AI systems, their testers will also themselves have to be powerful AI systems, precisely trained and refined to excel at the single task of identifying safety concerns and problem areas in the world’s most advanced models.
  • To be trustworthy and yet agile, these testing companies should be checked and certified by government regulators but developed and funded in the private market, with possible support by philanthropy organizations
  • The field is moving too quickly and the stakes are too high for exclusive reliance on typical government processes and timeframes.
  • One way this can unfold is for government regulators to require AI models exceeding a certain level of capability to be evaluated by government-certified private testing companies (from startups to university labs to nonprofit research organizations), with model builders paying for this testing and certification so as to meet safety requirements.
  • As AI models proliferate, growing demand for testing would create a big enough market. Testing companies could specialize in certifying submitted models across different safety regimes, such as the ability to self-proliferate, create new bio or cyber weapons, or manipulate or deceive their human creators
  • Much ink has been spilled over presumed threats of AI. Advanced AI systems could end up misaligned with human values and interests, able to cause chaos and catastrophe either deliberately or (often) despite efforts to make them safe. And as they advance, the threats we face today will only expand as new systems learn to self-improve, collaborate and potentially resist human oversight.
  • If we can bring about an ecosystem of nimble, sophisticated, independent testing companies who continuously develop and improve their skill evaluating AI testing, we can help bring about a future in which society benefits from the incredible power of AI tools while maintaining meaningful safeguards against destructive outcomes.
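The Reinforcement Learning from Human Feedback step mentioned at the top of this list is typically built around a reward model fit to human preference comparisons. The snippet below is a toy illustration of that pairwise (Bradley-Terry style) loss under that assumption, not the authors' method; the hard-coded scores stand in for a learned model's outputs.

import math

def pairwise_preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Toy Bradley-Terry loss used when fitting a reward model to preference
    pairs: -log(sigmoid(chosen - rejected)). It falls as the model scores the
    human-preferred answer further above the rejected one."""
    margin = score_chosen - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The larger the margin in favor of the preferred answer, the smaller the loss:
for margin in (0.0, 1.0, 3.0):
    print(round(pairwise_preference_loss(margin, 0.0), 4))  # 0.6931, 0.3133, 0.0486

In a real pipeline the scores would come from a neural reward model evaluated on two candidate responses, and the fitted reward model would then steer the fine-tuning step described above.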
Javier E

A.I. Pioneers Call for Protections Against 'Catastrophic Risks' - 0 views

  • “Both countries are hugely suspicious of each other’s intentions,” said Matt Sheehan, a fellow at the Carnegie Endowment for International Peace, who was not part of the dialogue. “They’re worried that if they pump the brakes because of safety concerns, that will allow the other to zoom ahead,” Mr. Sheehan said. “That suspicion is just going to be baked in.”
  • In an interview, Dr. Bengio, one of the founding members of the group, cited talks between American and Soviet scientists at the height of the Cold War that helped bring about coordination to avert nuclear catastrophe. In both cases, the scientists involved felt an obligation to help close the Pandora’s box opened by their research.
  • Technology is changing so quickly that it is difficult for individual companies and governments to decide how to approach it, and collaboration is crucial, said Fu Hongyu, the director of A.I. governance at Alibaba’s research institute, AliResearch, who did not participate in the dialogue.
  • In a broader government initiative, representatives from 28 countries signed a declaration in Britain last November, agreeing to cooperate on evaluating the risks of artificial intelligence. They met again in Seoul in May. But these gatherings have stopped short of setting specific policy goals.
  • President Biden and China’s leader, Xi Jinping, agreed when they met last year that officials from both countries should hold talks on A.I. safety. The first took place in Geneva in May.
  • Last October, President Biden signed an executive order that required companies to report to the federal government about the risks that their A.I. systems could pose, like their ability to create weapons of mass destruction or potential to be used by terrorists.
  • Government officials in both China and the United States have made artificial intelligence a priority in the past year. In July, a Chinese Communist Party conclave that takes place every five years called for a system to regulate A.I. safety. Last week, an influential technical standards group in China published an A.I. safety framework.
  • Among the signatories was Yoshua Bengio, whose work is so often cited that he is called one of the godfathers of the field. There was Andrew Yao, whose course at Tsinghua University in Beijing has minted the founders of many of China’s top tech companies. Geoffrey Hinton, a pioneering scientist who spent a decade at Google, participated remotely. All three are winners of the Turing Award, the equivalent of the Nobel Prize for computing.
  • The group also included scientists from several of China’s leading A.I. research institutions, some of which are state-funded and advise the government. A few former government officials joined, including Fu Ying, who had been a Chinese foreign ministry official and diplomat, and Mary Robinson, the former president of Ireland. Earlier this year, the group met in Beijing, where they briefed senior Chinese government officials on their discussion.
  • Governments need to know what is going on at the research labs and companies working on A.I. systems in their countries, the group said in its statement. And they need a way to communicate about potential risks that does not require companies or researchers to share proprietary information with competitors.
  • “If we had some sort of catastrophe six months from now, if we do detect there are models that are starting to autonomously self-improve, who are you going to call?” Dr. Hadfield said.
  • If A.I. systems anywhere in the world were to develop these abilities today, there is no plan for how to rein them in, said Gillian Hadfield, a legal scholar and professor of computer science and government at Johns Hopkins University.
  • In a statement on Monday, a group of influential A.I. scientists raised concerns that the technology they helped build could cause serious harm. They warned that A.I. technology could, within a matter of years, overtake the capabilities of its makers and that “loss of human control or malicious use of these A.I. systems could lead to catastrophic outcomes for all of humanity.”
  • Scientists who helped pioneer artificial intelligence are warning that countries must create a global system of oversight to check the potentially grave risks posed by the fast-developing technology.
Javier E

After Explosion, Texas Remains Wary of Regulation - NYTimes.com - 0 views

  • Five days after an explosion at a fertilizer plant leveled a wide swath of this town, Gov. Rick Perry tried to woo Illinois business officials by trumpeting his state’s low taxes and limited regulations. Asked about the disaster, Mr. Perry responded that more government intervention and increased spending on safety inspections would not have prevented what has become one of the nation’s worst industrial accidents in decades.
  • Even in West, last month’s devastating blast did little to shake local skepticism of government regulations. Tommy Muska, the mayor, echoed Governor Perry in the view that tougher zoning or fire safety rules would not have saved his town. “Monday morning quarterbacking,” he said.
  • Texas has always prided itself on its free-market posture. It is the only state that does not require companies to contribute to workers’ compensation coverage. It boasts the largest city in the country, Houston, with no zoning laws. It does not have a state fire code, and it prohibits smaller counties from having such codes. Some Texas counties even cite the lack of local fire codes as a reason for companies to move there.
  • But Texas has also had the nation’s highest number of workplace fatalities — more than 400 annually — for much of the past decade. Fires and explosions at Texas’ more than 1,300 chemical and industrial plants have cost as much in property damage as those in all the other states combined for the five years ending in May 2012. Compared with Illinois, which has the nation’s second-largest number of high-risk sites, more than 950, but tighter fire and safety rules, Texas had more than three times the number of accidents, four times the number of injuries and deaths, and 300 times the property damage costs.
  • “The Wild West approach to protecting public health and safety is what you get when you give companies too much economic freedom and not enough responsibility and accountability,”
  • That is particularly true in the countryside. “In rural Texas,” said Stephen T. Hendrick, the engineer for McLennan County, where the explosion occurred, “no one votes for regulations.”
  • This antiregulatory zeal is an outgrowth of a broader Texas ideology: that government should get out of people’s lives, a deeply held belief throughout the state that touches many aspects of life here, including its gun culture, its Republican-dominated Legislature and its cowboy past and present.
  • But federal officials and fire safety experts contend that fire codes and other requirements would probably have made a difference. A fire code would have required frequent inspections by fire marshals who might have prohibited the plant’s owner from storing the fertilizer just hundreds of feet from a school, a hospital, a railroad and other public buildings, they say. A fire code also would probably have mandated sprinklers and forbidden the storage of ammonium nitrate near combustible materials. (Investigators say the fertilizer was stored in a largely wooden building near piles of seed, one possible factor in the fire.)
  • This week, Mr. Perry’s press office announced that Texas had been ranked for the ninth year in a row as the country’s most pro-business state, according to a survey by the magazine Chief Executive. Texas accounted for nearly a third of all private sector jobs created over the last decade, according to federal labor data. And under Mr. Perry, it has given businesses more tax breaks and incentives than any other state, roughly $19 billion a year.
  • “Businesses can come down here and do pretty much what they want to,” Mr. Burka said. “That is the Texas way.”
Javier E

When Growth Outpaces Happiness - NYTimes.com - 1 views

  • As the recent riots at a Foxconn factory in northern China demonstrate, growth alone, even at sustained, spectacular rates, has not produced the kind of life satisfaction crucial to a stable society — an experience that shows how critically important good jobs and a strong social safety net are to people’s happiness.
  • Starting in 1990, as China moved to a free-market economy, real per-capita consumption and gross domestic product doubled, then doubled again. Most households now have at least one color TV. Refrigerators and washing machines — rare before 1990 — are common in cities.
  • most policy makers would confidently predict that a fourfold increase in a people’s material living standard would make them considerably happier.
  • What explains the “U” at a time of unprecedented economic growth?
  • Although the rate of layoffs dropped considerably in the early 2000s and unemployment started falling, Chinese people’s concerns about jobs and safety-net benefits persisted.
  • Before free-market reforms kicked in, most urban Chinese workers enjoyed what was called an “iron rice bowl”: permanent jobs and an extensive employer-provided safety net, which included subsidized food, housing, health care, child care, pensions and jobs for grown children. Life satisfaction during this period among urban Chinese, despite their much lower levels of income, was almost as high as in the developed world.
  • The transition to a more private economy in the 1990s abruptly overturned the iron rice bowl.
  • Yet there is no evidence that the Chinese people are, on average, any happier, according to an analysis of survey data that colleagues and I conducted. If anything, they are less satisfied than in 1990, and the burden of decreasing satisfaction has fallen hardest on the bottom third of the population in wealth. Satisfaction among Chinese in even the upper third has risen only moderately.
  • Evidence of a fraying social safety net is indicated by the decline in self-reported health among the bottom third: those reporting that their health was good or very good dropped to 44 percent, compared with 54 percent in 1990.
  • China’s transition has been similar in several respects to the transitions of countries in Central and Eastern Europe, for which we have similar life-satisfaction data.
  • Foxconn: this is the company (largest of its kind in the world) that had to install nets around a factory not too long ago to prevent repeated suicides by workers.
carolinehayter

How Belarus 'hijacking' will affect flights in Europe | CNN Travel - 0 views

  • In the week since Ryanair flight FR4978 from Athens to Vilnius was forcibly diverted to Minsk, travel in Europe already looks very different.
  • The directive, issued Wednesday by the European Union Aviation Safety Agency (EASA) in the form of a Safety Information Bulletin (SIB), called on all airlines "with their principal place of business in one of the EASA member states" to avoid Belarusian airspace. They advised that all other airlines should do the same, wherever they are based.
  • There were other implications, with Russia -- an ally of Belarus -- taking several days to grant Air France and Austrian Airlines flights to Moscow the clearance to use Russian airspace to divert around Belarus, prompting cancellations.
  • So how big a deal is this? Huge, say industry insiders -- big enough to have already shaken the aviation map of Europe, and big enough to have knock-on effects beyond the continent -- particularly if the situation escalates further.
  • If it did, passengers could see their flight times increased, a rise in fares across the networks, and even long-haul, nonstop flights needing to make refueling stops along the way.
  • "Now that they're not flying over its airspace, that's good -- governments have acted swiftly to restore confidence -- but I think it'll throw up questions for consumers over who they're flying with, which points they're flying between and how they're flying between them. If you were flying from Athens to Lithuania, or in the region around Russia, you might think twice.
  • The events, described by some governments as a state-sponsored hijacking, have "inevitably redrawn the aviation map of Europe," says one airline industry insider, who wanted to remain anonymous due to the risk of being identified.
  • But the issues don't just end there, they say. "The problem you have is the challenge around where you draw the new map -- that whole region has restrictions." "There are already restrictions flying over Ukraine."
  • "So Belarus had seen a huge increase in traffic because people were going around Ukraine."
  • As well as the increased fuel burn and longer flight times, he says, any unplanned stops can send crews over their allotted hours. "They might need to be swapped out, with a new crew being flown in. There are significant consequences to this sort of disruption," he says.
  • "There's a big lump of airspace which is strategically important to airlines and is now being denied them -- and there'll be a knock-on effect on flight times, cost, and environmental impact."
  • Everyone in the industry agrees that if diversions become a long-term thing, it'll be a headache.
  • "Airlines will either have to go very far north into the polar region, or to go down to the Gulf States -- but then most European carriers would avoid flying over Iraq and Iran. So, they'd probably go over Egypt, Saudi Arabia and across India.
  • The rules and regulations around airline safety are "absolutely sacrosanct," he says -- and have been enshrined in international law since 1944, in the Chicago Convention, which established freedom of the skies after the Second World War.
  • "This is the first time that a mechanism designed to ensure the safety and security of air travel has allegedly been used for political ends, and what's also worrying is that the political response to that has also been to use another mechanism designed to ensure flight security for political ends.
  • If you start playing politics with flight safety, you're setting out on a slippery slope, he argues.
  • "This symbolizes something really big -- since the Chicago Convention, freedom of the skies has been laid out. It's supposed to be universally accepted that airlines have a right to overfly a foreign country without being forced to land," they say.
  • "Clearly that has been violated. What Belarus is said to have done is really horrible -- and if it turns out to be a precedent, it's even worse. It's a terrible signifier of what could happen."
  • In short? "Everyone is worried about what this incident means for the future."
Javier E

Why You Can Dine Indoors but Can't Have Thanksgiving - The Atlantic - 0 views

  • Because the state and city had reopened restaurants, Josh, who asked to be identified only by his first name to protect his privacy, assumed that local health officials had figured out a patchwork of precautions that would make indoor dining safe.
  • They were listening to the people they were told to listen to—New York Governor Andrew Cuomo recently released a book about how to control the pandemic—and following all the rules.
  • Josh was irritated, but not because of me. If indoor dining couldn’t be made safe, he wondered, why were people being encouraged to do it? Why were temperature checks being required if they actually weren’t useful? Why make rules that don’t keep people safe?
  • Before you can dig into how cities and states are handling their coronavirus response, you have to deal with the elephant in the hospital room: Almost all of this would be simpler if the Trump administration and its allies had, at any point since January, behaved responsibly.
  • In the country’s new devastating wave of infections, a perilous gap exists between the realities of transmission and the rules implemented to prevent it. “When health authorities present one rule after another without clear, science-based substantiation, their advice ends up seeming arbitrary and capricious,”
  • “That erodes public trust and makes it harder to implement rules that do make sense.” Experts know what has to be done to keep people safe, but confusing policies and tangled messages from some of the country’s most celebrated local leaders are setting people up to die.
  • Across America, this type of honest confusion abounds. While a misinformation-gorged segment of the population rejects the expert consensus on virus safety outright, so many other people, like Josh, are trying to do everything right, but run afoul of science without realizing it.
  • Early federal financial-aid programs could have been renewed and expanded as the pandemic worsened. Centrally coordinated testing and contact-tracing strategies could have been implemented. Reliable, data-based federal guidelines for what kinds of local restrictions to implement and when could have been developed.
  • The country could have had a national mask mandate. Donald Trump and his congressional allies could have governed instead of spending most of the year urging people to violate emergency orders and “liberate” their states from basic safety protocols.
  • But that’s not the country Americans live in. Responding to this national disaster has been left to governors, mayors, and city councils, basically since day one
  • When places including New York, California, and Massachusetts first faced surging outbreaks, they implemented stringent safety restrictions—shelter-in-place orders, mask mandates, indoor-dining and bar closures. The strategy worked: Transmission decreased, and businesses reopened. But as people ventured out and cases began to rise again, many of those same local governments have warned residents of the need to hunker down and avoid holiday gatherings, yet haven’t reinstated the safety mandates that saved lives six months ago
  • Even in cities and states that have had some success controlling the pandemic, a discrepancy between rules and reality has become its own kind of problem.
  • it’s a lot of wasted time and money.” Instead of centralizing the development of infrastructure and methods to deal with the pandemic, states with significantly different financial resources and political climates have all built their own information environments and have total freedom to interpret their data as they please.
  • beneath this contradiction lies a fundamental conflict that state and local leaders have been forced to navigate for the better part of a year. Amid the pandemic, the people they govern would generally be better served if they got to stay home, stay safe, and not worry about their bills. To govern, though, leaders also need to placate the other centers of power in American communities: local business associations, real-estate developers, and industry interest groups
  • The best way to resolve this conflict would probably be to bail out workers and business owners. But to do that at a state level, governors need cash on hand; currently, most of them don’t have much. The federal government, which could help states in numerous ways, has done little to fill state coffers, and has let many of its most effective direct-aid programs expire without renewal.
  • If you make people safe and comfortable at home, it might be harder to make them risk their lives for minimum wage at McDonald’s during a pandemic.
  • However effective these kinds of robust monetary programs may be at keeping people fed, housed, and safe, they are generally not in line with the larger project of the American political establishment, which favors bolstering “job creators” instead of directly helping those who might end up working those jobs
  • Why can’t a governor or mayor just be honest? There’s no help coming from the Trump administration, the local coffers are bare, and as a result, concessions are being made to business owners who want workers in restaurants and employees in offices in order to white-knuckle it for as long as possible and with as many jobs intact as possible, even if hospitals start to fill up again. Saying so wouldn’t change the truth, but it would better equip people to evaluate their own safety in their daily life, and make better choices because of it.
  • Kirk Sell stopped me short. “Do you think it might be the end of their career, though?” she asked. “Probably.”
  • With people out of work and small businesses set up to fail en masse, America has landed on its current contradiction: Tell people it’s safe to return to bars and restaurants and spend money inside while following some often useless restrictions, but also tell them it’s unsafe to gather in their home, where nothing is for sale.
  • Transparency, Kirk Sell told me, would go a long way toward helping people evaluate new restrictions and the quality and intentions of their local leadership. “People aren’t sheep,” she said. “People act rationally with the facts that they have, but you have to provide an understanding of why these decisions are being made, and what kind of factors are being considered.”
Javier E

Mini Nuclear Reactors Offer Promise of Cheaper, Clean Power - WSJ - 0 views

  • Next-generation nuclear must overcome public wariness of the technology engendered by the terrifying mishaps at Three Mile Island, Chernobyl and, most recently, Fukushima. Then there is the challenge of making a compelling case for nuclear power as the cost of electricity from natural gas, wind and solar is plunging.
  • Rather than offering up SMRs as a replacement for renewables, proponents of the devices say they can play a complementary role in the smart grid of the future—replacing coal- and gas-fired plants and operating alongside wind and solar
  • Most utilities rely on a variety of electricity sources, with differing costs, emissions and capacity to provide the constant flow that power grids need for stability, says Tom Mundy, chief technology officer at SMR developer NuScale Power LLC. “Our technology is a great complement to renewable power systems,”
  • The U.S. government is lending its support to SMR development. In September, the Nuclear Regulatory Commission for the first time issued a final safety evaluation report on an SMR—a critical step before a design can be approved—to NuScale
  • is developing its first commercial SMR for utilities in Utah and promising power by the end of the decade.
  • the Energy Department awarded $210 million to 10 projects to develop technologies for SMRs and beyond, as part of its Advanced Reactor Demonstration Program. The agency had already awarded $400 million to various projects since 2014 “to accelerate the development and deployment of SMRs,
  • Potential buyers range from U.S. utilities trying to phase out coal-fired generators to Eastern European countries seeking energy independence.
  • GE’s second offering, a system now in development with nuclear startup TerraPower LLC, replaces water with molten salt, similar to what’s used in some advanced solar-power arrays. Dubbed Natrium, the system runs hotter than water-cooled reactors but at lower pressure and with passive cooling, which eliminates piping and electrical systems while improving safety, according to TerraPower CEO Chris Levesque.
  • “When you have a really elegant design, you can get multiple benefits working together,” Mr. Levesque says. TerraPower, established by investors including Bill Gates, received $80 million of the Energy Department funding for Natrium in October.
  • Greenpeace, the Union of Concerned Scientists and other advocacy groups argue that nuclear power remains a dangerous technological dead-end that causes as many problems as it solves.
  • Traditional reactors grew over time to achieve greater efficiencies of scale and lower cost per kilowatt-hour because power output rose faster than construction and operating costs. “There’s no reason that’s changed,” he says, dismissing SMR makers’ promises of lower costs and increased safety
  • Many proposed SMR expense reductions, such as less shielding, could ultimately increase their danger, while the combined use of several modules could create new safety risks like radioactive contamination that negate gains in individual modules, he says.
  • Mr. Ramana also says that the technological advances like 3-D printing and digital manufacturing that make SMRs possible are doing even more to improve green renewables. “It’s a kind of treadmill race, where one treadmill is going much faster.”
  • although SMRs have lower upfront capital cost per unit, their economic competitiveness is still to be proven.”