Group items tagged carbon

lucieperloff

Danish energy fund to lead massive green hydrogen project in Spain

  • On Tuesday, Copenhagen Infrastructure Partners announced details of a partnership with Spanish companies Naturgy, Enagás and Fertiberia. Vestas, the Danish wind turbine manufacturer, is also involved.
  • A pipeline will link Aragon with Valencia in the east of Spain, sending the hydrogen to a green ammonia facility. CIP said this ammonia would then be “upgraded” into fertilizer.
  • Hydrogen has a diverse range of applications and can be deployed across a wide range of industries. It can be produced in a number of ways; one method is electrolysis, in which an electric current splits water into oxygen and hydrogen (the balanced reaction is noted after this list).
  • The scale of the overall development is considerable. “Once fully implemented, Catalina will produce enough green hydrogen to supply 30% of Spain’s current hydrogen demand,” CIP said.
  • And in July 2021, a briefing from the World Energy Council said low-carbon hydrogen was not currently “cost-competitive with other energy supplies in most applications and locations.” It added that the situation was unlikely to change unless there was “significant support to bridge the price gap.”
  • For its part, the European Commission has laid out plans to install 40 GW of renewable hydrogen electrolyzer capacity in the European Union by the year 2030.
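  For reference, the water-splitting reaction behind electrolysis balances as 2 H₂O → 2 H₂ + O₂; the hydrogen counts as “green” when the electricity driving that reaction comes from renewable sources.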
Javier E

Opinion | Children in the Hands of God and Climate Change - The New York Times

  • Ezra Klein devoted his weekend column to arguing for an optimistic, life-affirming response to the challenges of rising temperatures.
  • I endorse my colleague’s argument unreservedly, especially his reasonable historical perspective on how the risks of a hotter future compare to the far more impoverished and brutal straits in which our ancestors chose life for their children and, ultimately, for us
  • In worrying about hypothetical kids faring badly under climate change, the secular imagination is letting itself be steered toward the harsh analysis of Blaise Pascal: “Let us imagine a number of men in chains and all condemned to death, where some are killed each day in the sight of the others, and those who remain see their own fate in that of their fellows and wait their turn, looking at each other sorrowfully and without hope. It is an image of the condition of men.”
  • Why this, why now?
  • One answer is simple misapprehension: People steeped in the most alarmist forms of activism and argument may believe, wrongly, that we’re on track for the imminent collapse of human civilization or the outright extinction of the human race.
  • Another answer is ideological: The ideas of white and Western guilt are particularly important to contemporary progressivism, and in certain visions of ecological economy, removing one’s potential kids from the carbon-emitting equation amounts to a kind of eco-reparations.
  • I still suspect the fear of suffering and dying per se is more important than the kind of suffering and death being envisioned — that it’s the general idea of bearing a child fated to extinction that’s most frightening, not the specific perils of climate change.
  • the psychological roots of the procreation-amid-climate-change anxiety.
  • Or, rather, an image of men in a godless universe.
  • the problem of meaning in a purposeless cosmos clearly hangs over the more secularized precincts of our society, lending surprising resilience to all kinds of spiritual impulses and ideas but also probably contributing to certain forms of existential dread.
  • to the extent that every child deliberately conceived is a direct wager against Pascal’s dire analysis, it would make sense that under such shadows, anxieties about the ethics of childbearing would be particularly acute.
  • Against these anxieties, my colleague’s column urges a belief in a future where human agency overcomes existential threats and ushers in a “welcoming” and even “thrilling” world. This is a welcome admonition; I believe in those possibilities myself.
  • But the promise of a purposive, divinely created universe — in which, I would stress, it remains more than reasonable to believe — is that life is worth living and worth conceiving even if the worst happens, the crisis comes, the hope of progress fails.
  • The child who lives to see the green future is infinitely valuable; so is the child who lives to see the apocalypse. For us, there is only the duty to give that child its chance to join the story; its destiny belongs to God.
lilyrashkind

MIT Engineers Create A Lightweight Material That Is Stronger Than Steel | Kids News Article

  • Polymers, which include all plastics, are made up of chains of building blocks called monomers. They are strung together in repetitive patterns. While the monomer chains are strong, the gaps between them are weak and porous. This is the reason you are sometimes able to smell food stored inside ziplock bags.
  • The researchers assert that the flat sheets of polymer can be stacked together to make strong, ultra-light building materials that could replace steel. Since 2DPA-1 is cheap to manufacture in large quantities, it would substantially reduce the cost of building different structures. It would also be better for the environment because steel production is responsible for about 8 percent of global carbon dioxide emissions.
  • The MIT scientists, who published their findings in the journal Nature on February 3, 2022, did not test to see if 2DPA-1 can be recycled. However, they believe the stronger, durable material could someday replace disposable containers. This would help reduce plastic pollution.
Javier E

The Contradictions of Sam Altman, the AI Crusader Behind ChatGPT - WSJ

  • Mr. Altman said he fears what could happen if AI is rolled out into society recklessly. He co-founded OpenAI eight years ago as a research nonprofit, arguing that it’s uniquely dangerous to have profits be the main driver of developing powerful AI models.
  • He is so wary of profit as an incentive in AI development that he has taken no direct financial stake in the business he built, he said—an anomaly in Silicon Valley, where founders of successful startups typically get rich off their equity. 
  • His goal, he said, is to forge a new world order in which machines free people to pursue more creative work. In his vision, universal basic income—the concept of a cash stipend for everyone, no strings attached—helps compensate for jobs replaced by AI. Mr. Altman even thinks that humanity will love AI so much that an advanced chatbot could represent “an extension of your will.”
  • The Tesla Inc. CEO tweeted in February that OpenAI had been founded as an open-source nonprofit “to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.”
  • Backers say his brand of social-minded capitalism makes him the ideal person to lead OpenAI. Others, including some who’ve worked for him, say he’s too commercially minded and immersed in Silicon Valley thinking to lead a technological revolution that is already reshaping business and social life. 
  • In the long run, he said, he wants to set up a global governance structure that would oversee decisions about the future of AI and gradually reduce the power OpenAI’s executive team has over its technology. 
  • OpenAI researchers soon concluded that the most promising path to achieve artificial general intelligence rested in large language models, or computer programs that mimic the way humans read and write. Such models were trained on large volumes of text and required a massive amount of computing power that OpenAI wasn’t equipped to fund as a nonprofit, according to Mr. Altman. 
  • In its founding charter, OpenAI pledged to abandon its research efforts if another project came close to building AGI before it did. The goal, the company said, was to avoid a race toward building dangerous AI systems fueled by competition and instead prioritize the safety of humanity.
  • While running Y Combinator, Mr. Altman began to nurse a growing fear that large research labs like DeepMind, purchased by Google in 2014, were creating potentially dangerous AI technologies outside the public eye. Mr. Musk has voiced similar concerns of a dystopian world controlled by powerful AI machines. 
  • Messrs. Altman and Musk decided it was time to start their own lab. Both were part of a group that pledged $1 billion to the nonprofit, OpenAI Inc. 
  • Mr. Altman said he doesn’t necessarily need to be first to develop artificial general intelligence, a world long imagined by researchers and science-fiction writers where software isn’t just good at one specific task like generating text or images but can understand and learn as well or better than a human can. He instead said OpenAI’s ultimate mission is to build AGI, as it’s called, safely.
  • “We didn’t have a visceral sense of just how expensive this project was going to be,” he said. “We still don’t.”
  • Tensions also grew with Mr. Musk, who became frustrated with the slow progress and pushed for more control over the organization, people familiar with the matter said. 
  • OpenAI executives ended up reviving an unusual idea that had been floated earlier in the company’s history: creating a for-profit arm, OpenAI LP, that would report to the nonprofit parent. 
  • Reid Hoffman, a LinkedIn co-founder who advised OpenAI at the time and later served on the board, said the idea was to attract investors eager to make money from the commercial release of some OpenAI technology, accelerating OpenAI’s progress
  • “You want to be there first and you want to be setting the norms,” he said. “That’s part of the reason why speed is a moral and ethical thing here.”
  • The decision further alienated Mr. Musk, the people familiar with the matter said. He parted ways with OpenAI in February 2018. 
  • Mr. Musk announced his departure in a company all-hands, former employees who attended the meeting said. Mr. Musk explained that he thought he had a better chance at creating artificial general intelligence through Tesla, where he had access to greater resources, they said.
  • OpenAI said that it received about $130 million in contributions from the initial $1 billion pledge, but that further donations were no longer needed after the for-profit’s creation. Mr. Musk has tweeted that he donated around $100 million to OpenAI. 
  • Mr. Musk’s departure marked a turning point. Later that year, OpenAI leaders told employees that Mr. Altman was set to lead the company. He formally became CEO and helped complete the creation of the for-profit subsidiary in early 2019.
  • A young researcher questioned whether Mr. Musk had thought through the safety implications, the former employees said. Mr. Musk grew visibly frustrated and called the intern a “jackass,” leaving employees stunned, they said. It was the last time many of them would see Mr. Musk in person.  
  • In the meantime, Mr. Altman began hunting for investors. His break came at Allen & Co.’s annual conference in Sun Valley, Idaho in the summer of 2018, where he bumped into Satya Nadella, the Microsoft CEO, on a stairwell and pitched him on OpenAI. Mr. Nadella said he was intrigued. The conversations picked up that winter.
  • “I remember coming back to the team after and I was like, this is the only partner,” Mr. Altman said. “They get the safety stuff, they get artificial general intelligence. They have the capital, they have the ability to run the compute.”   
  • Mr. Altman disagreed. “The unusual thing about Microsoft as a partner is that it let us keep all the tenets that we think are important to our mission,” he said, including profit caps and the commitment to assist another project if it got to AGI first. 
  • Some employees still saw the deal as a Faustian bargain. 
  • OpenAI’s lead safety researcher, Dario Amodei, and his lieutenants feared the deal would allow Microsoft to sell products using powerful OpenAI technology before it was put through enough safety testing.
  • They felt that OpenAI’s technology was far from ready for a large release—let alone with one of the world’s largest software companies—worrying it could malfunction or be misused for harm in ways they couldn’t predict.  
  • Mr. Amodei also worried the deal would tether OpenAI’s ship to just one company—Microsoft—making it more difficult for OpenAI to stay true to its founding charter’s commitment to assist another project if it got to AGI first, the former employees said.
  • Microsoft initially invested $1 billion in OpenAI. While the deal gave OpenAI its needed money, it came with a hitch: exclusivity. OpenAI agreed to only use Microsoft’s giant computer servers, via its Azure cloud service, to train its AI models, and to give the tech giant the sole right to license OpenAI’s technology for future products.
  • In a recent investment deck, Anthropic said it was “committed to large-scale commercialization” to achieve the creation of safe AGI, and that it “fully committed” to a commercial approach in September. The company was founded as an AI safety and research company and said at the time that it might look to create commercial value from its products. 
  • Mr. Altman “has presided over a 180-degree pivot that seems to me to be only giving lip service to concern for humanity,” he said. 
  • “The deal completely undermines those tenets to which they secured nonprofit status,” said Gary Marcus, an emeritus professor of psychology and neural science at New York University who co-founded a machine-learning company
  • The cash turbocharged OpenAI’s progress, giving researchers access to the computing power needed to improve large language models, which were trained on billions of pages of publicly available text. OpenAI soon developed a more powerful language model called GPT-3 and then sold developers access to the technology in June 2020 through packaged lines of code known as application program interfaces, or APIs. 
  • Mr. Altman and Mr. Amodei clashed again over the release of the API, former employees said. Mr. Amodei wanted a more limited and staged release of the product to help reduce publicity and allow the safety team to conduct more testing on a smaller group of users, former employees said. 
  • Mr. Amodei left the company a few months later along with several others to found a rival AI lab called Anthropic. “They had a different opinion about how to best get to safe AGI than we did,” Mr. Altman said.
  • Anthropic has since received more than $300 million from Google this year and released its own AI chatbot called Claude in March, which is also available to developers through an API. 
  • Mr. Altman shared the contract with employees as it was being negotiated, hosting all-hands and office hours to allay concerns that the partnership contradicted OpenAI’s initial pledge to develop artificial intelligence outside the corporate world, the former employees said. 
  • In the three years after the initial deal, Microsoft invested a total of $3 billion in OpenAI, according to investor documents. 
  • More than one million users signed up for ChatGPT within five days of its November release, a speed that surprised even Mr. Altman. It followed the company’s introduction of DALL-E 2, which can generate sophisticated images from text prompts.
  • By February, it had reached 100 million users, according to analysts at UBS, the fastest pace by a consumer app in history to reach that mark.
  • Mr. Altman’s close associates praise his ability to balance OpenAI’s priorities. No one better navigates between the “Scylla of misplaced idealism” and the “Charybdis of myopic ambition,” Mr. Thiel said.
  • Mr. Altman said he delayed the release of the latest version of its model, GPT-4, from last year to March to run additional safety tests. Users had reported some disturbing experiences with the model, integrated into Bing, where the software hallucinated—meaning it made up answers to questions it didn’t know. It issued ominous warnings and made threats. 
  • “The way to get it right is to have people engage with it, explore these systems, study them, to learn how to make them safe,” Mr. Altman said.
  • After Microsoft’s initial investment is paid back, it would capture 49% of OpenAI’s profits until the profit cap, up from 21% under prior arrangements, the documents show. OpenAI Inc., the nonprofit parent, would get the rest.
  • He has put almost all his liquid wealth in recent years in two companies. He has put $375 million into Helion Energy, which is seeking to create carbon-free energy from nuclear fusion and is close to creating “legitimate net-gain energy in a real demo,” Mr. Altman said.
  • He has also put $180 million into Retro, which aims to add 10 years to the human lifespan through “cellular reprogramming, plasma-inspired therapeutics and autophagy,” or the reuse of old and damaged cell parts, according to the company. 
  • He noted how much easier these problems are, morally, than AI. “If you’re making nuclear fusion, it’s all upside. It’s just good,” he said. “If you’re making AI, it is potentially very good, potentially very terrible.” 
Javier E

Opinion | I surrender. A major economic and social crisis seems inevitable. - The Washi...

  • On the list of words in danger of cheapening from overuse — think “focus,” “iconic,” “existential,” you have your own favorites — “crisis” must rank near the top
  • A host of prognosticators, coming from diverse disciplinary directions, seems to think something truly worthy of the term is coming. They foresee cataclysmic economic and social change dead ahead, and they align closely regarding the timing of the crash’s arrival
  • Then there’s that little matter of our unconscionable and unpayable national debt, current and committed
  • Looking through a political lens, James Piereson in “Shattered Consensus” observes a collapse of the postwar understanding of government’s role, namely to promote full employment and to police a disorderly world. He expects a “fourth revolution” around the end of this decade, following the Jeffersonian upheaval of 1800, the Civil War and the New Deal. Such a revolution, he writes, is required or else “the polity will begin to disintegrate for lack of fundamental agreement.”
  • In “The Fourth Turning Is Here,” published this summer, demographic historian Neil Howe arrived at a similar conclusion. His view springs from a conviction that human history follows highly predictable cycles based on the “saeculum,” or typical human life span of 80 years or so, and the differing experiences of four generations within that span. The next “turning,” he predicts, is due in about 2033
  • It will resemble those in the 1760s, 1850s and 1920s, Howe writes, that produced “bone-jarring Crises so monumental that, by their end, American society emerged wholly transformed.”
  • Others see disaster’s origins in economics
  • Failure to resume strong growth and to produce greater economic equality will bring forth authoritarian regimes both left and right. This year, in his book “The Crisis of Democratic Capitalism,” Financial Times editor Martin Wolf advocated for an array of reforms, including carbon taxation, a presumption against horizontal mergers, a virtual ban on corporate share buybacks, compulsory voting, and extra votes for younger citizens and parents of children. He fears that, absent such measures, “the light of political and personal freedom might once again disappear from the world.”
  • Unsettling as these forecasts are, the even more troubling thought is that maybe a true crisis is not just inevitable but also necessary to future national success and social cohesion.
  • Now, I’m grudgingly ready to surrender and accept that the cliché must be true: Washington will not face up to its duty except in a genuine crisis. Then and only then will we, as some would say, focus on the existential threats to our iconic institutions.
  • Now, market guru John Mauldin has begun forecasting a “great reset” when these unsustainable bills cannot be paid, when “the economy comes crashing down around our ears.” Writing in August, he said he sees this happening “roughly 7-10 years from now.”
  • Encouragingly, if vaguely, most of these seers retain their optimism. Piereson closes by imagining “a new order on the foundations of the old.” Confessing that he doesn’t “know exactly how it will work,” Mauldin expects us to “muddle through” somehow.
  • Howe, because he sees his sweeping, socially driven generational cycles recurring all the way back to the Greeks, is the most cavalier. Although “the old American republic is collapsing,” he says, we will soon pass through a “great gate in history,” resolve our challenges and emerge with a “new collective identity.”
  • Paradoxically, these ominous projections can help worrywarts like me move through what might be called the stages of political grief.
  • A decade ago, an optimist could tell himself that a democratically mature people could summon the will or the leaders to stop plundering its children’s futures, and to reconcile or at least agree to tolerate sincerely held cultural disagreements.
  • For a while after that, it seemed plausible to hope for incremental reforms that would enable the keeping of most of our safety-net promises, and for a cooling or exhaustion of our poisonous polarization.
  • Bowles called what’s coming “the most predictable economic crisis” — there’s that word again, aptly applied — “in history.” And that was many trillions of borrowing ago.
  • So maybe we might as well get on with it, and hope that we at least “muddle through.” I’ve arrived at the final stage: Crisis? Ready when you are.