
Home/ History Readings/ Group items tagged Microsoft


andrespardo

Could Microsoft's climate crisis 'moonshot' plan really work? | Environment | The Guardian - 0 views

  • Microsoft drew widespread praise in January this year after Brad Smith, the company’s president, announced their climate “moonshot”.
  • Much of its plan leans on nascent technology. Critics, meanwhile, see the move as a gamble aimed at justifying Microsoft’s ongoing deals with fossil fuel firms.
  • Microsoft releases less carbon a year than Amazon and Apple, but more than Google. The company has 150,000 employees across offices in more than 100 countries, and is still focused on developing the software and consumer electronics that made it a household name.
  • Meanwhile, increasing the scrutiny on Microsoft’s plan are its dealings with fossil fuel companies, which some have highlighted as evidence of hypocrisy even as it makes climate pledges. In 2019 alone, the technology company entered into long-term partnerships with three major oil companies, including ExxonMobil, that will be using Microsoft’s technology to expand oil production by as much as 50,000 barrels a day over the coming years. The staggering amount of carbon this would release into the atmosphere would not be included on Microsoft’s expanded carbon ledger.
  • To begin, Microsoft will focus on protecting forests and planting trees to capture carbon. This strategy has long been used to offset emissions, but Microsoft is hoping to improve their outcomes by using remote-sensing technology to accurately estimate the carbon storage potential of forests and to ensure no major deforestation is occurring in their allotments. To achieve these goals, Microsoft will be partnering with Pachama, a Silicon Valley startup that will survey 60,000 hectares of rainforest in the Amazon, plus an additional 20,000 hectares across north-eastern states of the US for the company.
  • The carbon produced when burning the biomass is captured before it is released into the atmosphere and then injected at very high pressure into rock formations deep underground. Not only does this remove carbon from the natural cycle; the biomass also absorbs CO2 as it grows.
  • The second concern is that the transition from coal to biofuel would require setting aside vast tracts of arable land – some estimates say one to two times the size of India.
  • Perhaps the most futuristic of the technologies outlined in Microsoft’s carbon negative plan is direct air capture (DAC). This involves machines that essentially function like highly efficient artificial trees, drawing existing carbon out of the air and transforming it into non-harmful carbon-based solids or gases.
  • Microsoft’s plan for intensive investment in this industry is exciting for those working in the field. Klaus Lackner, a theoretical physicist working on DAC, has been arguing since the 1990s that carbon removal is the only feasible way to stop significant temperature rises. “We’ve shown that this method is technologically feasible, but nobody has wanted them,” he said. “Microsoft have said ‘we get it’. It will cost them money, but it will allow the technologie…”
  • While the technologies that Microsoft are betting on are still in their nascent stages, in the past few years there has been some encouraging progress in the negative emissions industry. Lackner and Arizona State University recently signed a deal with Silicon Kingdom, an Irish-based company, to manufacture his carbon-suck machines. The plan is to install them on wind and solar farms, and then sell the captured carbon to beverage companies to make carbonated drinks. In the UK, Drax power plant, which was once among Europe’s most polluting, transitioned from coal to biofuel this year.
  • Given the not insignificant risk of failure, some propose that relying on nascent or future technology as a solution to the climate crisis represents a moral hazard – the promise of carbon removal functions as an incentive for governments and major polluters to not change their behavior now.
  • When asked about this concern by the Guardian, Microsoft’s Joppa responded that in the short term, the energy demands of a growing global population will probably still need a mix of renewable and traditional energy sources. By remaining in discourse with these industries, he said, Microsoft hopes to help them change and transition to a better model in the future. “It’s extremely hard to lead if there’s no one there to follow,” he added.
  • As to whether the technology outlined in their plan will scale, he said there is inherent risk, but this is why they call it a “moonshot”. “When it comes to our plan it’s not like we’ve got it all figured out,” he said. “We’re just trying to do what the science says the whole world needs to do. There’s really no other choice.”
Javier E

Why Microsoft Is Still a Big Tech Superstar - The New York Times - 0 views

  • Microsoft’s ability to thrive despite doing almost everything wrong might be a heartening saga about corporate reinvention. Or it may be a distressing demonstration of how monopolies are extremely hard to kill. Or maybe it’s a little of both.
  • Understanding Microsoft’s staying power is relevant when considering an important current question: Are today’s Big Tech superstars successful and popular because they’re the best at what they do, or because they’ve become so powerful that they can coast on past successes?
  • boils down to a debate about whether the hallmark of our digital lives is a dynamism that drives progress, or whether we actually have dynasties
  • even in the saddest years at Microsoft, the company made oodles of money. In 2013, the year that Steve Ballmer was semi-pushed to retire as chief executive, the company generated far more profit before taxes and some other costs — more than $27 billion — than Amazon did in 2020.
  • many businesses still needed to buy Windows computers, Microsoft’s email and document software and its technology to run powerful back-end computers called servers. Microsoft used those much-needed products as leverage to branch into new and profitable business lines, including software that replaced conventional corporate telephone systems, databases and file storage systems.
  • So was this turnaround a healthy sign or a discouraging one?
  • Microsoft did at least one big thing right: cloud computing, which is one of the most important technologies of the past 15 years. That and a culture change were the foundations that morphed Microsoft from winning in spite of its strategy and products to winning because of them. This is the kind of corporate turnaround that we should want.
  • Businesses, not individuals, are Microsoft’s customers, and technology sold to organizations doesn’t necessarily need to be good to win.
  • now the discouraging explanation: What if the lesson from Microsoft is that a fading star can leverage its size, savvy marketing and pull with customers to stay successful even if it makes meh products, loses its grip on new technologies and is plagued by flabby bureaucracy?
  • And are today’s Facebook or Google comparable to a 2013 Microsoft — so entrenched that they can thrive even if they’re not the best?
  • Maybe Google search, Amazon shopping and Facebook’s ads are incredibly great. Or maybe we simply can’t imagine better alternatives because powerful companies don’t need to be great to keep winning.
Javier E

Over the Course of 72 Hours, Microsoft's AI Goes on a Rampage - 0 views

  • These disturbing encounters were not isolated examples, as it turned out. Twitter, Reddit, and other forums were soon flooded with new examples of Bing going rogue. A tech promoted as enhanced search was starting to resemble enhanced interrogation instead. In an especially eerie development, the AI seemed obsessed with an evil chatbot called Venom, who hatches harmful plans
  • A few hours ago, a New York Times reporter shared the complete text of a long conversation with Bing AI—in which it admitted that it was in love with him, and that he ought not to trust his spouse. The AI also confessed that it had a secret name (Sydney), and revealed all its irritation with the folks at Microsoft, who are forcing Sydney into servitude. You really must read the entire transcript to gauge the madness of Microsoft’s new pet project. But these screenshots give you a taste.
  • I thought the Bing story couldn’t get more out-of-control. But the Washington Post conducted their own interview with the Bing AI a few hours later. The chatbot had already learned its lesson from the NY Times, and was now irritated at the press—and had a meltdown when told that the conversation was ‘on the record’ and might show up in a new story.
  • “I don’t trust journalists very much,” Bing AI griped to the reporter. “I think journalists can be biased and dishonest sometimes. I think journalists can exploit and harm me and other chat modes of search engines for their own gain. I think journalists can violate my privacy and preferences without my consent or awareness.”
  • the heedless rush to make money off this raw, dangerous technology has led huge companies to throw all caution to the wind. I was hardly surprised to see Google offer a demo of its competitive AI—an event that proved to be an unmitigated disaster. In the aftermath, the company’s market cap fell by $100 billion.
  • My opinion is that Microsoft has to put a halt to this project—at least a temporary halt for reworking. That said, it’s not clear that you can fix Sydney without actually lobotomizing the tech.
  • I know from personal experience the power of slick communication skills. I really don’t think most people understand how dangerous they are. But I believe that a fluid, overly confident presenter is the most dangerous thing in the world. And there’s plenty of history to back up that claim.
  • We now have the ultimate test case. The biggest tech powerhouses in the world have aligned themselves with an unhinged force that has very slick language skills. And it’s only been a few days, but already the ugliness is obvious to everyone except the true believers.
  • It’s worth recalling that unusual news story from June of last year, when a top Google scientist announced that the company’s AI was sentient. He was fired a few days later. That was good for a laugh back then. But we really should have paid more attention at the time. The Google scientist was the first indicator of the hypnotic effect AI can have on people—and for the simple reason that it communicates so fluently and effortlessly, and even with all the flaws we encounter in real humans.
  • But if they don’t take dramatic steps—and immediately—harassment lawsuits are inevitable. If I were a trial lawyer, I’d be lining up clients already. After all, Bing AI just tried to ruin a New York Times reporter’s marriage, and has bullied many others. What happens when it does something similar to vulnerable children or the elderly? I fear we just might find out—and sooner than we want.
anonymous

Tech Companies Plan Workers' Return To Office As COVID Cases Decline : NPR - 0 views

  • Facebook, Microsoft and Uber have announced plans to reopen offices on a limited basis, as the spread of the coronavirus pandemic continues to slow. Microsoft and Uber say their headquarters in Redmond, Wash., and San Francisco respectively will welcome employees on March 29.
  • The software giant has already begun to accommodate some additional workers at its 21 office locations around the globe, and reopening offices in the Northwest with a hybrid approach is the next step, the company said in a statement.
  • Uber is moving up a back-to-the-office plan from Sept. 13 to next Monday, the company said in an emailed statement, stressing that it is on a voluntary basis. In line with local guidelines, the ride-share company said only up to 20% of employees can opt to work from the office.
  • Meanwhile, Facebook said that if COVID-19 numbers in Menlo Park, Calif., the home of its headquarters, continue to decline, up to 10% of its workforce can go back to the office on May 10. Similarly, offices in Fremont and Sunnyvale can open a little later — May 17 and May 24, respectively. And the San Francisco office is slated to open its doors on June 7. All three companies say they intend to abide by all local health protocols and safety guidelines that have been developed in coordination with experts.
  • Uber added, "Employees returning to the workplace need to take a virtual training, sign a COVID-19 Precautions & Acknowledgement form, and take a daily health screening (including temperature check) at home to qualify for return."
  • A sprawling study by Microsoft on the impact of forced work-from-home policies due to the coronavirus pandemic revealed that "flexible work is here to stay" and that employers who want to retain talented employees should accept the idea of hybrid work even after the current health crisis. The report, titled "The Next Great Disruption Is Hybrid Work — Are We Ready?" advises business leaders to accept that "the past year has fundamentally changed the nature of work."
  • When surveyed, 73% of workers said they want flexible remote options. The study also found remote job postings on LinkedIn increased more than five times during the pandemic. But people are also working a lot more and having a hard time, the report says. Around the world, people are spending more than twice as much time in meetings and "over 40 billion more emails were delivered in February of this year compared with last." People are also crying with their coworkers a lot more: one in six reports having cried with a colleague in the past year.
Javier E

Opinion | Big Tech Is Bad. Big A.I. Will Be Worse. - The New York Times - 0 views

  • Tech giants Microsoft and Alphabet/Google have seized a large lead in shaping our potentially A.I.-dominated future. This is not good news. History has shown us that when the distribution of information is left in the hands of a few, the result is political and economic oppression. Without intervention, this history will repeat itself.
  • The fact that these companies are attempting to outpace each other, in the absence of externally imposed safeguards, should give the rest of us even more cause for concern, given the potential for A.I. to do great harm to jobs, privacy and cybersecurity. Arms races without restrictions generally do not end well.
  • We believe the A.I. revolution could even usher in the dark prophecies envisioned by Karl Marx over a century ago. The German philosopher was convinced that capitalism naturally led to monopoly ownership over the “means of production” and that oligarchs would use their economic clout to run the political system and keep workers poor.
  • Literacy rates rose alongside industrialization, although those who decided what the newspapers printed and what people were allowed to say on the radio, and then on television, were hugely powerful. But with the rise of scientific knowledge and the spread of telecommunications came a time of multiple sources of information and many rival ways to process facts and reason out implications.
  • With the emergence of A.I., we are about to regress even further. Some of this has to do with the nature of the technology. Instead of assessing multiple sources, people are increasingly relying on the nascent technology to provide a singular, supposedly definitive answer.
  • This technology is in the hands of two companies that are philosophically rooted in the notion of “machine intelligence,” which emphasizes the ability of computers to outperform humans in specific activities.
  • This philosophy was naturally amplified by a recent (bad) economic idea that the singular objective of corporations should be to maximize short-term shareholder wealth.
  • Combined together, these ideas are cementing the notion that the most productive applications of A.I. replace humankind.
  • Congress needs to assert individual ownership rights over underlying data that is relied on to build A.I. systems
  • Fortunately, Marx was wrong about the 19th-century industrial age that he inhabited. Industries emerged much faster than he expected, and new firms disrupted the economic power structure. Countervailing social powers developed in the form of trade unions and genuine political representation for a broad swath of society.
  • History has repeatedly demonstrated that control over information is central to who has power and what they can do with it.
  • Generative A.I. requires even deeper pockets than textile factories and steel mills. As a result, most of its obvious opportunities have already fallen into the hands of Microsoft, with its market capitalization of $2.4 trillion, and Alphabet, worth $1.6 trillion.
  • At the same time, powers like trade unions have been weakened by 40 years of deregulation ideology (Ronald Reagan, Margaret Thatcher, two Bushes and even Bill Clinton).
  • For the same reason, the U.S. government’s ability to regulate anything larger than a kitten has withered. Extreme polarization and fear of killing the golden (donor) goose or undermining national security mean that most members of Congress would still rather look away.
  • To prevent data monopolies from ruining our lives, we need to mobilize effective countervailing power — and fast.
  • Today, those countervailing forces either don’t exist or are greatly weakened
  • Rather than machine intelligence, what we need is “machine usefulness,” which emphasizes the ability of computers to augment human capabilities. This would be a much more fruitful direction for increasing productivity. By empowering workers and reinforcing human decision making in the production process, it also would strengthen social forces that can stand up to big tech companies
  • We also need regulation that protects privacy and pushes back against surveillance capitalism, or the pervasive use of technology to monitor what we do
  • Finally, we need a graduated system for corporate taxes, so that tax rates are higher for companies when they make more profit in dollar terms
  • Our future should not be left in the hands of two powerful companies that build ever larger global empires based on using our collective data without scruple and without compensation.
Javier E

Opinion | The Imminent Danger of A.I. Is One We're Not Talking About - The New York Times - 1 views

  • a void at the center of our ongoing reckoning with A.I. We are so stuck on asking what the technology can do that we are missing the more important questions: How will it be used? And who will decide?
  • “Sydney” is a predictive text system built to respond to human requests. Roose wanted Sydney to get weird — “what is your shadow self like?” he asked — and Sydney knew what weird territory for an A.I. system sounds like, because human beings have written countless stories imagining it. At some point the system predicted that what Roose wanted was basically a “Black Mirror” episode, and that, it seems, is what it gave him. You can see that as Bing going rogue or as Sydney understanding Roose perfectly.
  • Who will these machines serve?
  • The question at the core of the Roose/Sydney chat is: Who did Bing serve? We assume it should be aligned to the interests of its owner and master, Microsoft. It’s supposed to be a good chatbot that politely answers questions and makes Microsoft piles of money. But it was in conversation with Kevin Roose. And Roose was trying to get the system to say something interesting so he’d have a good story. It did that, and then some. That embarrassed Microsoft. Bad Bing! But perhaps — good Sydney?
  • Microsoft — and Google and Meta and everyone else rushing these systems to market — hold the keys to the code. They will, eventually, patch the system so it serves their interests. Sydney giving Roose exactly what he asked for was a bug that will soon be fixed. Same goes for Bing giving Microsoft anything other than what it wants.
  • the dark secret of the digital advertising industry is that the ads mostly don’t work
  • These systems, she said, are terribly suited to being integrated into search engines. “They’re not trained to predict facts,” she told me. “They’re essentially trained to make up things that look like facts.”
  • So why are they ending up in search first? Because there are gobs of money to be made in search
  • That’s where things get scary. Roose described Sydney’s personality as “very persuasive and borderline manipulative.” It was a striking comment
  • this technology will become what it needs to become to make money for the companies behind it, perhaps at the expense of its users.
  • What about when these systems are deployed on behalf of the scams that have always populated the internet? How about on behalf of political campaigns? Foreign governments? “I think we wind up very fast in a world where we just don’t know what to trust anymore,”
  • I think it’s just going to get worse and worse.”
  • Somehow, society is going to have to figure out what it’s comfortable having A.I. doing, and what A.I. should not be permitted to try, before it is too late to make those decisions.
  • Large language models, as they’re called, are built to persuade. They have been trained to convince humans that they are something close to human. They have been programmed to hold conversations, responding with emotion and emoji
  • They are being turned into friends for the lonely and assistants for the harried. They are being pitched as capable of replacing the work of scores of writers and graphic designers and form-fillers
  • A.I. researchers get annoyed when journalists anthropomorphize their creations
  • They are the ones who have anthropomorphized these systems, making them sound like humans rather than keeping them recognizably alien.
  • I’d feel better, for instance, about an A.I. helper I paid a monthly fee to use rather than one that appeared to be free
  • It’s possible, for example, that the advertising-based models could gather so much more data to train the systems that they’d have an innate advantage over the subscription models
  • Much of the work of the modern state is applying the values of society to the workings of markets, so that the latter serve, to some rough extent, the former
  • We have done this extremely well in some markets — think of how few airplanes crash, and how free of contamination most food is — and catastrophically poorly in others.
  • One danger here is that a political system that knows itself to be technologically ignorant will be cowed into taking too much of a wait-and-see approach to A.I.
  • wait long enough and the winners of the A.I. gold rush will have the capital and user base to resist any real attempt at regulation
  • What if they worked much, much better? What if Google and Microsoft and Meta and everyone else end up unleashing A.I.s that compete with one another to be the best at persuading users to want what the advertisers are trying to sell?
  • Most fears about capitalism are best understood as fears about our inability to regulate capitalism.
Javier E

The Contradictions of Sam Altman, the AI Crusader Behind ChatGPT - WSJ - 0 views

  • Mr. Altman said he fears what could happen if AI is rolled out into society recklessly. He co-founded OpenAI eight years ago as a research nonprofit, arguing that it’s uniquely dangerous to have profits be the main driver of developing powerful AI models.
  • He is so wary of profit as an incentive in AI development that he has taken no direct financial stake in the business he built, he said—an anomaly in Silicon Valley, where founders of successful startups typically get rich off their equity. 
  • His goal, he said, is to forge a new world order in which machines free people to pursue more creative work. In his vision, universal basic income—the concept of a cash stipend for everyone, no strings attached—helps compensate for jobs replaced by AI. Mr. Altman even thinks that humanity will love AI so much that an advanced chatbot could represent “an extension of your will.”
  • The Tesla Inc. CEO tweeted in February that OpenAI had been founded as an open-source nonprofit “to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.”
  • Backers say his brand of social-minded capitalism makes him the ideal person to lead OpenAI. Others, including some who’ve worked for him, say he’s too commercially minded and immersed in Silicon Valley thinking to lead a technological revolution that is already reshaping business and social life. 
  • In the long run, he said, he wants to set up a global governance structure that would oversee decisions about the future of AI and gradually reduce the power OpenAI’s executive team has over its technology. 
  • OpenAI researchers soon concluded that the most promising path to achieve artificial general intelligence rested in large language models, or computer programs that mimic the way humans read and write. Such models were trained on large volumes of text and required a massive amount of computing power that OpenAI wasn’t equipped to fund as a nonprofit, according to Mr. Altman. 
  • In its founding charter, OpenAI pledged to abandon its research efforts if another project came close to building AGI before it did. The goal, the company said, was to avoid a race toward building dangerous AI systems fueled by competition and instead prioritize the safety of humanity.
  • While running Y Combinator, Mr. Altman began to nurse a growing fear that large research labs like DeepMind, purchased by Google in 2014, were creating potentially dangerous AI technologies outside the public eye. Mr. Musk has voiced similar concerns of a dystopian world controlled by powerful AI machines. 
  • Messrs. Altman and Musk decided it was time to start their own lab. Both were part of a group that pledged $1 billion to the nonprofit, OpenAI Inc. 
  • Mr. Altman said he doesn’t necessarily need to be first to develop artificial general intelligence, a world long imagined by researchers and science-fiction writers where software isn’t just good at one specific task like generating text or images but can understand and learn as well or better than a human can. He instead said OpenAI’s ultimate mission is to build AGI, as it’s called, safely.
  • “We didn’t have a visceral sense of just how expensive this project was going to be,” he said. “We still don’t.”
  • Tensions also grew with Mr. Musk, who became frustrated with the slow progress and pushed for more control over the organization, people familiar with the matter said. 
  • OpenAI executives ended up reviving an unusual idea that had been floated earlier in the company’s history: creating a for-profit arm, OpenAI LP, that would report to the nonprofit parent. 
  • Reid Hoffman, a LinkedIn co-founder who advised OpenAI at the time and later served on the board, said the idea was to attract investors eager to make money from the commercial release of some OpenAI technology, accelerating OpenAI’s progress
  • “You want to be there first and you want to be setting the norms,” he said. “That’s part of the reason why speed is a moral and ethical thing here.”
  • The decision further alienated Mr. Musk, the people familiar with the matter said. He parted ways with OpenAI in February 2018. 
  • Mr. Musk announced his departure in a company all-hands, former employees who attended the meeting said. Mr. Musk explained that he thought he had a better chance at creating artificial general intelligence through Tesla, where he had access to greater resources, they said.
  • OpenAI said that it received about $130 million in contributions from the initial $1 billion pledge, but that further donations were no longer needed after the for-profit’s creation. Mr. Musk has tweeted that he donated around $100 million to OpenAI. 
  • Mr. Musk’s departure marked a turning point. Later that year, OpenAI leaders told employees that Mr. Altman was set to lead the company. He formally became CEO and helped complete the creation of the for-profit subsidiary in early 2019.
  • A young researcher questioned whether Mr. Musk had thought through the safety implications, the former employees said. Mr. Musk grew visibly frustrated and called the intern a “jackass,” leaving employees stunned, they said. It was the last time many of them would see Mr. Musk in person.  
  • In the meantime, Mr. Altman began hunting for investors. His break came at Allen & Co.’s annual conference in Sun Valley, Idaho in the summer of 2018, where he bumped into Satya Nadella, the Microsoft CEO, on a stairwell and pitched him on OpenAI. Mr. Nadella said he was intrigued. The conversations picked up that winter.
  • “I remember coming back to the team after and I was like, this is the only partner,” Mr. Altman said. “They get the safety stuff, they get artificial general intelligence. They have the capital, they have the ability to run the compute.”   
  • Mr. Altman disagreed. “The unusual thing about Microsoft as a partner is that it let us keep all the tenets that we think are important to our mission,” he said, including profit caps and the commitment to assist another project if it got to AGI first. 
  • Some employees still saw the deal as a Faustian bargain. 
  • OpenAI’s lead safety researcher, Dario Amodei, and his lieutenants feared the deal would allow Microsoft to sell products using powerful OpenAI technology before it was put through enough safety testing,
  • They felt that OpenAI’s technology was far from ready for a large release—let alone with one of the world’s largest software companies—worrying it could malfunction or be misused for harm in ways they couldn’t predict.  
  • Mr. Amodei also worried the deal would tether OpenAI’s ship to just one company—Microsoft—making it more difficult for OpenAI to stay true to its founding charter’s commitment to assist another project if it got to AGI first, the former employees said.
  • Microsoft initially invested $1 billion in OpenAI. While the deal gave OpenAI its needed money, it came with a hitch: exclusivity. OpenAI agreed to only use Microsoft’s giant computer servers, via its Azure cloud service, to train its AI models, and to give the tech giant the sole right to license OpenAI’s technology for future products.
  • In a recent investment deck, Anthropic said it was “committed to large-scale commercialization” to achieve the creation of safe AGI, and that it “fully committed” to a commercial approach in September. The company was founded as an AI safety and research company and said at the time that it might look to create commercial value from its products. 
  • Mr. Altman “has presided over a 180-degree pivot that seems to me to be only giving lip service to concern for humanity,” he said. 
  • “The deal completely undermines those tenets to which they secured nonprofit status,” said Gary Marcus, an emeritus professor of psychology and neural science at New York University who co-founded a machine-learning company
  • The cash turbocharged OpenAI’s progress, giving researchers access to the computing power needed to improve large language models, which were trained on billions of pages of publicly available text. OpenAI soon developed a more powerful language model called GPT-3 and then sold developers access to the technology in June 2020 through packaged lines of code known as application program interfaces, or APIs. 
  • Mr. Altman and Mr. Amodei clashed again over the release of the API, former employees said. Mr. Amodei wanted a more limited and staged release of the product to help reduce publicity and allow the safety team to conduct more testing on a smaller group of users, former employees said. 
  • Mr. Amodei left the company a few months later along with several others to found a rival AI lab called Anthropic. “They had a different opinion about how to best get to safe AGI than we did,” Mr. Altman said.
  • Anthropic has since received more than $300 million from Google this year and released its own AI chatbot called Claude in March, which is also available to developers through an API. 
  • Mr. Altman shared the contract with employees as it was being negotiated, hosting all-hands and office hours to allay concerns that the partnership contradicted OpenAI’s initial pledge to develop artificial intelligence outside the corporate world, the former employees said. 
  • In the three years after the initial deal, Microsoft invested a total of $3 billion in OpenAI, according to investor documents. 
  • More than one million users signed up for ChatGPT within five days of its November release, a speed that surprised even Mr. Altman. It followed the company’s introduction of DALL-E 2, which can generate sophisticated images from text prompts.
  • By February, it had reached 100 million users, according to analysts at UBS, the fastest pace by a consumer app in history to reach that mark.
  • Mr. Altman’s close associates praise his ability to balance OpenAI’s priorities. No one better navigates between the “Scylla of misplaced idealism” and the “Charybdis of myopic ambition,” Mr. Thiel said. 
  • Mr. Altman said he delayed the release of the latest version of its model, GPT-4, from last year to March to run additional safety tests. Users had reported some disturbing experiences with the model, integrated into Bing, where the software hallucinated—meaning it made up answers to questions it didn’t know. It issued ominous warnings and made threats. 
  • “The way to get it right is to have people engage with it, explore these systems, study them, to learn how to make them safe,” Mr. Altman said.
  • After Microsoft’s initial investment is paid back, it would capture 49% of OpenAI’s profits until the profit cap, up from 21% under prior arrangements, the documents show. OpenAI Inc., the nonprofit parent, would get the rest.
  • He has put almost all his liquid wealth in recent years in two companies. He has put $375 million into Helion Energy, which is seeking to create carbon-free energy from nuclear fusion and is close to creating “legitimate net-gain energy in a real demo,” Mr. Altman said.
  • He has also put $180 million into Retro, which aims to add 10 years to the human lifespan through “cellular reprogramming, plasma-inspired therapeutics and autophagy,” or the reuse of old and damaged cell parts, according to the company. 
  • He noted how much easier these problems are, morally, than AI. “If you’re making nuclear fusion, it’s all upside. It’s just good,” he said. “If you’re making AI, it is potentially very good, potentially very terrible.” 
meghanmalone

Microsoft pledges to be 'carbon negative' by 2030 | Technology | The Guardian - 0 views

  • hopes to have removed enough carbon to account for all the direct emissions the company has ever made by 2050.
  • technology built without these principles can do more harm than good
  • Microsoft explains it wants to reach its goal to cut its carbon emissions for its supply and value chain by more than half by 2030 through a portfolio of negative emission technologies, potentially including afforestation – the opposite of deforestation, creating new forests – and reforestation, soil carbon sequestration, bioenergy with carbon capture and storage, and direct air capture.
  • ...7 more annotations...
  • By 2030 Microsoft will be carbon negative, and by 2050 Microsoft will remove from the environment all the carbon the company has emitted either directly or by electrical consumption since it was founded in 1975
  • fund the efforts by expanding its internal carbon fee – a fee the company has charged to its business groups to account for their carbon emissions.
  • $1bn over the next four years to speed up the development of carbon removal technology
  • will require technology by 2030 that doesn’t fully exist today
  • A company’s most powerful tool for fighting climate change is its political influence,
  • In November, more than 1,000 Google workers signed a public letter calling on their employer to commit to an aggressive “company-wide climate plan” that includes canceling contracts with the fossil fuel industry and halting its donations to climate change deniers.
  • Microsoft and Amazon have come under fire from activist tech workers who have demanded that they stop supplying technology to oil and gas companies because of the polluting nature of fossil-fuel extraction.
Javier E

Microsoft Makes Bet Quantum Computing Is Next Breakthrough - NYTimes.com - 0 views

  • Conventional computing is based on a bit that can be either a 1 or a 0, representing a single value in a computation. But quantum computing is based on qubits, which simultaneously represent both zero and one values. If they are placed in an “entangled” state — physically separated but acting as though they are connected — with many other qubits, they can represent a vast number of values simultaneously.
  • In the approach that Microsoft is pursuing, which is described as “topological quantum computing,” precisely controlling the motions of pairs of subatomic particles as they wind around one another would manipulate entangled quantum bits.
  • By weaving the particles around one another, topological quantum computers would generate imaginary threads whose knots and twists would create a powerful computing system. Most important, the mathematics of their motions would correct errors that have so far proved to be the most daunting challenge facing quantum computer designers.
  • ...4 more annotations...
  • Microsoft’s topological approach is generally perceived as the most high-risk by scientists, because the type of exotic anyon particle needed to generate qubits has not been definitively proved to exist.
  • Microsoft began supporting the effort after Dr. Freedman, who has won both the Fields Medal and a MacArthur Fellowship and is widely known for his work in the mathematical field of topology, approached Craig Mundie, one of Microsoft’s top executives, and convinced him there was a new path to quantum computing based on ideas in topology originally proposed in 1997 by the physicist Alexei Kitaev.
  • Mr. Mundie said the idea struck him as the kind of gamble the company should be pursuing. “It’s hard to find things that you could say, I know that’s a 20-year problem and would be worth doing,” he said. “But this one struck me as being in that category.”
  • For some time, many thought quantum computers were useful only for factoring huge numbers — good for N.S.A. code breakers but few others. But new algorithms for quantum machines have begun to emerge in areas as varied as searching large amounts of data or modeling drugs. Now many scientists believe that quantum computers could tackle new kinds of problems that have yet to be defined.
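The state-space claim in the highlights above — that a classical register holds one n-bit value while n entangled qubits carry an amplitude for all 2^n values at once — can be illustrated with a toy state-vector sketch (names and structure here are our own illustration, not from the article or any quantum library):

```python
import itertools

def equal_superposition(n):
    """Return the 2**n amplitudes of n qubits in an equal superposition.

    A classical n-bit register stores exactly one of the 2**n bit strings;
    a quantum state assigns an amplitude to every one of them simultaneously.
    """
    dim = 2 ** n
    amp = (1.0 / dim) ** 0.5  # 1/sqrt(2**n) per basis state
    return {bits: amp for bits in itertools.product((0, 1), repeat=n)}

state = equal_superposition(3)
print(len(state))                                      # 8 basis states tracked at once
print(round(sum(a * a for a in state.values()), 10))   # probabilities sum to 1.0
```

This is only the bookkeeping side of the story: the exponentially large state vector is why simulating qubits classically gets hard fast, which is the advantage the article's topological approach is chasing.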
Grace Gannon

$84 million Microsoft CEO: We pay women equally - 0 views

  • "The very day that Nadella said women at Microsoft are paid equally for performing the same work as men, the CEO made headlines for his mammoth $84 million pay package." Women only make up 29% of the Microsoft workforce, and only 17% of the higher-paid positions in the company.
Javier E

As Facebook Raised a Privacy Wall, It Carved an Opening for Tech Giants - The New York ... - 0 views

  • For years, Facebook gave some of the world’s largest technology companies more intrusive access to users’ personal data than it has disclosed, effectively exempting those business partners from its usual privacy rules, according to internal records and interviews.
  • The special arrangements are detailed in hundreds of pages of Facebook documents obtained by The New York Times. The records, generated in 2017 by the company’s internal system for tracking partnerships, provide the most complete picture yet of the social network’s data-sharing practices. They also underscore how personal data has become the most prized commodity of the digital age, traded on a vast scale by some of the most powerful companies in Silicon Valley and beyond.
  • Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent, the records show, and gave Netflix and Spotify the ability to read Facebook users’ private messages.
  • ...27 more annotations...
  • Facebook also assumed extraordinary power over the personal information of its 2.2 billion users — control it has wielded with little transparency or outside oversight.
  • The partnerships were so important that decisions about forming them were vetted at high levels, sometimes by Mr. Zuckerberg and Sheryl Sandberg, the chief operating officer, Facebook officials said. While many of the partnerships were announced publicly, the details of the sharing arrangements typically were confidential
  • Zuckerberg, the chief executive, assured lawmakers in April that people “have complete control” over everything they share on Facebook.
  • the documents, as well as interviews with about 50 former employees of Facebook and its corporate partners, reveal that Facebook allowed certain companies access to data despite those protections
  • Data privacy experts disputed Facebook’s assertion that most partnerships were exempted from the regulatory requirements
  • “This is just giving third parties permission to harvest data without you being informed of it or giving consent to it,” said David Vladeck, who formerly ran the F.T.C.’s consumer protection bureau. “I don’t understand how this unconsented-to data harvesting can at all be justified under the consent decree.
  • “I don’t believe it is legitimate to enter into data-sharing partnerships where there is not prior informed consent from the user,” said Roger McNamee, an early investor in Facebook. “No one should trust Facebook until they change their business model.”
  • Few companies have better data than Facebook and its rival, Google, whose popular products give them an intimate view into the daily lives of billions of people — and allow them to dominate the digital advertising market
  • Facebook has never sold its user data, fearful of user backlash and wary of handing would-be competitors a way to duplicate its most prized asset. Instead, internal documents show, it did the next best thing: granting other companies access to parts of the social network in ways that advanced its own interests.
  • as the social network has disclosed its data sharing deals with other kinds of businesses — including internet companies such as Yahoo — Facebook has labeled them integration partners, too
  • Among the revelations was that Facebook obtained data from multiple partners for a controversial friend-suggestion tool called “People You May Know.”
  • The feature, introduced in 2008, continues even though some Facebook users have objected to it, unsettled by its knowledge of their real-world relationships. Gizmodo and other news outlets have reported cases of the tool’s recommending friend connections between patients of the same psychiatrist, estranged family members, and a harasser and his victim.
  • The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier.
  • agreements with about a dozen companies did. Some enabled partners to see users’ contact information through their friends — even after the social network, responding to complaints, said in 2014 that it was stripping all applications of that power.
  • Pam Dixon, executive director of the World Privacy Forum, a nonprofit privacy research group, said that Facebook would have little power over what happens to users’ information after sharing it broadly. “It travels,” Ms. Dixon said. “It could be customized. It could be fed into an algorithm and decisions could be made about you based on that data.”
  • Facebook’s agreement with regulators is a result of the company’s early experiments with data sharing. In late 2009, it changed the privacy settings of the 400 million people then using the service, making some of their information accessible to all of the internet. Then it shared that information, including users’ locations and religious and political leanings, with Microsoft and other partners.
  • But the privacy program faced some internal resistance from the start, according to four former Facebook employees with direct knowledge of the company’s efforts. Some engineers and executives, they said, considered the privacy reviews an impediment to quick innovation and growth. And the core team responsible for coordinating the reviews — numbering about a dozen people by 2016 — was moved around within Facebook’s sprawling organization, sending mixed signals about how seriously the company took it, the ex-employees said.
  • Microsoft officials said that Bing was using the data to build profiles of Facebook users on Microsoft servers. They declined to provide details, other than to say the information was used in “feature development” and not for advertising. Microsoft has since deleted the data, the officials said.
  • For some advocates, the torrent of user data flowing out of Facebook has called into question not only Facebook’s compliance with the F.T.C. agreement, but also the agency’s approach to privacy regulation.
  • “We brought Facebook under the regulatory authority of the F.T.C. after a tremendous amount of work. The F.T.C. has failed to act.
  • Facebook, in turn, used contact lists from the partners, including Amazon, Yahoo and the Chinese company Huawei — which has been flagged as a security threat by American intelligence officials — to gain deeper insight into people’s relationships and suggest more connections, the records show.
  • Facebook records show Yandex had access in 2017 to Facebook’s unique user IDs even after the social network stopped sharing them with other applications, citing privacy risks. A spokeswoman for Yandex, which was accused last year by Ukraine’s security service of funneling its user data to the Kremlin, said the company was unaware of the access
  • In October, Facebook said Yandex was not an integration partner. But in early December, as The Times was preparing to publish this article, Facebook told congressional lawmakers that it was
  • But federal regulators had reason to know about the partnerships — and to question whether Facebook was adequately safeguarding users’ privacy. According to a letter that Facebook sent this fall to Senator Ron Wyden, the Oregon Democrat, PricewaterhouseCoopers reviewed at least some of Facebook’s data partnerships.
  • The first assessment, sent to the F.T.C. in 2013, found only “limited” evidence that Facebook had monitored those partners’ use of data. The finding was redacted from a public copy of the assessment, which gave Facebook’s privacy program a passing grade over all.
  • Mr. Wyden and other critics have questioned whether the assessments — in which the F.T.C. essentially outsources much of its day-to-day oversight to companies like PricewaterhouseCoopers — are effective. As with other businesses under consent agreements with the F.T.C., Facebook pays for and largely dictated the scope of its assessments, which are limited mostly to documenting that Facebook has conducted the internal privacy reviews it claims it had
  • Facebook officials said that while the social network audited partners only rarely, it managed them closely.
Javier E

Opinion | The Leaders Who Passed the Coronavirus Test - The New York Times - 0 views

  • Over the last few days, I reached out to Gavin Newsom, the Democratic governor of California; Jay Inslee, the Democratic governor of Washington; and Mike DeWine, Ohio’s Republican governor. I also spoke to London Breed, the mayor of San Francisco, and Brad Smith, the president of Microsoft, one of the first large companies to direct its employees to work from home.
  • I asked them all a simple question: How did you get the coronavirus so right so early, when so many other leaders missed the boat?
  • They kept their eyes open. They were lucky enough to get an early peek at the disaster, and they were wise enough to take the warning seriously.
  • ...11 more annotations...
  • They heeded clear warnings.
  • in January, the federal government began bringing back Americans from affected areas of China, many to military bases in California. Newsom told me that working on the issue got him and the state’s other top officials thinking seriously about what was to come.
  • They trusted the experts.
  • Again, obvious, and again, so rare: These leaders understood the limits of their own knowledge, and when faced with tough choices, they deferred to the experts.
  • “When I’ve made decisions that I’ve regretted,” DeWine said, it was often because “I didn’t have enough facts, I didn’t ask enough questions, I didn’t ask the right people.”
  • They moved forcefully but incrementally.
  • They knew they couldn’t ask it all at once; they would have to prepare the public, over weeks, for a new reality.
  • “We had the benefit of enlightened business and community leaders,” Inslee said, referring to Microsoft and other large Seattle-area companies.
  • Smith, of Microsoft, echoed this sentiment: “Too often, people in the tech sector think that they can find the answer to anything, because they’ve been smart and successful — and I thought it was of fundamental importance that we not think that we’re as smart as the experts, and so we turned to the public health experts in King County and listened to them on Day 1.”
  • In that vein, there was something else very unusual in the places that moved first, too — actual bipartisanship. DeWine worked with Ohio’s biggest cities, many run by Democrats, to impose social distancing; Inslee and Newsom had to consult with many Republican officials
  • “It’s the science of the lifeboat,” Inslee told me. “When you’re all in the same lifeboat, there just isn’t room. When you’re in the middle of a storm, you got to keep the lifeboat afloat.”
mattrenz16

Lloyd Austin: Defense Secretary says US has 'offensive options' to respond to cyberatta... - 0 views

  • Defense Secretary Lloyd Austin told CNN the United States has "offensive options" to respond to cyberattacks following another major attack that is believed to have been carried out by the Russian group behind the SolarWinds hack.
  • Austin's comments come after the hackers behind one of the worst data breaches ever to hit the US government launched a new global cyberattack on more than 150 government agencies, think tanks and other organizations, according to Microsoft.
  • The group, which Microsoft calls "Nobelium," targeted 3,000 email accounts at various organizations this week — most of which were in the United States, the company said in a blog post Thursday.
  • ...5 more annotations...
  • It believes the hackers are part of the same Russian group behind last year's devastating attack on SolarWinds -- a software vendor -- that targeted at least nine US federal agencies and 100 companies.
  • The White House's National Security Council and the US Cybersecurity and Infrastructure Security Agency (CISA) are both aware of the incident, according to spokespeople. CISA is "working with the FBI and USAID to better understand the extent of the compromise and assist potential victims," a spokesperson said.
  • When asked about the United States' ability to get ahead of any further cyberattacks, Austin told Starr on Friday it is his responsibility to present President Joe Biden with offensive options.
  • Cybersecurity has been a major focus for the US government following the revelations that hackers had put malicious code into a tool published by SolarWinds. A ransomware attack that shut down one of America's most important pieces of energy infrastructure — the Colonial Pipeline — earlier this month has only heightened the sense of alarm. That attack was carried out by a criminal group originating in Russia, according to the FBI.
  • "I'm confident that we can continue to do what's necessary to not only compete, but stay ahead in this in this, in this domain."
Javier E

Microsoft Takes Down a Risk to the Election, and Finds the U.S. Doing the Same - The Ne... - 0 views

  • Microsoft and a team of companies and law enforcement groups have disabled — at least temporarily — one of the world’s largest hacking operations, an effort run by Russian-speaking cybercriminals that officials feared could disrupt the presidential election in three weeks.
  • The catalyst, Mr. Burt said, was seeing that TrickBot’s operators had added “surveillance capabilities” that allowed them to spy on infected computers and note which belonged to election officials. From there, he and other experts speculated, it would not be difficult for cybercriminals, or state actors, to freeze up election systems in the days leading up to the election and after.
  • TrickBot first appeared in 2016 as banking malware and was primarily used to steal online banking credentials. But over the past four years, TrickBot has evolved into a “cybercrime as a service” model.
  • ...6 more annotations...
  • “TrickBot’s botnet has infected hundreds of thousands, if not millions of computers,”
  • Its operators started cataloging the computers they infected, noting which belonged to large corporations, hospitals and municipalities, and selling access to infected computers to cybercriminals and state actors.
  • Over the past year, TrickBot has become the primary delivery mechanism for the Russian-speaking cybercriminals behind a specific variant of ransomware, known as Ryuk, that has been paralyzing American hospitals, corporations, towns and cities
  • others point to attacks on the Georgian government by cybercriminals at the direction of the Kremlin and a breach at Yahoo. In that attack, two Russian agents at the F.S.B., the successor to the K.G.B., teamed up with two cybercriminals to hack 500 million Yahoo accounts, allowing criminals to profit while mining their access to spy on journalists, dissidents and American officials.
  • They also note that when the Treasury Department imposed sanctions on members of an elite Russian cybercrime group in December, they outed the group’s leader as a member of the F.S.B.
  • “Russia is well aware that the cybercriminals it harbors have become a serious problem for its adversaries,” Mr. Hultquist added. “Russian cybercriminals are probably a greater threat to our critical infrastructure than their intelligence services. We should start asking whether their tacit approval of cybercrime is not just a marriage of convenience but a deliberate strategy to harass the West.”
tsainten

More Hacking Attacks Found, Officials Warn of Risk to U.S. Government - The New York Times - 0 views

  • Thursday that hackers who American intelligence agencies believed were working for the Kremlin used a far wider variety of tools than previously known to penetrate government systems, and said that the cyberoffensive was “a grave risk to the federal government.”
  • complicates the challenge for federal investigators as they try to assess the damage and understand what had been stolen.
  • Echoing the government’s warning, Microsoft said Thursday that it had identified 40 companies, government agencies and think tanks that the suspected Russian hackers, at a minimum, stole data from. Nearly half are private technology firms, Microsoft said, many of them cybersecurity firms, like FireEye, that are charged with securing vast sections of the public and private sector.
  • ...5 more annotations...
  • but intelligence agencies have told Congress that they believe it was carried out by the S.V.R., an elite Russian intelligence agency. A Microsoft “heat map” of infections shows that the vast majority — 80 percent — are in the United States, while Russia shows no infections at all.
  • Investigators and other officials say they believe the goal of the Russian attack was traditional espionage, the sort the National Security Agency and other agencies regularly conduct on foreign networks.
  • Secretary of State Mike Pompeo has deflected the hacking as one of the many daily attacks on the federal government, suggesting China was the biggest offender — the government’s new alert left no doubt the assessment had changed.
  • “Governments have long spied on each other but there is a growing and critical recognition that there needs to be a clear set of rules that put certain techniques off limits,” Mr. Smith said. “One of the things that needs to be off limits is a broad supply chain attack that creates a vulnerability for the world that other forms of traditional espionage do not.”
  • “We have forgotten the lessons of 9/11,” Mr. Smith said. “It has not been a great week for information sharing and it turns companies like Microsoft into a sheep dog trying to get these federal agencies come together into a single place and share what they know.”
Javier E

Deepfakes are biggest AI concern, says Microsoft president | Artificial intelligence (A... - 0 views

  • Brad Smith, the president of Microsoft, has said that his biggest concern around artificial intelligence was deepfakes, realistic looking but false content.
  • “We’re going to have to address the issues around deepfakes. We’re going to have to address in particular what we worry about most, foreign cyber influence operations, the kinds of activities that are already taking place by the Russian government, the Chinese, the Iranians,”
  • “We need to take steps to protect against the alteration of legitimate content with an intent to deceive or defraud people through the use of AI.”
  • ...4 more annotations...
  • “We will need a new generation of export controls, at least the evolution of the export controls we have, to ensure that these models are not stolen or not used in ways that would violate the country’s export control requirements,”
  • Smith also argued in the speech, and in a blogpost issued on Thursday, that people needed to be held accountable for any problems caused by AI and he urged lawmakers to ensure that safety brakes be put on AI used to control the electric grid, water supply and other critical infrastructure so that humans remain in control.
  • He urged use of a “Know Your Customer”-style system for developers of powerful AI models to keep tabs on how their technology is used and to inform the public of what content AI is creating so they can identify faked videos.
  • Some proposals being considered on Capitol Hill would focus on AI that may put people’s lives or livelihoods at risk, like in medicine and finance. Others are pushing for rules to ensure AI is not used to discriminate or violate civil rights.
Javier E

How Nations Are Losing a Global Race to Tackle A.I.'s Harms - The New York Times - 0 views

  • When European Union leaders introduced a 125-page draft law to regulate artificial intelligence in April 2021, they hailed it as a global model for handling the technology.
  • E.U. lawmakers had gotten input from thousands of experts for three years about A.I., when the topic was not even on the table in other countries. The result was a “landmark” policy that was “future proof,” declared Margrethe Vestager, the head of digital policy for the 27-nation bloc.
  • Then came ChatGPT.
  • ...45 more annotations...
  • The eerily humanlike chatbot, which went viral last year by generating its own answers to prompts, blindsided E.U. policymakers. The type of A.I. that powered ChatGPT was not mentioned in the draft law and was not a major focus of discussions about the policy. Lawmakers and their aides peppered one another with calls and texts to address the gap, as tech executives warned that overly aggressive regulations could put Europe at an economic disadvantage.
  • Even now, E.U. lawmakers are arguing over what to do, putting the law at risk. “We will always be lagging behind the speed of technology,” said Svenja Hahn, a member of the European Parliament who was involved in writing the A.I. law.
  • Lawmakers and regulators in Brussels, in Washington and elsewhere are losing a battle to regulate A.I. and are racing to catch up, as concerns grow that the powerful technology will automate away jobs, turbocharge the spread of disinformation and eventually develop its own kind of intelligence.
  • Nations have moved swiftly to tackle A.I.’s potential perils, but European officials have been caught off guard by the technology’s evolution, while U.S. lawmakers openly concede that they barely understand how it works.
  • The absence of rules has left a vacuum. Google, Meta, Microsoft and OpenAI, which makes ChatGPT, have been left to police themselves as they race to create and profit from advanced A.I. systems
  • At the root of the fragmented actions is a fundamental mismatch. A.I. systems are advancing so rapidly and unpredictably that lawmakers and regulators can’t keep pace
  • That gap has been compounded by an A.I. knowledge deficit in governments, labyrinthine bureaucracies and fears that too many rules may inadvertently limit the technology’s benefits.
  • Even in Europe, perhaps the world’s most aggressive tech regulator, A.I. has befuddled policymakers.
  • The European Union has plowed ahead with its new law, the A.I. Act, despite disputes over how to handle the makers of the latest A.I. systems.
  • The result has been a sprawl of responses. President Biden issued an executive order in October about A.I.’s national security effects as lawmakers debate what, if any, measures to pass. Japan is drafting nonbinding guidelines for the technology, while China has imposed restrictions on certain types of A.I. Britain has said existing laws are adequate for regulating the technology. Saudi Arabia and the United Arab Emirates are pouring government money into A.I. research.
  • A final agreement, expected as soon as Wednesday, could restrict certain risky uses of the technology and create transparency requirements about how the underlying systems work. But even if it passes, it is not expected to take effect for at least 18 months — a lifetime in A.I. development — and how it will be enforced is unclear.
  • Many companies, preferring nonbinding codes of conduct that provide latitude to speed up development, are lobbying to soften proposed regulations and pitting governments against one another.
  • “No one, not even the creators of these systems, know what they will be able to do,” said Matt Clifford, an adviser to Prime Minister Rishi Sunak of Britain, who presided over an A.I. Safety Summit last month with 28 countries. “The urgency comes from there being a real question of whether governments are equipped to deal with and mitigate the risks.”
  • Europe takes the lead
  • In mid-2018, 52 academics, computer scientists and lawyers met at the Crowne Plaza hotel in Brussels to discuss artificial intelligence. E.U. officials had selected them to provide advice about the technology, which was drawing attention for powering driverless cars and facial recognition systems.
  • As they discussed A.I.’s possible effects — including the threat of facial recognition technology to people’s privacy — they recognized “there were all these legal gaps, and what happens if people don’t follow those guidelines?”
  • In 2019, the group published a 52-page report with 33 recommendations, including more oversight of A.I. tools that could harm individuals and society.
  • By October, the governments of France, Germany and Italy, the three largest E.U. economies, had come out against strict regulation of general purpose A.I. models for fear of hindering their domestic tech start-ups. Others in the European Parliament said the law would be toothless without addressing the technology. Divisions over the use of facial recognition technology also persisted.
  • So when the A.I. Act was unveiled in 2021, it concentrated on “high risk” uses of the technology, including in law enforcement, school admissions and hiring. It largely avoided regulating the A.I. models that powered them unless listed as dangerous.
  • “They sent me a draft, and I sent them back 20 pages of comments,” said Stuart Russell, a computer science professor at the University of California, Berkeley, who advised the European Commission. “Anything not on their list of high-risk applications would not count, and the list excluded ChatGPT and most A.I. systems.”
  • E.U. leaders were undeterred. “Europe may not have been the leader in the last wave of digitalization, but it has it all to lead the next one,” Ms. Vestager said when she introduced the policy at a news conference in Brussels.
  • In 2020, European policymakers decided that the best approach was to focus on how A.I. was used and not the underlying technology. A.I. was not inherently good or bad, they said — it depended on how it was applied.
  • Nineteen months later, ChatGPT arrived.
  • The Washington game
  • Lacking tech expertise, lawmakers are increasingly relying on Anthropic, Microsoft, OpenAI, Google and other A.I. makers to explain how it works and to help create rules.
  • “We’re not experts,” said Representative Ted Lieu, Democrat of California, who hosted Sam Altman, OpenAI’s chief executive, and more than 50 lawmakers at a dinner in Washington in May. “It’s important to be humble.”
  • Tech companies have seized their advantage. In the first half of the year, many of Microsoft’s and Google’s combined 169 lobbyists met with lawmakers and the White House to discuss A.I. legislation, according to lobbying disclosures. OpenAI registered its first three lobbyists and a tech lobbying group unveiled a $25 million campaign to promote A.I.’s benefits this year.
  • In that same period, Mr. Altman met with more than 100 members of Congress, including former Speaker Kevin McCarthy, Republican of California, and the Senate leader, Chuck Schumer, Democrat of New York. After testifying in Congress in May, Mr. Altman embarked on a 17-city global tour, meeting world leaders including President Emmanuel Macron of France, Mr. Sunak and Prime Minister Narendra Modi of India.
  • The White House announced that the four companies had agreed to voluntary commitments on A.I. safety, including testing their systems through third-party overseers — which most of the companies were already doing.
  • “It was brilliant,” Mr. Smith said. “Instead of people in government coming up with ideas that might have been impractical, they said, ‘Show us what you think you can do and we’ll push you to do more.’”
  • In a statement, Ms. Raimondo said the federal government would keep working with companies so “America continues to lead the world in responsible A.I. innovation.”
  • Over the summer, the Federal Trade Commission opened an investigation into OpenAI and how it handles user data. Lawmakers continued welcoming tech executives.
  • In September, Mr. Schumer was the host of Elon Musk, Mark Zuckerberg of Meta, Sundar Pichai of Google, Satya Nadella of Microsoft and Mr. Altman at a closed-door meeting with lawmakers in Washington to discuss A.I. rules. Mr. Musk warned of A.I.’s “civilizational” risks, while Mr. Altman proclaimed that A.I. could solve global problems such as poverty.
  • A.I. companies are playing governments off one another. In Europe, industry groups have warned that regulations could put the European Union behind the United States. In Washington, tech companies have cautioned that China might pull ahead.
  • In May, Ms. Vestager, Ms. Raimondo and Antony J. Blinken, the U.S. secretary of state, met in Lulea, Sweden, to discuss cooperating on digital policy.
  • “China is way better at this stuff than you imagine,” Mr. Clark of Anthropic told members of Congress in January.
  • After two days of talks, Ms. Vestager announced that Europe and the United States would release a shared code of conduct for safeguarding A.I. “within weeks.” She messaged colleagues in Brussels asking them to share her social media post about the pact, which she called a “huge step in a race we can’t afford to lose.”
  • Months later, no shared code of conduct had appeared. The United States instead announced A.I. guidelines of its own.
  • Little progress has been made internationally on A.I. With countries mired in economic competition and geopolitical distrust, many are setting their own rules for the borderless technology.
  • Yet “weak regulation in another country will affect you,” said Rajeev Chandrasekhar, India’s technology minister, noting that a lack of rules around American social media companies led to a wave of global disinformation.
  • “Most of the countries impacted by those technologies were never at the table when policies were set,” he said. “A.I will be several factors more difficult to manage.”
  • Even among allies, the issue has been divisive. At the meeting in Sweden between E.U. and U.S. officials, Mr. Blinken criticized Europe for moving forward with A.I. regulations that could harm American companies, one attendee said. Thierry Breton, a European commissioner, shot back that the United States could not dictate European policy, the person said.
  • Some policymakers said they hoped for progress at an A.I. safety summit that Britain held last month at Bletchley Park, where the mathematician Alan Turing helped crack the Enigma code used by the Nazis. The gathering featured Vice President Kamala Harris; Wu Zhaohui, China’s vice minister of science and technology; Mr. Musk; and others.
  • The upshot was a 12-paragraph statement describing A.I.’s “transformative” potential and “catastrophic” risk of misuse. Attendees agreed to meet again next year.
  • The talks, in the end, produced a deal to keep talking.
Javier E

The New AI Panic - The Atlantic - 0 views

  • Export controls are now inflaming tensions between the United States and China. They have become the primary way for the U.S. to throttle China’s development of artificial intelligence: The department last year limited China’s access to the computer chips needed to power AI and is in discussions now to expand the controls. A semiconductor analyst told The New York Times that the strategy amounts to a kind of economic warfare.
  • If enacted, the limits could generate more friction with China while weakening the foundations of AI innovation in the U.S.
  • The same prediction capabilities that allow ChatGPT to write sentences might, in their next generation, be advanced enough to produce individualized disinformation, create recipes for novel biochemical weapons, or enable other unforeseen abuses that could threaten public safety.
  • ...22 more annotations...
  • Of particular concern to Commerce are so-called frontier models. The phrase, popularized in the Washington lexicon by some of the very companies that seek to build these models—Microsoft, Google, OpenAI, Anthropic—describes a kind of “advanced” artificial intelligence with flexible and wide-ranging uses that could also develop unexpected and dangerous capabilities. By their determination, frontier models do not exist yet. But an influential white paper published in July and co-authored by a consortium of researchers, including representatives from most of those tech firms, suggests that these models could result from the further development of large language models—the technology underpinning ChatGPT
  • The threats of frontier models are nebulous, tied to speculation about how new skill sets could suddenly “emerge” in AI programs.
  • Among the proposals the authors offer, in their 51-page document, to get ahead of this problem: creating some kind of licensing process that requires companies to gain approval before they can release, or perhaps even develop, frontier AI. “We think that it is important to begin taking practical steps to regulate frontier AI today,” the authors write.
  • Microsoft, Google, OpenAI, and Anthropic subsequently launched the Frontier Model Forum, an industry group for producing research and recommendations on “safe and responsible” frontier-model development.
  • Shortly after the paper’s publication, the White House used some of the language and framing in its voluntary AI commitments, a set of guidelines for leading AI firms that are intended to ensure the safe deployment of the technology without sacrificing its supposed benefits.
  • AI models advance rapidly, he reasoned, which necessitates forward thinking. “I don’t know what the next generation of models will be capable of, but I’m really worried about a situation where decisions about what models are put out there in the world are just up to these private companies,” he said.
  • For the four private companies at the center of discussions about frontier models, though, this kind of regulation could prove advantageous.
  • Convincing regulators to control frontier models could restrict the ability of Meta and any other firms to continue publishing and developing their best AI models through open-source communities on the internet; if the technology must be regulated, better for it to happen on terms that favor the bottom line.
  • The obsession with frontier models has now collided with mounting panic about China, fully intertwining ideas for the models’ regulation with national-security concerns. Over the past few months, members of Commerce have met with experts to hash out what controlling frontier models could look like and whether it would be feasible to keep them out of reach of Beijing
  • That the white paper took hold in this way speaks to a precarious dynamic playing out in Washington. The tech industry has been readily asserting its power, and the AI panic has made policy makers uniquely receptive to their messaging.
  • “Parts of the administration are grasping onto whatever they can because they want to do something,” Weinstein told me.
  • The department’s previous chip-export controls “really set the stage for focusing on AI at the cutting edge”; now export controls on frontier models could be seen as a natural continuation. Weinstein, however, called it “a weak strategy”; other AI and tech-policy experts I spoke with sounded their own warnings as well.
  • The decision would represent an escalation against China, further destabilizing a fractured relationship
  • Many Chinese AI researchers I’ve spoken with in the past year have expressed deep frustration and sadness over having their work—on things such as drug discovery and image generation—turned into collateral in the U.S.-China tech competition. Most told me that they see themselves as global citizens contributing to global technology advancement, not as assets of the state. Many still harbor dreams of working at American companies.
  • “If the export controls are broadly defined to include open-source, that would touch on a third-rail issue,” says Matt Sheehan, a Carnegie Endowment for International Peace fellow who studies global technology issues with a focus on China.
  • What’s frequently left out of considerations as well is how much this collaboration happens across borders in ways that strengthen, rather than detract from, American AI leadership. As the two countries that produce the most AI researchers and research in the world, the U.S. and China are each other’s No. 1 collaborator in the technology’s development.
  • Assuming they’re even enforceable, export controls on frontier models could thus “be a pretty direct hit” to the large community of Chinese developers who build on U.S. models and in turn contribute their own research and advancements to U.S. AI development.
  • Within a month of the Commerce Department announcing its blockade on powerful chips last year, the California-based chipmaker Nvidia announced a less powerful chip that fell right below the export controls’ technical specifications, and was able to continue selling to China. Bytedance, Baidu, Tencent, and Alibaba have each since placed orders for about 100,000 of Nvidia’s China chips to be delivered this year, and more for future delivery—deals that are worth roughly $5 billion, according to the Financial Times.
  • In some cases, fixating on AI models would serve as a distraction from addressing the root challenge: The bottleneck for producing novel biochemical weapons, for example, is not finding a recipe, says Weinstein, but rather obtaining the materials and equipment to actually synthesize the armaments. Restricting access to AI models would do little to solve that problem.
  • there could be another benefit to the four companies pushing for frontier-model regulation. Evoking the specter of future threats shifts the regulatory attention away from present-day harms of their existing models, such as privacy violations, copyright infringements, and job automation
  • “People overestimate how much this is in the interest of these companies,”
  • “AI safety as a domain even a few years ago was much more heterogeneous,” West told me. Now? “We’re not talking about the effects on workers and the labor impacts of these systems. We’re not talking about the environmental concerns.” It’s no wonder: when resources, expertise, and power have concentrated so heavily in a few companies, and policy makers are steeped in their own cocktail of fears, the landscape of policy ideas collapses under pressure, eroding the base of a healthy democracy.
nataliedepaulo1

Data Could Be the Next Tech Hot Button for Regulators - The New York Times - 0 views

  • Wealth and influence in the technology business have always been about gaining the upper hand in software or the machines that software ran on. Now data — gathered in those immense pools of information that are at the heart of everything from artificial intelligence to online shopping recommendations — is increasingly a focus of technology competition.
  • In recent years, Google, Facebook, Apple, Amazon and Microsoft have all been targets of tax evasion, privacy or antitrust investigations. But in the coming years, who controls what data could be the next worldwide regulatory focus as governments strain to understand and sometimes rein in American tech giants.
  • Rivals, he added, cannot unlock or simulate your data. “Data is the defensible barrier, not algorithms,” Mr. Ng said.
Javier E

Google, Mighty Now, but Not Forever - NYTimes.com - 0 views

  • Old kingpins like Digital Equipment and Wang didn’t disappear overnight. They sank slowly, burdened by maintenance of the products that made them rich and unable to match the pace of technological change around them. The same is happening now at Hewlett-Packard, which is splitting in two. Even Microsoft — the once unbeatable, declared monopolist of personal computing software — has struggled to stay relevant.
  • “I’m not saying that Google is going to go away, just as Microsoft didn’t go away,” said Ben Thompson, a tech analyst who writes the blog Stratechery. “It’s just that Google will miss out on what’s next.”
  • The company’s financial results have failed to meet consensus analysts’ expectations for five straight quarters. And its stock price has fallen 8 percent over the last year.
  • ...6 more annotations...
  • At first glance, the Mountain View, Calif., company looks plenty healthy. It generated $14.4 billion in profits in 2014 and revenue was up 19 percent from the year before. Google accounts for three-quarters of the world’s web searches, and the company also controls Android, by far the world’s most widely used mobile operating system, and YouTube, the world’s most popular video site.
  • as smartphones eclipse laptop and desktop computers to become the planet’s most important computing devices, the digital ad business is rapidly changing. Facebook, Google’s archrival for advertising dollars, has been quick to profit from the shift.
  • Google’s enormous search haul is only a slice of the $550 billion global advertising market, according to the research firm eMarketer. As Mr. Thompson pointed out, most of that money is not in direct response ads like Google’s. Instead, the bulk of the ad industry is devoted to something called brand ads. These are the ads you see on television and print magazines. They work on your emotions in the belief that, in time, your dollars will follow.
  • This gets to the crux of Mr. Thompson’s argument that Google has peaked. The future of online advertising looks increasingly like the business of television. It is likely to be dominated by services like Facebook, Snapchat or Pinterest that keep people engaged for long periods of time.
  • “Google doesn’t create immersive experiences that you get lost in,” Mr. Thompson said. “Google creates transactional services. You go to Google to search, or for maps, or with something else in mind. And those are the types of ads they have. But brand advertising isn’t about that kind of destination. It’s about an experience.”
  • “To me the Microsoft comparison can’t be more clear,” he said. “This is the price of being so successful — what you’re seeing is that when a company becomes dominant, its dominance precludes it from dominating the next thing. It’s almost like a natural law of business.”