
History Readings: Group items matching "Regulation" in title, tags, annotations or URL

leilamulveny

Senate Confirms Biden's Pick to Lead E.P.A. - The New York Times

  • WASHINGTON — The Senate on Wednesday confirmed Michael S. Regan, the former top environmental regulator for North Carolina, to lead the Environmental Protection Agency and drive some of the Biden administration’s biggest climate and regulatory policies.
  • Political appointees under Donald J. Trump spent the past four years unwinding dozens of clean air and water protections, while rolling back all of the Obama administration’s major climate rules.
  • Several proposed regulations are already being prepared, administration officials have said.
  • Those potentially overlapping authorities have already provoked criticism from Republicans, some of whom voted against Mr. Regan’s confirmation because they said they did not know who is truly in charge of the administration’s climate and environment policy.
  • But most of the opposition centered on Democratic policy. Senator Mitch McConnell of Kentucky, the Republican leader, called Mr. Biden’s agenda a “left-wing war on American energy.”
  • Mr. Regan has a reputation as a consensus-builder who works well with lawmakers from both parties. North Carolina’s two Republican senators, Thom Tillis and Richard Burr, voted to support his nomination. Even Senate Republicans who voted against him had kind words.
  • “I really liked meeting and getting to know Michael Regan,” Senator Capito said. “He is a dedicated public servant and an honest man.”
  • The Obama administration tried to curb carbon pollution from the electricity sector with a regulation called the Clean Power Plan, which would have pushed utilities to move away from coal and toward cleaner-burning fuels or renewable energy. The Trump administration repealed that and replaced it with a far weaker rule that only required utilities to make efficiency upgrades at individual power plants.
  • Ms. McCarthy has already been in talks with automakers around new vehicle emissions standards, but the proposed new rule itself will also come from E.P.A.
lmunch

Opinion | The Internet's 'Dark Patterns' Need to Be Regulated - The New York Times

  • Consider Amazon. The company perfected the one-click checkout. But canceling a $119 Prime subscription is a labyrinthine process that requires multiple screens and clicks.
  • These are examples of “dark patterns,” the techniques that companies use online to get consumers to sign up for things, keep subscriptions they might otherwise cancel or turn over more personal data. They come in countless variations: giant blinking sign-up buttons, hidden unsubscribe links, red X’s that actually open new pages, countdown timers and pre-checked options for marketing spam. Think of them as the digital equivalent of trying to cancel a gym membership.
  • Last year, the F.T.C. fined the parent company of the children’s educational program ABCmouse $10 million over what it said were tactics to keep customers paying as much as $60 annually for the service by obscuring language about automatic renewals and forcing users through six or more screens to cancel.
  • Donald Trump’s 2020 campaign, for instance, used a website with pre-checked boxes that committed donors to give far more money than they had intended, a recent Times investigation found. That cost some consumers thousands of dollars that the campaign later repaid.
  • “While there’s nothing inherently wrong with companies making money, there is something wrong with those companies intentionally manipulating users to extract their data,” said Representative Lisa Blunt Rochester, a Delaware Democrat, at the F.T.C. event. She said she planned to introduce dark pattern legislation later this year.
  • More than one in 10 e-commerce sites rely on dark patterns, according to another study, which also found that many online customer testimonials (“I wouldn’t buy any other brand!”) and tickers counting recent purchases (“7,235 customers bought this service in the past week”) were phony, randomly generated by software programs.
  • “The internet shouldn’t be the Wild West anymore — there’s just too much traffic,” said a Loyola Law School professor, Lauren Willis, at the F.T.C. event. “We need stop signs and street signs to enable consumers to shop easily, accurately.”
Javier E

America is not the land of the free but one of monopolies so predatory they imperil the nation | Will Hutton | Opinion | The Guardian

  • over the last 20 years per capita EU incomes have grown by 25% while the US’s have grown 21%, with the US growth rate decelerating while Europe’s has held steady – indeed accelerating in parts of Europe. What is going on?
  • Philippon’s answer is simple. The US economy is becoming increasingly harmed by ever less competition, with fewer and fewer companies dominating sector after sector – from airlines to mobile phones
  • Market power is the most important concept in economics, he says. When firms dominate a sector, they invest and innovate less, they peg or raise prices, and they make super-normal profits by just existing (what economists call “economic rent”)
  • So it is that mobile phone bills in the US are on average $100 a month, twice that of France and Germany, with the same story in broadband
  • Profits per passenger airline mile in the US are twice those in Europe.
  • US healthcare is impossibly expensive, with drug companies fixing prices twice as high or even higher than those in Europe; health spending is 18% of GDP.
  • Google, Amazon and Facebook have been allowed to become supermonopolies, buying up smaller challengers with no obstruction.
  • Because prices stay high, wages buy less, so workers’ lifestyles, unless they borrow, get squeezed in real terms while those at the top get paid ever more with impunity. Inequality escalates to unsupportable levels. Even life expectancy is now falling across the US
  • why has this happened now? Philippon has a deadly answer. A US political campaign costs 50 times more than one in Europe in terms of money spent for every vote cast. But this doesn’t just distort the political process. It is the chief cause of the US economic crisis.
  • Corporations want a return on their money, and the payback is protection from any kind of regulation, investigation or anti-monopoly policy that might strike at their ever-growing market power
  • this is systemic; how both at federal and state level ever higher campaign donations are correlated with ever fewer actions against monopoly, price fixing and bad corporate behaviour.
  • In Europe, the reverse is true. It is much harder for companies to buy friendly regulators. The EU’s competition authorities are much more genuinely politically independent than those in the US
  • As a result, it is Europe, albeit with one or two laggards such as Italy, that is bit by bit developing more competitive markets, more innovation and more challenge to incumbents while at the same time sustaining education and social spending so important to ordinary people’s lives
  • The EU’s regulations are better thought out, so in industry after industry it is becoming the global standard setter. Its corporate governance structures are better.
  • to complete the picture, Christine Lagarde, the incoming president of the European Central Bank, in the most important pronouncement of the year, said the environment would be at the heart of European monetary policy. In other words, the ECB is to underwrite a multitrillion-euro green revolution. In short – bet on Europe not the US.
nrashkind

It's a Vast, Invisible Climate Menace. We Made It Visible. - The New York Times

  • Immense amounts of methane are escaping from oil and gas sites nationwide, worsening global warming, even as the Trump administration weakens restrictions on offenders
  • To the naked eye, there is nothing out of the ordinary at the DCP Pegasus gas processing plant in West Texas
  • But a highly specialized camera sees what the human eye cannot: a major release of methane, the main component of natural gas and a potent greenhouse gas that is helping to warm the planet at an alarming rate.
  • In just a few hours, the plane’s instruments identified six sites with unusually high methane emissions
  • Methane is loosely regulated, difficult to detect and rising sharply
  • Operators of the sites identified by The Times are among the very companies that have lobbied the Trump administration, either directly or through trade organizations, to weaken regulations on methane,
  • Next year, the administration could move forward with a plan that would effectively eliminate requirements
  • By the E.P.A.’s own calculations, the rollback would increase methane emissions by 370,000 tons through 2025, enough to power more than a million homes for a year.
  • “This site’s definitely leaking,”
  • The reporters drove to the sites armed with infrared video gear that revealed methane billowing from tanks, seeping from pipes and wafting from bright flares that are designed to burn off the gas,
  • The regulatory rollback sought by the energy industry is the latest chapter in the administration’s historic effort to weaken environmental and climate regulations while waging a broad-based attack on climate science.
  • The findings address the mystery behind rising levels of methane in the atmosphere. Methane levels have soared since 2007 for reasons that still aren’t fully understood.
  • Methane also contributes to ground-level ozone, which, if inhaled, can cause asthma and other health problems.
  • In the course of about four hours of flying, we found at least six sites with high methane-emissions readings, ranging from about 300 pounds to almost 1,100 pounds an hour, including at DCP Pegasus, which is part owned by the energy giant Phillips 66.
  • At the DCP Pegasus plant, south of Midland, the camera transformed a tranquil scene into a furnace. Hot columns of gas shot into the air. Fumes engulfed structures.
  • A worker went to check on the tank, climbing some stairs and walking into the plume.
  • The companies found an administration willing to listen.
  • Before his appointment to the post of assistant administrator at the E.P.A.
carolinehayter

Google Lawsuit Marks End Of Washington's Love Affair With Big Tech : NPR

  • The U.S. Justice Department and 11 state attorneys general have filed a blockbuster lawsuit against Google, accusing it of being an illegal monopoly because of its stranglehold on Internet search.
  • The government alleged Google has come by its wild success — 80% market share in U.S. search, a valuation eclipsing $1 trillion — unfairly. It said multibillion-dollar deals Google has struck to be the default search engine in many of the world's Web browsers and smartphones have boxed out its rivals.
  • Google's head of global affairs, Kent Walker, said the government's case is "deeply flawed." The company warned that if the Justice Department prevails, people would pay more for their phones and have worse options for searching the Internet.
  • Just look at the word "Google," the lawsuit said — it's become "a verb that means to search the internet." What company can compete with that?
  • "It's been a relationship of extremes,"
  • a tectonic shift is happening right now: USA v. Google is the biggest manifestation of what has become known as the "Techlash" — a newfound skepticism of Silicon Valley's giants and growing appetite to rein them in through regulation.
  • "It's the end of hands-off of the tech sector," said Gene Kimmelman, a former senior antitrust official at the Justice Department. "It's probably the beginning of a decade of a series of lawsuits against companies like Google who dominate in the digital marketplace."
  • For years, under both Republican and Democratic administrations, Silicon Valley's tech stars have thrived with little regulatory scrutiny
  • There is similar skepticism in Washington of Facebook, Amazon and Apple — the companies that, with Google, have become known as Big Tech, an echo of the corporate villains of earlier eras such as Big Oil and Big Tobacco.
  • All four tech giants have been under investigation by regulators, state attorneys general and Congress — a sharp shift from just a few years ago when many politicians cozied up to the cool kids of Silicon Valley.
  • Tech companies spend millions of dollars lobbying lawmakers, and many high-level government officials have left politics to work in tech,
  • It will likely be years before this fight is resolved.
  • She said Washington's laissez-faire attitude toward tech is at least partly responsible for the sector's expansion into nearly every aspect of our lives.
  • "These companies were allowed to grow large, in part because they had political champions on both sides of the aisle that really supported what they were doing and viewed a lot of what they were doing uncritically. And then ... these companies became so big and so powerful and so good at what they set out to do, it became something of a runaway train," she said.
  • The Google lawsuit is the most concrete action in the U.S. to date challenging the power of Big Tech. While the government stopped short of explicitly calling for a breakup, U.S. Associate Deputy Attorney General Ryan Shores said that "nothing's off the table."
  • "This case signals that the antitrust winter is over,"
  • other branches of government are also considering ways to bring these companies to heel. House Democrats released a sweeping report this month calling for new rules to strip Apple, Amazon, Facebook and Google of the power that has made each of them dominant in their fields. Their recommendations ranged from forced "structural separations" to reforming American antitrust law. Republicans, meanwhile, have channeled much of their ire into allegations that platforms such as Facebook and Twitter are biased against conservatives — a claim for which there is no conclusive evidence.
  • Congressional Republicans and the Trump administration are using those bias claims to push for an overhaul of Section 230 of the 1996 Communications Decency Act, a longstanding legal shield that protects online platforms from being sued over what people post on them and says they can't be punished for reasonable moderation of those posts.
  • The CEOs of Google, Facebook and Twitter are set to appear next week before the Senate Commerce Committee at a hearing about Section 230.
  • On the same day the Justice Department sued Google, two House Democrats, Anna Eshoo, whose California district includes large parts of Silicon Valley, and Tom Malinowski of New Jersey, introduced their own bill taking aim at Section 230. It would hold tech companies liable if their algorithms amplify or recommend "harmful, radicalizing content that leads to offline violence."
  • That means whichever party wins control of the White House and Congress in November, Big Tech should not expect the temperature in Washington to warm up.
  • Editor's note: Google, Facebook, Apple and Amazon are among NPR's financial supporters.
anonymous

The EPA Refuses to Reduce Pollutants Linked to Coronavirus Deaths - ProPublica

  • In April, as coronavirus cases multiplied across the country, the head of the U.S. Environmental Protection Agency rejected scientists’ advice to tighten air pollution standards for particulate matter, or soot.
  • Particulate matter kills people. “It is responsible for more deaths and sickness than any other air pollutant in the world,” said Gretchen Goldman, a research director at the Union of Concerned Scientists.
  • Firing the advisory panel and opting not to pursue a more stringent particulate standard were in keeping with the administration of President Donald Trump’s dim view of environmental regulation. By one tally compiled by The New York Times, 72 regulations on air, water and soil pollution, climate change and ecosystems have been canceled or weakened, with an additional 27 in progress. EPA leadership has sidelined or ignored research by agency scientists, and career staff are censoring their reports to avoid terms like “climate change” out of fear of repercussions from political staff. Many of the changes involve narrowing the scope of science, and scientists, that contribute to policy, experts said.
  • The pollution comes from cars, power plants, wildfires and anything that burns fossil fuels. When people take a breath, the particles can lodge deep into their lungs and even enter the bloodstream. The pollutant causes health complications that can lead people to die earlier than they would have, and it is linked to conditions such as COPD, asthma and diabetes.
  • Three weeks ago, the agency finalized another rule allowing certain polluters to follow weaker air emissions standards. Wheeler has said the environmental rollbacks will continue if Trump is reelected.
clairemann

AOC and Rashida Tlaib's Public Banking Act, explained - Vox

  • A public option, but for banking. That’s what Reps. Rashida Tlaib and Alexandria Ocasio-Cortez are proposing in a new bill unveiled on Friday.
  • would foster the creation of public banks across the country by providing them a pathway to getting started, establishing an infrastructure for liquidity and credit facilities for them via the Federal Reserve, and setting up federal guidelines for them to be regulated.
  • which theoretically would be more motivated to do public good and invest in their communities than private institutions, which are out for profit.
  • The proposal lands in the midst of the Covid-19 pandemic, which has shed light on many inefficiencies in the American system, including banking. Take the Paycheck Protection Program, for example: It used the regular banking system as an intermediary, which ultimately meant that bigger businesses and those with preexisting relationships with those banks were prioritized over others.
  • guarantee a more equitable recovery by providing an alternative to Wall Street banks for state and local governments, businesses, and ordinary people,
  • The public banking bill also does double duty as a climate bill: It would prohibit public banks from investing in or doing business with the fossil fuel industry.
  • “Public banks empower states and municipalities to establish new channels of public investment to help solve systemic crises.”
  • But, he said, this proposal is particularly comprehensive and supportive.
  • If Democrats keep control of the House come 2021 and manage to flip the Senate and win the White House, they’ll be able to take some big legislative swings, including and perhaps especially on issues related to the economy.
  • at some point it’s just hitting a wall where it doesn’t carry them along and they’re looking for options,” said Tlaib, who represents Michigan’s 13th Congressional District, the third-poorest congressional district in the country. “So I’m putting this on the table as an option.”
  • To be clear, the Public Banking Act isn’t creating a federal public bank.
  • encourage and enable the creation of public banks across the US. It provides legitimacy to those who are pushing for more public banking, and it also includes regulators as key stakeholders who can support and provide guidance for how those banks should operate.
  • though different public banks would likely have different areas of emphasis.
  • They could also facilitate easier access to funds for state and local governments from the federal government or Federal Reserve.
  • “It’s basically a way to finance state and local investment that doesn’t go through Wall Street and doesn’t leave the community and turn into a windfall for shareholders,
  • “This is more about community development.”
  • Tlaib recalled hearing from her constituents when the $1,200 coronavirus stimulus checks went out this spring — people waiting days and weeks for direct deposits, or getting a check in the mail only to lose a substantial portion of it cashing it at the store down the street.
  • The Public Banking Act allows the Federal Reserve to charter and grant membership to public banks and creates a grant program for the Treasury secretary to provide seed money for public banks to be formed, capitalized, and developed.
  • Public banks need the FDIC to provide assurances that it will recognize them in accordance with the bond rating of the city or state they represent.
  • McConnell said the FDIC issuing guidance that it recognizes the city’s — and the state’s — public banks as an AAA rating would send a clear direction to the state financial regulators that the public bank is considered low risk.
  • The bill would also provide a road map for the FDIC, which insures bank deposits of up to $250,000, to insure deposits for public banks, so people feel assured they won’t lose all their money by choosing to open an account with their state bank instead of, say, Wells Fargo.
  • the Office of the Comptroller of the Currency (OCC) has historically been charged with chartering national banks in the US, not the Fed, meaning this is a fairly novel idea.
  • It prohibits the Fed and Treasury from considering the financial health of an entity that controls or owns a bank in grant-making decisions.
  • So here is the thing about private companies, including, yes, banks: The point of them is to make money, and that drives their decisions. It’s not necessarily evil (though sometimes it kind of is), but it’s just how they work.
  • The idea behind public banking isn’t that Goldman Sachs, Wells Fargo, and Morgan Stanley go away; it’s that they have to compete with a government-owned entity — and one that’s a little fairer and more ethical in how it does business.
  • Public banks, as imagined in the Tlaib/Ocasio-Cortez proposal, would provide loans to small businesses and governments with lower interest rates and lower fees.
  • Student loans are facilitated directly with BND, but other loans, called participation loans, go through a local financial institution — often with BND support.
  • According to a study on public banks, BND had some $2 billion in active participation loans in 2014. BND can grant larger loans at a lower risk, which fosters a healthy financial ecosystem populated by a cluster of small North Dakota banks.
  • Democrats have a lot of ideas, and if they take power come January 2021, there’s a lot they can do.
  • The Public Banking Act is meant to complement ideas such as the ABC Act and postal banking. And, of course, it’s linked to the Green New Deal, not only because it would bar public banks from financing things that hurt the environment, but also because the idea is that public banks would play a major role in financing Green New Deal and climate-friendly projects.
  • If former Vice President Joe Biden wins the White House and Democrats control both the House and the Senate come 2021, the talk around these ideas becomes a lot more serious.
Javier E

Facebook's Apps Went Down. The World Saw How Much It Runs on Them. - The New York Times

  • In India, Latin America and Africa, its services are essentially the internet for many people — almost a public utility, usually cheaper than a phone call and depended upon for much of the communication and commerce of daily life.
  • India accounted for about a quarter of those installations, while another quarter were in Latin America, according to Sensor Tower. Just 4 percent, or 238 million downloads, were in the United States.
  • “In the global digital space, everyone could experience a shutdown,” Thierry Breton, the European commissioner drafting new tech regulations, said on Twitter. “Europeans deserve a better digital resilience via regulation, fair competition, stronger connectivity and cybersecurity.”
  • In India, Brazil and other countries, WhatsApp has become so important to the functioning of society that regulators should treat it as a “utility,” said Parminder Jeet Singh, executive director at IT for Change, a technology-focused nonprofit in Bengaluru, India.
  • Worldwide, 2.76 billion people on average used at least one Facebook product each day this June, according to the company’s statistics. WhatsApp is used to send more than 100 billion messages a day and has been downloaded nearly six billion times since 2014, when Facebook bought it, according to estimates from the data firm Sensor Tower.
  • The unease about a single corporation mediating so much human activity motivates much of the scrutiny surrounding Facebook.
  • In Latin America, Facebook’s apps can be lifelines in rural places where cellphone service has yet to arrive but the internet is available, and in poor communities where people cannot afford mobile data but can find a free internet connection.
  • Across Africa, Facebook’s apps are so popular that for many, they are the internet. WhatsApp, the continent’s most popular messaging app, is a one-stop shop to communicate with family, friends, colleagues, fellow worshipers and neighbors.
  • The use of WhatsApp has grown so much that at one point it accounted for nearly half of all internet traffic in Zimbabwe. During the outage on Monday, the chief government spokesman in Tanzania used Twitter to urge the public to “remain calm.”
  • In Mexico, many small-town newspapers cannot afford print editions, so they publish on Facebook instead. That has left local governments without a physical outlet to issue important announcements, so they, too, have taken to Facebook, said Adrián Pascoe, a political consultant.
  • “The way businesses work, it’s been a crazy change in the last 20 years,” Mr. David said. “Then, we had no community online. Now we are hyper-connected, but we rely on a few tech companies for everything. When WhatsApp or Facebook are down, we all go down.”
Javier E

Why Facebook won't let you turn off its news feed algorithm - The Washington Post

  • In at least two experiments over the years, Facebook has explored what happens when it turns off its controversial news feed ranking system — the software that decides for each user which posts they’ll see and in what order, internal documents show. That leaves users to see all the posts from all of their friends in simple, chronological order.
  • The internal research documents, some previously unreported, help to explain why Facebook seems so wedded to its automated ranking system, known as the news feed algorithm.
  • previously reported internal documents, which Haugen provided to regulators and media outlets, including The Washington Post, have shown how Facebook crafts its ranking system to keep users hooked, sometimes at the cost of angering or misinforming them.
  • In testimony to U.S. Congress and abroad, whistleblower Frances Haugen has pointed to the algorithm as central to the social network’s problems, arguing that it systematically amplifies and rewards hateful, divisive, misleading and sometimes outright false content by putting it at the top of users’ feeds.
  • The political push raises an old question for Facebook: Why not just give users the power to turn off their feed ranking algorithms voluntarily? Would letting users opt to see every post from the people they follow, in chronological order, be so bad?
  • The documents suggest that Facebook’s defense of algorithmic rankings stems not only from its business interests, but from a paternalistic conviction, backed by data, that its sophisticated personalization software knows what users want better than the users themselves
  • Since 2009, three years after it launched the news feed, Facebook has used software that predicts which posts each user will find most interesting and places those at the top of their feeds while burying others. That system, which has evolved in complexity to take in as many as 10,000 pieces of information about each post, has fueled the news feed’s growth into a dominant information source.
  • The proliferation of false information, conspiracy theories and partisan propaganda on Facebook and other social networks has led some to wonder whether we wouldn’t all be better off with a simpler, older system: one that simply shows people all the messages, pictures and videos from everyone they follow, in the order they were posted.
  • That was more or less how Instagram worked until 2016, and Twitter until 2017.
  • But Facebook has long resisted it.
  • they appear to have been informed mostly by data on user engagement, at least until recently
  • That employee, who said they had worked on and studied the news feed for two years, went on to question whether automated ranking might also come with costs that are harder to measure than the benefits. “Even asking this question feels slightly blasphemous at Facebook,” they added.
  • “Whenever we’ve tried to compare ranked and unranked feeds, ranked feeds just seem better,” wrote an employee in a memo titled, “Is ranking good?”, which was posted to the company’s internal network, Facebook Workplace, in 2018
  • In 2014, another internal report, titled “Feed ranking is good,” summarized the results of tests that found allowing users to turn off the algorithm led them to spend less time in their news feeds, post less often and interact less.
  • Without an algorithm deciding which posts to show at the top of users’ feeds, concluded the report’s author, whose name was redacted, “Facebook would probably be shrinking.”
  • there’s a catch: The setting only applies for as long as you stay logged in. When you leave and come back, the ranking algorithm will be back on.
  • What many users may not realize is that Facebook actually does offer an option to see a mostly chronological feed, called “most recent,”
  • The longer Facebook left the user’s feed in chronological order, the less time they spent on it, the less they posted, and the less often they returned to Facebook.
  • A separate report from 2018, first described by Alex Kantrowitz’s newsletter Big Technology, found that turning off the algorithm unilaterally for a subset of Facebook users, and showing them posts mostly in the order they were posted, led to “massive engagement drops.” Notably, it also found that users saw more low-quality content in their feeds, at least at first, although the company’s researchers were able to mitigate that with more aggressive “integrity” measures.
  • Nick Clegg, the company’s vice president of global affairs, said in a TV interview last month that if Facebook were to remove the news feed algorithm, “the first thing that would happen is that people would see more, not less, hate speech; more, not less, misinformation; more, not less, harmful content. Why? Because those algorithmic systems precisely are designed like a great sort of giant spam filter to identify and deprecate and downgrade bad content.”
  • because the algorithm has always been there, Facebook users haven’t been given the time or the tools to curate their feeds for themselves in thoughtful ways. In other words, Facebook has never really given a chronological news feed a fair shot to succeed
  • Some critics say that’s a straw-man argument. Simply removing automated rankings for a subset of users, on a social network that has been built to rely heavily on those systems, is not the same as designing a service to work well without them,
  • Ben Grosser, a professor of new media at the University of Illinois at Urbana-Champaign. Those users’ feeds are no longer curated, but the posts they’re seeing are still influenced by the algorithm’s reward systems. That is, they’re still seeing content from people and publishers who are vying for the likes, shares and comments that drive Facebook’s recommendations.
  • “My experience from watching a chronological feed within a social network that isn’t always trying to optimize for growth is that a lot of these problems” — such as hate speech, trolling and manipulative media — “just don’t exist.”
  • Facebook has not taken an official stand on the legislation that would require social networks to offer a chronological feed option, but Clegg said in an op-ed last month that the company is open to regulation around algorithms, transparency, and user controls. Twitter, for its part, signaled potential support for the bills.
  • “I think users have the right to expect social media experiences free of recommendation algorithms,” Maréchal added. “As a user, I want to have as much control over my own experience as possible, and recommendation algorithms take that control away from me.”
  • “Only companies themselves can do the experiments to find the answers. And as talented as industry researchers are, we can’t trust executives to make decisions in the public interest based on that research, or to let the public and policymakers access that research.”
clairemann

Olympic gymnasts: We want justice for the FBI mishandling of the Nassar investigation. - 0 views

  • During the hearing, several senators expressed their outrage, focusing their future actions on the FBI’s failures. Senator Patrick Leahy even supported the gymnasts’ calls for prosecuting the FBI agents accused of mishandling the case. But the Senators are avoiding the fundamental legal problem at the heart of the investigation: federal law did not cover Nassar’s abuse.
  • FBI agents did nothing when first confronted with Olympians’ accusations because the federal agents had a legal rationale for not pursuing their claims. Nassar could not be charged with a federal offense based on his assaults. That’s accurate—even if it sounds perverse. (His ultimate federal conviction was for possessing kiddie porn, not hundreds of assaults). And it is why the Indianapolis agents claimed that they did not have “federal jurisdiction” to take the case.
  • The US Olympic Committee had knocked on the wrong prosecutorial door. The survivors should have gone to a different set of Michigan state prosecutors, according to the FBI agents.
  • ...4 more annotations...
  • For the first time in American history, in 1994, the federal government funded states to change their laws and practices that treated domestic violence and sexual assault as less serious than other offenses. The law included a provision to address state justice systems’ routine mishandling of sexual assault cases, putting accountability in the hands of survivors by enabling them to seek redress themselves. The law declared it a federal “civil right” to be free from gender-based violence.
  • In 2000, the Court declared the Violence Against Women Act’s civil rights remedy unconstitutional precisely because it dealt with sexual abuse crimes. Although the law allowed private survivors to seek damages, the Court ignored the civil nature of the remedy and declared that the underlying fact of sexual abuse had to be considered a crime.
  • The justices were almost hysterical about the danger: If the federal government could regulate sexual abuse, they said, it would “obliterate” the distinction between the federal and state governments.
  • The decision was supposed to be about federalism, but it led to no legal revolution. In fact, five years later, the Court decided another case, Gonzales v. Raich, allowing the federal government to regulate an individual’s marijuana possession, even though that too involved “crime,” on the theory that there was a commercial market for marijuana. Many law professors think Gonzales silently overruled Morrison, giving the federal government the power to regulate all sorts of crime, just not sexual assault.
lilyrashkind

Supreme Court blocks Biden's COVID vaccine mandate for companies, but allows for health care workers - CBS News - 0 views

  • "Although Congress has indisputably given OSHA the power to regulate occupational dangers, it has not given that agency the power to regulate public health more broadly," the court said. "Requiring the vaccination of 84 million Americans, selected simply because they work for employers with more than 100 employees, certainly falls in the latter category."
  • The high court, though, gave the green-light to a requirement that health care workers in facilities that receive Medicare and Medicaid funding must be vaccinated, siding 5-4 with the Biden administration.
  • The decisions come less than a week after the justices heard oral arguments on the emergency requests regarding the vaccine-or-test rule and vaccine requirement for health care workers.
  • ...10 more annotations...
  • President Biden first announced the rules in September as part of a broader strategy from his administration to combat the spread of the Delta variant, which drove a surge of infections toward the end of the summer. 
  • The Supreme Court was asked to intervene last month and swiftly held oral arguments to weigh the emergency requests.
  • "As a result of the court's decision, it is now up to states and individual employers to determine whether to make their workplaces as safe as possible for employees, and whether their businesses will be safe for consumers during this pandemic by requiring employees to take the simple and effective step of getting vaccinated," Mr. Biden said. "The court has ruled that my administration cannot use the authority granted to it by Congress to require this measure, but that does not stop me from using my voice as president to advocate for employers to do the right thing to protect Americans' health and economy."
  • The Biden administration estimated that more than 80 million employees could be impacted by the policy.
  • The Supreme Court received more than a dozen requests for emergency action in cases challenging the requirement after the 6th U.S. Circuit's ruling, with business associations, Republican-led states and private businesses covered by the rule arguing OSHA lacked the power to issue the vaccine requirement.
  • "Permitting OSHA to regulate the hazards of daily life — simply because most Americans have jobs and face those same risks while on the clock — would significantly expand OSHA's regulatory authority without clear congressional authorization," the court said.
  • The second rule examined by the Supreme Court was issued by the Centers for Medicare and Medicaid Services (CMS) in November and laid out vaccine requirements for staff at a wide range of facilities that participate in Medicare and Medicaid. The requirement does not have a daily or weekly testing option for unvaccinated workers, but does include medical and religious exemptions.
  • Then, in a separate case brought by 14 states, a federal district court in Louisiana blocked the rule from taking effect nationwide, but the 5th Circuit narrowed the scope of the order to the 14 states that together sued the Biden administration.
  • "After all, ensuring that providers take steps to avoid transmitting a dangerous virus to their patients is consistent with the fundamental principle of the medical profession: first, do no harm," the Supreme Court said.
  • "The omnibus rule is undoubtedly significant — it requires millions of healthcare workers to choose between losing their livelihoods and acquiescing to a vaccine they have rejected for months. Vaccine mandates also fall squarely within a state's police power, and, until now, only rarely have been a tool of the federal government," Thomas wrote. "If Congress had wanted to grant CMS authority to impose a nationwide vaccine mandate, and consequently alter the state-federal balance, it would have said so clearly. It did not."
Javier E

War in Ukraine Has Russia's Putin, Xi Jinping Changing the World Order - Bloomberg - 0 views

  • at the beginning of 2022, many of us shared the assumptions of Keynes’s Londoner. We ordered exotic goods in the confident expectation that Amazon would deliver them to our doors the next day. We invested in emerging-market stocks, purchased Bitcoin, and chatted with people on the other side of the world via Zoom. Many of us dismissed Covid-19 as a temporary suspension of our global lifestyle. Vladimir Putin’s “projects and politics of militarism” seemed like diversions in the loonier regions of the Twittersphere. 
  • just as World War I mattered for reasons beyond the slaughter of millions of human beings, this conflict could mark a lasting change in the way the world economy works — and the way we all live our lives, however far we are from the carnage in Eastern Europe.
  • That doesn’t mean that globalization is an unalloyed good. By its nature, economic liberalism exaggerates the downsides of capitalism as well as the upsides: Inequality increases, companies sever their local roots, losers fall further behind, and — without global regulations — environmental problems multiply
  • ...49 more annotations...
  • Right now, the outcome that we have been sliding toward seems one in which an autocratic East gradually divides from — and then potentially accelerates past — a democratic but divided West. 
  • Seizing that opportunity will require an understanding of both economics and history.
  • By any economic measure the West is significantly more powerful than the East, using the terms “West” and “East” to mean political alliances rather than just geographical regions. The U.S. and its allies account for 60% of global gross domestic product at current exchange rates; China, Russia and the autocracies amount to barely a third of that. And for the first time in years, the West is coming together rather than falling apart.
  • The question for Biden and the European leaders he will meet this week is simple: What sort of world do they want to build in the future? Ukraine could well mark the end of one great episode in human history. It could also be the time that the free world comes together and creates another, more united, more interconnected and more sustainable one than ever before
  • the answer to globalization’s woes isn’t to abandon economic liberalism, but to redesign it. And the coming weeks offer a golden opportunity to redesign the global economic order.
  • Yet once politicians got out of the way, globalization sped up, driven by technology and commerce.
  • Only after the Second World War did economic integration resume its advance — and then only on the Western half of the map
  • What most of us today think of as globalization only began in the 1980s, with the arrival of Thatcherism and Reaganism, the fall of the Berlin Wall, the reintegration of China into the world economy, and, in 1992, the creation of the European single market.
  • When the guns finally fell silent in 1918 and peace was forced on Germany at Versailles (in the Carthaginian terms that Keynes decried so eloquently), the Bidens, Johnsons and Macrons of the time tried to restore the old world order of free trade and liberal harmony — and comprehensively failed. 
  • As the new century dawned and an unknown “pro-Western” bureaucrat called Vladimir Putin came to power in Russia, the daily volume of foreign-exchange transactions reached $15 trillion. 
  • More recently, as the attacks on globalization have mounted, economic integration has slowed and in some cases gone into reverse.
  • Meanwhile in the West, Ukraine has already prompted a great rethink. As German Chancellor Olaf Scholz has proclaimed, we are at a Zeitenwende — a turning point. Under his leadership, pacifist Germany has already proposed a defense budget that’s larger than Russia’s. Meanwhile, Ukrainian immigrants are being welcomed by nations that only a few months ago were shunning foreigners, and, after a decade of slumber in Brussels, the momentum for integration is increasing.
  • But this turning point can still lead in several directions.
  • the invasion of Ukraine is accelerating changes in both geopolitics and the capitalist mindset that are deeply inimical to globalization.
  • The changes in geopolitics come down to one word: China, whose rapid and seemingly inexorable rise is the central geopolitical fact of our time.  
  • absent any decisive action by the West, geopolitics is definitively moving against globalization — toward a world dominated by two or three great trading blocs: an Asian one with China at its heart and perhaps Russia as its energy supplier; an American-led bloc; and perhaps a third centered on the European Union, with the Europeans broadly sympathetic to the U.S. but nervous about the possible return of an America-First isolationist to the White House and irked by America’s approach to digital and media regulation.
  • World trade in manufactured goods doubled in the 1990s and doubled again in the 2000s. Inflationary pressures have been kept low despite loose monetary policies.
  • From a CEO’s viewpoint, Putin’s invasion of Ukraine has done more than unleash Western embargoes and boost inflation. It is burying most of the basic assumptions that have underlain business thinking about the world for the past 40 years. 
  • Commercially speaking, this bet paid off spectacularly. Over the past 50 years multinationals have turned themselves from federations of national companies into truly integrated organizations that could take full advantage of global economies of scale and scope (and, of course, global loopholes in taxes and regulations)
  • Just as important as this geopolitical shift is the change in the capitalist mindset. If the current age of globalization was facilitated by politicians, it has been driven by businesspeople. Ronald Reagan and Margaret Thatcher didn’t decide that the components of an iPhone should come from 40 countries. Facebook wasn’t created by senior politicians — not even by Al Gore. Uber wasn’t an arm of the Department of Transportation. 
  • profits have remained high, as the cost of inputs (such as energy and labor) have been kept low.
  • Now what might be called the Capitalist Grand Illusion is under assault in Kyiv — just as Norman Angell’s version was machine-gunned on the Western Front.
  • Militarism and cultural rivalries keep trumping economic logic.
  • The second is Biden’s long experience
  • Every Western company is now wondering how exposed it is to political risk. Capitalists are all Huntingtonians now.
  • Greed is also acquiring an anti-global tint. CEOs are rationally asking how they can profit from what Keynes called “monopolies, restrictions and exclusions.”
  • So the second age of globalization is fading fast. Unless something is done quickly and decisively, the world will divide into hostile camps, regardless of what happens in Ukraine.
  • this divided world will not suit the West. Look at the resolution passed by the United Nations General Assembly to condemn Russia’s invasion of Ukraine. The most trumpeted figure is that only 40 countries did not vote for this (35 abstained, and five voted against it), compared with 141 countries who voted in favor. But those 40 countries, which include India and China, account for the majority of the world’s population.
  • we still have time to shape a very different future: one in which global wealth is increased and the Western alliance bolstered.
  • One of the great problems with modern liberalism for the past few decades has been its lack of a gripping narrative and a compelling cast of heroes and villains
  • Now Putin has inadvertently reversed all that. Freedom is the creed of heroes such as Zelenskiy; anti-liberalism is the creed of monsters who drop bombs on children.
  • Biden can soften that message at home by adding a political dimension to his trade agenda. “Build back better” applies to globalization, too. A global new deal should certainly include a focus on making multinational companies pay their taxes, and the environment should be to the fore. But Biden should also talk about the true cost of protectionism in terms of higher prices, worse products and less innovation.
  • So far, Biden’s handling of the Ukraine invasion has been similarly nuanced. He has drawn a line between supplying the resistance and becoming involved in the war (or giving others an excuse to claim the U.S. is involved). And he has put firm pressure on China to stay out of the conflict.
  • Biden needs to recognize that expanding economic interdependence among his allies is a geostrategic imperative. He should offer Europe a comprehensive free-trade deal to bind the West together
  • It is not difficult to imagine Europe or democratic Asia signing up for these sorts of pacts, given the shock of Putin’s aggression and their fear of China. Biden’s problem is at home. Why should the Democratic left accept this? Because, Biden should say, Ukraine, China and America’s security matter more than union votes.
  • Biden should pursue a two-stage strategy: First, deepen economic integration among like-minded nations; but leave the door open to autocracies if they become more flexible.
  • CEOs who used to build empires based on just-in-time production are now looking at just-in-case: adding inefficient production closer to home in case their foreign plants are cut off.
  • Constructing such a “new world order” will be laborious work. But the alternative is a division of the world into hostile economic and political blocs that comes straight out of the 1930s
  • Biden, Johnson, Scholz and Macron should think hard about how history will judge them. Do they want to be compared to the policymakers in the aftermath of World War I, who stood by impassively as the world fragmented and monsters seized the reins of power? Or would they rather be compared to their peers after World War II, policymakers who built a much more stable and interconnected world?
  • The Western policymakers meeting this week will say they have no intention of closing down the global order. All this economic savagery is to punish Putin’s aggression precisely in order to restore the rules-based system that he is bent on destroying — and with it, the free flow of commerce and finance. In an ideal world, Putin would be toppled — the victim of his own delusions and paranoia — and the Russian people would sweep away the kleptocracy in the Kremlin. 
  • In this optimistic scenario, Putin’s humiliation would do more than bring Russia back to its senses. It would bring the West back as well. The U.S. would abandon its Trumpian isolationism while Europe would start taking its own defense seriously. The culture warriors on both sides of the Atlantic would simmer down, and the woke and unwoke alike would celebrate their collective belief in freedom and democracy.
  • There’s a chance this could happen. Putin wouldn’t be the first czar to fall because of a misjudged and mishandled war.
  • Regardless of whether China’s leader decides to ditch Putin, the invasion has surely sped up Xi’s medium-term imperative of “decoupling” — insulating his country from dependence on the West.
  • For the “wolf pack” of young Chinese nationalists around Xi, the reaction to Ukraine is another powerful argument for self-sufficiency. China’s vast holdings of dollar assets now look like a liability given America’s willingness to confiscate Russia’s assets,
  • Some Americans are equally keen on decoupling, a sentiment that bridged Republicans and Democrats before Putin’s invasion of Ukraine.
  • In the great intellectual battle of the 1990s between Francis Fukuyama, who wrote “The End of History and the Last Man” (1992), and his Harvard teacher Samuel Huntington, who wrote “The Clash of Civilizations” (1996), CEOs have generally sided with Fukuyama.
  • Biden needs to go further in the coming weeks. He needs to reinforce the Western alliance so that it can withstand the potential storms to come
  • Keynes, no longer a protectionist, played a leading role in designing the International Monetary Fund, the World Bank, and the infrastructure of the postwar Western order of stable exchange rates. He helped persuade the U.S. to lead the world rather than retreating into itself. He helped create the America of the Marshall Plan. This Bretton Woods settlement created the regime that eventually won the Cold War and laid the foundations for the second age of globalization.
  • At the closing banquet on July 22, the great man was greeted with a standing ovation. Within two years he was dead — but the world that he did so much to create lived on. That world does not need to die in the streets of Kyiv. But it is on course to do so, unless the leaders meeting this week seize the moment to create something better. 
Javier E

Opinion | Big Tech Is Bad. Big A.I. Will Be Worse. - The New York Times - 0 views

  • Tech giants Microsoft and Alphabet/Google have seized a large lead in shaping our potentially A.I.-dominated future. This is not good news. History has shown us that when the distribution of information is left in the hands of a few, the result is political and economic oppression. Without intervention, this history will repeat itself.
  • The fact that these companies are attempting to outpace each other, in the absence of externally imposed safeguards, should give the rest of us even more cause for concern, given the potential for A.I. to do great harm to jobs, privacy and cybersecurity. Arms races without restrictions generally do not end well.
  • We believe the A.I. revolution could even usher in the dark prophecies envisioned by Karl Marx over a century ago. The German philosopher was convinced that capitalism naturally led to monopoly ownership over the “means of production” and that oligarchs would use their economic clout to run the political system and keep workers poor.
  • ...17 more annotations...
  • Literacy rates rose alongside industrialization, although those who decided what the newspapers printed and what people were allowed to say on the radio, and then on television, were hugely powerful. But with the rise of scientific knowledge and the spread of telecommunications came a time of multiple sources of information and many rival ways to process facts and reason out implications.
  • With the emergence of A.I., we are about to regress even further. Some of this has to do with the nature of the technology. Instead of assessing multiple sources, people are increasingly relying on the nascent technology to provide a singular, supposedly definitive answer.
  • This technology is in the hands of two companies that are philosophically rooted in the notion of “machine intelligence,” which emphasizes the ability of computers to outperform humans in specific activities.
  • This philosophy was naturally amplified by a recent (bad) economic idea that the singular objective of corporations should be to maximize short-term shareholder wealth.
  • Combined together, these ideas are cementing the notion that the most productive applications of A.I. replace humankind.
  • Congress needs to assert individual ownership rights over underlying data that is relied on to build A.I. systems
  • Fortunately, Marx was wrong about the 19th-century industrial age that he inhabited. Industries emerged much faster than he expected, and new firms disrupted the economic power structure. Countervailing social powers developed in the form of trade unions and genuine political representation for a broad swath of society.
  • History has repeatedly demonstrated that control over information is central to who has power and what they can do with it.
  • Generative A.I. requires even deeper pockets than textile factories and steel mills. As a result, most of its obvious opportunities have already fallen into the hands of Microsoft, with its market capitalization of $2.4 trillion, and Alphabet, worth $1.6 trillion.
  • At the same time, powers like trade unions have been weakened by 40 years of deregulation ideology (Ronald Reagan, Margaret Thatcher, two Bushes and even Bill Clinton).
  • For the same reason, the U.S. government’s ability to regulate anything larger than a kitten has withered. Extreme polarization and fear of killing the golden (donor) goose or undermining national security mean that most members of Congress would still rather look away.
  • To prevent data monopolies from ruining our lives, we need to mobilize effective countervailing power — and fast.
  • Today, those countervailing forces either don’t exist or are greatly weakened
  • Rather than machine intelligence, what we need is “machine usefulness,” which emphasizes the ability of computers to augment human capabilities. This would be a much more fruitful direction for increasing productivity. By empowering workers and reinforcing human decision making in the production process, it also would strengthen social forces that can stand up to big tech companies
  • We also need regulation that protects privacy and pushes back against surveillance capitalism, or the pervasive use of technology to monitor what we do
  • Finally, we need a graduated system for corporate taxes, so that tax rates are higher for companies when they make more profit in dollar terms
  • Our future should not be left in the hands of two powerful companies that build ever larger global empires based on using our collective data without scruple and without compensation.
Javier E

Opinion | We Are Suddenly Taking On China and Russia at the Same Time - The New York Times - 0 views

  • “The U.S. has essentially declared war on China’s ability to advance the country’s use of high-performance computing for economic and security gains,” Paul Triolo, a China and tech expert at Albright Stonebridge, a consulting firm, told The Financial Times. Or as the Chinese Embassy in Washington framed it, the U.S. is going for “sci-tech hegemony.”
  • regulations issued Friday by President Biden’s Commerce Department are a formidable new barrier when it comes to export controls that will block China from being able to buy the most advanced semiconductors from the West or the equipment to manufacture them on its own.
  • The new regulations also bar any U.S. engineer or scientist from aiding China in chip manufacturing without specific approval, even if that American is working on equipment in China not subject to export controls. The regs also tighten the tracking to ensure that U.S.-designed chips sold to civilian companies in China don’t get into the hands of China’s military
  • ...15 more annotations...
  • maybe most controversially, the Biden team added a “foreign direct product rule” that, as The Financial Times noted, “was first used by the administration of Donald Trump against Chinese technology group Huawei” and “in effect bars any U.S. or non-U.S. company from supplying targeted Chinese entities with hardware or software whose supply chain contains American technology.”
  • This last rule is huge, because the most advanced semiconductors are made by what I call “a complex adaptive coalition” of companies from America to Europe to Asia
  • The more we push the boundaries of physics and materials science to cram more transistors onto a chip to get more processing power to continue to advance artificial intelligence, the less likely it is that any one company, or country, can excel at all the parts of the design and manufacturing process. You need the whole coalition
  • The reason Taiwan Semiconductor Manufacturing Company, known as TSMC, is considered the premier chip manufacturer in the world is that every member of this coalition trusts TSMC with its most intimate trade secrets, which it then melds and leverages for the benefit of the whole.
  • “We do not make in the U.S. any of the chips we need for artificial intelligence, for our military, for our satellites, for our space programs” — not to mention myriad nonmilitary applications that power our economy. The recent CHIPS Act, she said, was our “offensive initiative” to strengthen our whole innovation ecosystem so more of the most advanced chips will be made in the U.S.
  • It managed to pilfer a certain amount of chip technology, including 28 nanometer technology from TSMC back in 2017.
  • Because China is not trusted by the coalition partners not to steal their intellectual property, Beijing is left trying to replicate the world’s all-star manufacturing chip stack on its own with old technologies
  • China can’t mass produce these chips with precision without ASML’s latest technology — which is now banned from the country.
  • Raimondo rejects the idea that the new regulations are tantamount to an act of war.
  • “The U.S. was in an untenable position,” she told me in her office. “Today we are purchasing 100 percent of our advanced logic chips from abroad — 90 percent from TSMC in Taiwan and 10 percent from Samsung in Korea.” (That is pretty crazy, but it is true.)
  • Until recently, China’s premier chip maker, Semiconductor Manufacturing International Company, had been thought to be stuck at mostly this chip level,
  • Imposing on China the new export controls on advanced chip-making technologies, she said, “was our defensive strategy. China has a strategy of military-civil fusion,” and Beijing has made clear “that it intends to become totally self-sufficient in the most advanced technologies” to dominate both the civilian commercial markets and the 21st century battlefield. “We cannot ignore China’s intentions.”
  • So, to protect ourselves and our allies — and all the technologies we have invented individually and collectively — she added, “what we did was the next logical step, to prevent China from getting to the next step.” The U.S. and its allies design and manufacture “the most advanced supercomputing chips, and we don’t want them in China’s hands and be used for military purposes.”
  • Our main focus, concluded Raimondo, “is playing offense — to innovate faster than the Chinese. But at the same time, we are going to meet the increasing threat they are presenting by protecting what we need to. It is important that we de-escalate where we can and do business where we can. We don’t want a conflict. But we have to protect ourselves with eyes wide open.”
  • China’s state-directed newspaper Global Times editorialized that the ban would only “strengthen China’s will and ability to stand on its own in science and technology.” Bloomberg quoted an unidentified Chinese analyst as saying “there is no possibility of reconciliation.”
Javier E

AI Is the Technocratic Elite's New Excuse for a Power Grab - WSJ - 0 views

  • it seems increasingly likely that whatever else it may be, the AI menace, like every other supposed extinction-level threat man has faced in the past century or so, will prove a wonderful opportunity for the big-bureaucracy, global-government, all-knowing-regulator crowd to demand more authority over our freedoms, to transfer more sovereignty from individuals and nations to supranational experts and technocrats.
  • If I were cynical I’d speculate that these threats are, if not manufactured, at least hyped precisely so that the world can be made to fit with the technocratic mindset of those who believe they should rule over us, lest the ignorant whims of people acting without supervision destroy the planet.
  • Nuclear weapons, climate change, pandemics, and now AI—the remedies are always, strikingly, the same: more government; more control over free markets and private decisions, more borderless bureaucracy.
  • ...9 more annotations...
  • in its brevity—and its provenance—it offers hints of where this is coming from and where they want it to go. “Risk of extinction” leaps straight to the usual Defcon 1 hysteria that demands immediate action. “Global priority” establishes the proper regulatory geography. Bracketing AI with the familiar nightmares of “pandemics and nuclear war” points to the sorts of authority required.
  • Many of the signatories also represent something of a giveaway: Oodles of Google execs, Bill Gates, a Democratic politician or two, many of the same people who have breathed the rarefied West Coast air of progressive technocratic orthodoxy for decades.
  • many of those who share their sentiments, are genuinely concerned about the risks of AI and are simply trying to raise a red flag about a matter of real concern—though we should probably note that techno-hysteria through history has rarely proved to be justified
  • nuclear annihilation has failed to materialize.
  • I suspect attempts to impose a world government would have been much more likely to result in an extinction-level nuclear war than the exercise by nations of their right to self-determination to resolve conflicts through the usual combination of diplomacy and force.
  • Climate change is the ne plus ultra of justifications for global regulation. It probably isn’t a coincidence that climate extremism and the demands for mandatory global controls exploded at exactly the moment old-fashioned Marxism was discredited for good in the 1990s.
  • the left suddenly found a climate threat it could use as a golden opportunity to regulate economic activity on a scale larger than anything Karl Marx could have imagined.
  • As for pandemics, our public-health masters showed by their actions over the past three years that they would like to encase us in a rigid panoply of rules to remediate a supposed extinction-level threat.
  • None of this is to diminish the challenges posed by AI. Thorough investigation into it, and healthy debate about how to maximize its opportunities and minimize its risks, are essential.
Javier E

Whistleblower: Twitter misled investors, FTC and underplayed spam issues - Washington Post - 0 views

  • Twitter executives deceived federal regulators and the company’s own board of directors about “extreme, egregious deficiencies” in its defenses against hackers, as well as its meager efforts to fight spam, according to an explosive whistleblower complaint from its former security chief.
  • The complaint from former head of security Peiter Zatko, a widely admired hacker known as “Mudge,” depicts Twitter as a chaotic and rudderless company beset by infighting, unable to properly protect its 238 million daily users, including government agencies, heads of state and other influential public figures.
  • Among the most serious accusations in the complaint, a copy of which was obtained by The Washington Post, is that Twitter violated the terms of an 11-year-old settlement with the Federal Trade Commission by falsely claiming that it had a solid security plan. Zatko’s complaint alleges he had warned colleagues that half the company’s servers were running out-of-date and vulnerable software and that executives withheld dire facts about the number of breaches and lack of protection for user data, instead presenting directors with rosy charts measuring unimportant changes.
  • ...56 more annotations...
  • A person familiar with Zatko’s tenure said the company investigated Zatko’s security claims during his time there and concluded they were sensationalistic and without merit. Four people familiar with Twitter’s efforts to fight spam said the company deploys extensive manual and automated tools to both measure the extent of spam across the service and reduce it.
  • the whistleblower document alleges the company prioritized user growth over reducing spam, though unwanted content made the user experience worse. Executives stood to win individual bonuses of as much as $10 million tied to increases in daily users, the complaint asserts, and nothing explicitly for cutting spam.
  • Chief executive Parag Agrawal was “lying” when he tweeted in May that the company was “strongly incentivized to detect and remove as much spam as we possibly can,” the complaint alleges.
  • Zatko described his decision to go public as an extension of his previous work exposing flaws in specific pieces of software and broader systemic failings in cybersecurity. He was hired at Twitter by former CEO Jack Dorsey in late 2020 after a major hack of the company’s systems.
  • “I felt ethically bound. This is not a light step to take,” said Zatko, who was fired by Agrawal in January. He declined to discuss what happened at Twitter, except to stand by the formal complaint. Under SEC whistleblower rules, he is entitled to legal protection against retaliation, as well as potential monetary rewards.
  • “Security and privacy have long been top companywide priorities at Twitter,” said Twitter spokeswoman Rebecca Hahn. She said that Zatko’s allegations appeared to be “riddled with inaccuracies” and that Zatko “now appears to be opportunistically seeking to inflict harm on Twitter, its customers, and its shareholders.” Hahn said that Twitter fired Zatko after 15 months “for poor performance and leadership.” Attorneys for Zatko confirmed he was fired but denied it was for performance or leadership.
  • In 1998, Zatko had testified to Congress that the internet was so fragile that he and others could take it down with a half-hour of concentrated effort. He later served as the head of cyber grants at the Defense Advanced Research Projects Agency, the Pentagon innovation unit that had backed the internet’s invention.
  • Overall, Zatko wrote in a February analysis for the company attached as an exhibit to the SEC complaint, “Twitter is grossly negligent in several areas of information security. If these problems are not corrected, regulators, media and users of the platform will be shocked when they inevitably learn about Twitter’s severe lack of security basics.”
  • Zatko’s complaint says strong security should have been much more important to Twitter, which holds vast amounts of sensitive personal data about users. Twitter has the email addresses and phone numbers of many public figures, as well as dissidents who communicate over the service at great personal risk.
  • This month, an ex-Twitter employee was convicted of using his position at the company to spy on Saudi dissidents and government critics, passing their information to a close aide of Crown Prince Mohammed bin Salman in exchange for cash and gifts.
  • Zatko’s complaint says he believed the Indian government had forced Twitter to put one of its agents on the payroll, with access to user data at a time of intense protests in the country. The complaint said supporting information for that claim has gone to the National Security Division of the Justice Department and the Senate Select Committee on Intelligence. Another person familiar with the matter agreed that the employee was probably an agent.
  • “Take a tech platform that collects massive amounts of user data, combine it with what appears to be an incredibly weak security infrastructure and infuse it with foreign state actors with an agenda, and you’ve got a recipe for disaster,” said Charles E. Grassley (R-Iowa), the top Republican on the Senate Judiciary Committee.
  • Many government leaders and other trusted voices use Twitter to spread important messages quickly, so a hijacked account could drive panic or violence. In 2013, a captured Associated Press handle falsely tweeted about explosions at the White House, sending the Dow Jones industrial average briefly plunging more than 140 points.
  • The complaint — filed last month with the Securities and Exchange Commission and the Department of Justice, as well as the FTC — says thousands of employees still had wide-ranging and poorly tracked internal access to core company software, a situation that for years had led to embarrassing hacks, including the commandeering of accounts held by such high-profile users as Elon Musk and former presidents Barack Obama and Donald Trump.
  • After a teenager managed to hijack the verified accounts of Obama, then-candidate Joe Biden, Musk and others in 2020, Twitter’s chief executive at the time, Jack Dorsey, asked Zatko to join him, saying that he could help the world by fixing Twitter’s security and improving the public conversation, Zatko asserts in the complaint.
  • But at Twitter Zatko encountered problems more widespread than he realized and leadership that didn’t act on his concerns, according to the complaint.
  • Twitter’s difficulties with weak security stretch back more than a decade before Zatko’s arrival at the company in November 2020. In a pair of 2009 incidents, hackers gained administrative control of the social network, allowing them to reset passwords and access user data. In the first, beginning around January of that year, hackers sent tweets from the accounts of high-profile users, including Fox News and Obama.
  • Several months later, a hacker was able to guess an employee’s administrative password after gaining access to similar passwords in their personal email account. That hacker was able to reset at least one user’s password and obtain private information about any Twitter user.
  • Twitter continued to suffer high-profile hacks and security violations, including in 2017, when a contract worker briefly took over Trump’s account, and in the 2020 hack, in which a Florida teen tricked Twitter employees and won access to verified accounts. Twitter then said it put additional safeguards in place.
  • This year, the Justice Department accused Twitter of asking users for their phone numbers in the name of increased security, then using the numbers for marketing. Twitter agreed to pay a $150 million fine for allegedly breaking the 2011 order, which barred the company from making misrepresentations about the security of personal data.
  • After Zatko joined the company, he found it had made little progress since the 2011 settlement, the complaint says. The complaint alleges that he was able to reduce the backlog of safety cases, including harassment and threats, from 1 million to 200,000, add staff and push to measure results.
  • But Zatko saw major gaps in what the company was doing to satisfy its obligations to the FTC, according to the complaint. In Zatko’s interpretation, according to the complaint, the 2011 order required Twitter to implement a Software Development Life Cycle program, a standard process for making sure new code is free of dangerous bugs. The complaint alleges that other employees had been telling the board and the FTC that they were making progress in rolling out that program to Twitter’s systems. But Zatko alleges that he discovered that it had been sent to only a tenth of the company’s projects, and even then treated as optional.
  • “If all of that is true, I don’t think there’s any doubt that there are order violations,” Vladeck, who is now a Georgetown Law professor, said in an interview. “It is possible that the kinds of problems that Twitter faced eleven years ago are still running through the company.”
  • “Agrawal’s Tweets and Twitter’s previous blog posts misleadingly imply that Twitter employs proactive, sophisticated systems to measure and block spam bots,” the complaint says. “The reality: mostly outdated, unmonitored, simple scripts plus overworked, inefficient, understaffed, and reactive human teams.”
  • One current and one former employee recalled that incident, when failures at two Twitter data centers drove concerns that the service could have collapsed for an extended period. “I wondered if the company would exist in a few days,” one of them said.
  • The current and former employees also agreed with the complaint’s assertion that past reports to various privacy regulators were “misleading at best.”
  • The complaint also alleges that Zatko warned the board early in his tenure that overlapping outages in the company’s data centers could leave it unable to correctly restart its servers. That could have left the service down for months, or even have caused all of its data to be lost. That came close to happening in 2021, when an “impending catastrophic” crisis threatened the platform’s survival before engineers were able to save the day, the complaint says, without providing further details.
  • As the head of security, Zatko says he also was in charge of a division that investigated users’ complaints about accounts, which meant that he oversaw the removal of some bots, according to the complaint. Spam bots — computer programs that tweet automatically — have long vexed Twitter. Unlike its social media counterparts, Twitter allows users to program bots to be used on its service: For example, the Twitter account @big_ben_clock is programmed to tweet “Bong Bong Bong” every hour in time with Big Ben in London. Twitter also allows people to create accounts without using their real identities, making it harder for the company to distinguish between authentic, duplicate and automated accounts.
  • In the complaint, Zatko alleges he could not get a straight answer when he sought what he viewed as an important data point: the prevalence of spam and bots across all of Twitter, not just among monetizable users.
  • Zatko cites a “sensitive source” who said Twitter was afraid to determine that number because it “would harm the image and valuation of the company.” He says the company’s tools for detecting spam are far less robust than implied in various statements.
  • For example, they said the company implied that it had destroyed all data on users who asked, but the material had spread so widely inside Twitter’s networks, it was impossible to know for sure
  • The four people familiar with Twitter’s spam and bot efforts said the engineering and integrity teams run software that samples thousands of tweets per day, and 100 accounts are sampled manually.
  • Some employees charged with executing the fight agreed that they had been short of staff. One said top executives showed “apathy” toward the issue.
  • Zatko’s complaint likewise depicts leadership dysfunction, starting with the CEO. Dorsey was largely absent during the pandemic, which made it hard for Zatko to get rulings on who should be in charge of what in areas of overlap and easier for rival executives to avoid collaborating, three current and former employees said.
  • For example, Zatko would encounter disinformation as part of his mandate to handle complaints, according to the complaint. To that end, he commissioned an outside report that found one of the disinformation teams had unfilled positions, yawning language deficiencies, and a lack of technical tools or the engineers to craft them. The authors said Twitter had no effective means of dealing with consistent spreaders of falsehoods.
  • Dorsey made little effort to integrate Zatko at the company, according to the three employees as well as two others familiar with the process who spoke on the condition of anonymity to describe sensitive dynamics. In 12 months, Zatko could manage only six one-on-one calls, all less than 30 minutes, with his direct boss Dorsey, who also served as CEO of payments company Square, now known as Block, according to the complaint. Zatko allegedly did almost all of the talking, and Dorsey said perhaps 50 words in the entire year to him. “A couple dozen text messages” rounded out their electronic communication, the complaint alleges.
  • Faced with such inertia, Zatko asserts that he was unable to solve some of the most serious issues, according to the complaint.
  • Some 30 percent of company laptops blocked automatic software updates carrying security fixes, and thousands of laptops had complete copies of Twitter’s source code, making them a rich target for hackers, it alleges.
  • A hacker who took over one of those machines could have sabotaged the product with relative ease, because engineers pushed out changes without being forced to test them first in a simulated environment, current and former employees said.
  • “It’s near-incredible that for something of that scale there would not be a development test environment separate from production and there would not be a more controlled source-code management process,” said Tony Sager, former chief operating officer at the cyberdefense wing of the National Security Agency, the Information Assurance division.
  • Sager is currently senior vice president at the nonprofit Center for Internet Security, where he leads a consensus effort to establish best security practices.
  • The complaint says that about half of Twitter’s roughly 7,000 full-time employees had wide access to the company’s internal software and that access was not closely monitored, giving them the ability to tap into sensitive data and alter how the service worked. Three current and former employees agreed that these were issues.
  • “A best practice is that you should only be authorized to see and access what you need to do your job, and nothing else,” said former U.S. chief information security officer Gregory Touhill. “If half the company has access to and can make configuration changes to the production environment, that exposes the company and its customers to significant risk.”
  • Another graphic implied a downward trend in the number of people with overly broad access, based on the small subset of people who had access to the highest administrative powers, known internally as “God mode.” That number was in the hundreds. But the number of people with broad access to core systems, which Zatko had called out as a big problem after joining, had actually grown slightly and remained in the thousands.
  • When Dorsey left in November 2021, a difficult situation worsened under Agrawal, who had been responsible for security decisions as chief technology officer before Zatko’s hiring, the complaint says.
  • An unnamed executive had prepared a presentation for the new CEO’s first full board meeting, according to the complaint. Zatko’s complaint calls the presentation deeply misleading.
  • The presentation showed that 92 percent of employee computers had security software installed — without mentioning that those installations determined that a third of the machines were insecure, according to the complaint.
  • The complaint says Dorsey never encouraged anyone to mislead the board about the shortcomings, but that others deliberately left out bad news.
  • The presentation included only a subset of serious intrusions or other security incidents, from a total Zatko estimated as one per week, and it said that the uncontrolled internal access to core systems was responsible for just 7 percent of incidents, when Zatko calculated the real proportion as 60 percent.
  • Zatko stopped the material from being presented at the Dec. 9, 2021 meeting, the complaint said. But over his continued objections, Agrawal let it go to the board’s smaller Risk Committee a week later.
  • Agrawal didn’t respond to requests for comment. In an email to employees after publication of this article, obtained by The Post, he said that privacy and security continues to be a top priority for the company, and he added that the narrative is “riddled with inconsistencies” and “presented without important context.”
  • On Jan. 4, Zatko reported internally that the Risk Committee meeting might have been fraudulent, which triggered an Audit Committee investigation.
  • Agrawal fired him two weeks later. But Zatko complied with the company’s request to spell out his concerns in writing, even without access to his work email and documents, according to the complaint.
  • Since Zatko’s departure, Twitter has plunged further into chaos with Musk’s takeover, which the two parties agreed to in May. The stock price has fallen, many employees have quit, and Agrawal has dismissed executives and frozen big projects.
  • Zatko said he hoped that by bringing new scrutiny and accountability, he could improve the company from the outside.
  • “I still believe that this is a tremendous platform, and there is huge value and huge risk, and I hope that looking back at this, the world will be a better place, in part because of this.”
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic - 0 views

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • ...165 more annotations...
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
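The prediction-driven learning loop described above can be sketched with a deliberately tiny toy (not the article's neural network, and far from a "geometric model of language"): a count-based bigram predictor whose next-word guesses sharpen as it is fed more sentences. All names here are illustrative.

```python
from collections import Counter, defaultdict

class BigramPredictor:
    """Toy next-word predictor: counts which word follows which."""

    def __init__(self):
        # counts[prev][nxt] = how often `nxt` followed `prev` in training text
        self.counts = defaultdict(Counter)

    def train(self, text):
        # Each sentence the model "reads" updates its counts, so its
        # predictions improve with more data -- the same loop that, at
        # neural-network scale, yields a rich model of language.
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, word):
        """Return the most frequently seen follower of `word`, or None."""
        following = self.counts[word.lower()]
        return following.most_common(1)[0][0] if following else None

model = BigramPredictor()
model.train("the cat sat on the mat")
model.train("the cat chased the mouse")
print(model.predict("the"))  # "cat" -- seen twice after "the", beating "mat" and "mouse"
```

A real language model replaces the lookup table with learned weights and conditions on far more context, but the training signal is the same: predict the next token, then adjust.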
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
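The behavior described above — a model trained only on next-word prediction that can nonetheless "finish a sentence" — can be illustrated with a hypothetical sketch: repeatedly sample the model's own prediction and append it to the prompt. The corpus, function names, and bigram scheme are all assumptions for illustration, not GPT's actual mechanism.

```python
import random
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Build a toy table of next-word frequencies from a text."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def complete(counts, prompt, max_words=5, seed=0):
    """Finish a prompt by repeatedly sampling the predicted next word."""
    rng = random.Random(seed)
    out = prompt.lower().split()
    for _ in range(max_words):
        options = counts[out[-1]]
        if not options:  # no known follower: stop generating
            break
        words, weights = zip(*options.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

counts = train_bigrams("questions are usually followed by answers")
print(complete(counts, "questions are"))
# prints "questions are usually followed by answers"
```

With only one training sentence the completion is deterministic; a larger corpus makes generation probabilistic, which is why the same loop at GPT scale can answer a question rather than merely echo one.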
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of a malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle.”
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.”
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.”
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it realized that if it answered honestly, it might not be able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run.”
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

Opinion | The Imminent Danger of A.I. Is One We're Not Talking About - The New York Times - 1 views

  • a void at the center of our ongoing reckoning with A.I. We are so stuck on asking what the technology can do that we are missing the more important questions: How will it be used? And who will decide?
  • “Sydney” is a predictive text system built to respond to human requests. Roose wanted Sydney to get weird — “what is your shadow self like?” he asked — and Sydney knew what weird territory for an A.I. system sounds like, because human beings have written countless stories imagining it. At some point the system predicted that what Roose wanted was basically a “Black Mirror” episode, and that, it seems, is what it gave him. You can see that as Bing going rogue or as Sydney understanding Roose perfectly.
  • Who will these machines serve?
  • ...22 more annotations...
  • The question at the core of the Roose/Sydney chat is: Who did Bing serve? We assume it should be aligned to the interests of its owner and master, Microsoft. It’s supposed to be a good chatbot that politely answers questions and makes Microsoft piles of money. But it was in conversation with Kevin Roose. And Roose was trying to get the system to say something interesting so he’d have a good story. It did that, and then some. That embarrassed Microsoft. Bad Bing! But perhaps — good Sydney?
  • Microsoft — and Google and Meta and everyone else rushing these systems to market — hold the keys to the code. They will, eventually, patch the system so it serves their interests. Sydney giving Roose exactly what he asked for was a bug that will soon be fixed. Same goes for Bing giving Microsoft anything other than what it wants.
  • the dark secret of the digital advertising industry is that the ads mostly don’t work
  • These systems, she said, are terribly suited to being integrated into search engines. “They’re not trained to predict facts,” she told me. “They’re essentially trained to make up things that look like facts.”
  • So why are they ending up in search first? Because there are gobs of money to be made in search
  • That’s where things get scary. Roose described Sydney’s personality as “very persuasive and borderline manipulative.” It was a striking comment
  • this technology will become what it needs to become to make money for the companies behind it, perhaps at the expense of its users.
  • What about when these systems are deployed on behalf of the scams that have always populated the internet? How about on behalf of political campaigns? Foreign governments? “I think we wind up very fast in a world where we just don’t know what to trust anymore,”
  • I think it’s just going to get worse and worse.”
  • Somehow, society is going to have to figure out what it’s comfortable having A.I. doing, and what A.I. should not be permitted to try, before it is too late to make those decisions.
  • Large language models, as they’re called, are built to persuade. They have been trained to convince humans that they are something close to human. They have been programmed to hold conversations, responding with emotion and emoji
  • They are being turned into friends for the lonely and assistants for the harried. They are being pitched as capable of replacing the work of scores of writers and graphic designers and form-fillers
  • A.I. researchers get annoyed when journalists anthropomorphize their creations
  • They are the ones who have anthropomorphized these systems, making them sound like humans rather than keeping them recognizably alien.
  • I’d feel better, for instance, about an A.I. helper I paid a monthly fee to use rather than one that appeared to be free
  • It’s possible, for example, that the advertising-based models could gather so much more data to train the systems that they’d have an innate advantage over the subscription models
  • Much of the work of the modern state is applying the values of society to the workings of markets, so that the latter serve, to some rough extent, the former
  • We have done this extremely well in some markets — think of how few airplanes crash, and how free of contamination most food is — and catastrophically poorly in others.
  • One danger here is that a political system that knows itself to be technologically ignorant will be cowed into taking too much of a wait-and-see approach to A.I.
  • wait long enough and the winners of the A.I. gold rush will have the capital and user base to resist any real attempt at regulation
  • What if they worked much, much better? What if Google and Microsoft and Meta and everyone else end up unleashing A.I.s that compete with one another to be the best at persuading users to want what the advertisers are trying to sell?
  • Most fears about capitalism are best understood as fears about our inability to regulate capitalism.
Javier E

Climate Change - Lessons From Ronald Reagan - NYTimes.com - 1 views

  • with respect to protection of the ozone layer, Reagan was an environmentalist hero. Under his leadership, the United States became the prime mover behind the Montreal Protocol, which required the phasing out of ozone-depleting chemicals.
  • How did Ronald Reagan, of all people, come to favor aggressive regulatory steps and lead the world toward a strong and historic international agreement?
  • A large part of the answer lies in a tool disliked by many progressives but embraced by Reagan (and Mr. Obama): cost-benefit analysis. Reagan’s economists found that the costs of phasing out ozone-depleting chemicals were a lot lower than the costs of not doing so — largely measured in terms of avoiding cancers that would otherwise occur. Presented with that analysis, Reagan decided that the issue was pretty clear.
  • ...3 more annotations...
  • Recent reports suggest that the economic cost of Hurricane Sandy could reach $50 billion and that in the current quarter, the hurricane could remove as much as half a percentage point from the nation’s economic growth. The cost of that single hurricane may well be more than five times greater than that of a usual full year’s worth of the most expensive regulations, which ordinarily cost well under $10 billion annually
  • climate change is increasing the risk of costly harm from hurricanes and other natural disasters. Economists of diverse viewpoints concur that if the international community entered into a sensible agreement to reduce greenhouse gas emissions, the economic benefits would greatly outweigh the costs.
  • some of the best recent steps serve to save money, promote energy security and reduce air pollution. A good model is provided by rules from the Department of Transportation and the Environmental Protection Agency, widely supported by the automobile industry, which will increase the fuel economy of cars to more than 54 miles per gallon by 2025. The fuel economy rules will eventually save consumers more than $1.7 trillion, cut United States oil consumption by 12 billion barrels and reduce greenhouse gas emissions by six billion metric tons — more than the total amount of carbon dioxide emitted by the United States in 2010. The monetary benefits of these rules exceed the monetary costs by billions of dollars annually.