History Readings — group items matching “improvement” in title, tags, annotations or url
Javier E

How Could AI Destroy Humanity? - The New York Times - 0 views

  • “AI will steadily be delegated, and could — as it becomes more autonomous — usurp decision making and thinking from current humans and human-run institutions,” said Anthony Aguirre, a cosmologist at the University of California, Santa Cruz and a founder of the Future of Life Institute, the organization behind one of two open letters.
  • “At some point, it would become clear that the big machine that is running society and the economy is not really under human control, nor can it be turned off, any more than the S&P 500 could be shut down,” he said.
  • Are there signs A.I. could do this? Not quite. But researchers are transforming chatbots like ChatGPT into systems that can take actions based on the text they generate. A project called AutoGPT is the prime example.
  • The idea is to give the system goals like “create a company” or “make some money.” Then it will keep looking for ways of reaching that goal, particularly if it is connected to other internet services.
  • A system like AutoGPT can generate computer programs. If researchers give it access to a computer server, it could actually run those programs. In theory, this is a way for AutoGPT to do almost anything online — retrieve information, use applications, create new applications, even improve itself.
  • Systems like AutoGPT do not work well right now. They tend to get stuck in endless loops. Researchers gave one system all the resources it needed to replicate itself. It couldn’t do it. In time, those limitations could be fixed.
  • “People are actively trying to build systems that self-improve,” said Connor Leahy, the founder of Conjecture, a company that says it wants to align A.I. technologies with human values. “Currently, this doesn’t work. But someday, it will. And we don’t know when that day is.”
  • Mr. Leahy argues that as researchers, companies and criminals give these systems goals like “make some money,” they could end up breaking into banking systems, fomenting revolution in a country where they hold oil futures or replicating themselves when someone tries to turn them off.
  • Because they learn from more data than even their creators can understand, these systems also exhibit unexpected behavior. Researchers recently showed that one system was able to hire a human online to defeat a Captcha test. When the human asked if it was “a robot,” the system lied and said it was a person with a visual impairment. Some experts worry that as researchers make these systems more powerful, training them on ever larger amounts of data, they could learn more bad habits.
  • Who are the people behind these warnings? In the early 2000s, a young writer named Eliezer Yudkowsky began warning that A.I. could destroy humanity. His online posts spawned a community of believers.
  • Mr. Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, an A.I. lab that Google acquired in 2014. And many from the community of “EAs” worked inside these labs. They believed that because they understood the dangers of A.I., they were in the best position to build it.
  • The two organizations that recently released open letters warning of the risks of A.I. — the Center for A.I. Safety and the Future of Life Institute — are closely tied to this movement.
  • The recent warnings have also come from research pioneers and industry leaders like Elon Musk, who has long warned about the risks. The latest letter was signed by Sam Altman, the chief executive of OpenAI; and Demis Hassabis, who helped found DeepMind and now oversees a new A.I. lab that combines the top researchers from DeepMind and Google.
  • Other well-respected figures signed one or both of the warning letters, including Dr. Bengio and Geoffrey Hinton, who recently stepped down as an executive and researcher at Google. In 2018, they received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
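The goal-driven loop the AutoGPT annotations above describe — take a goal like “make some money,” generate the next action, execute it, repeat — can be sketched in a few lines. This is a hypothetical illustration, not AutoGPT’s actual code: `propose_action` is a stand-in for the language-model call, and the step cap reflects the article’s point that current systems tend to get stuck in endless loops.

```python
def propose_action(goal, history):
    """Stand-in for a language-model call: pick the next step toward the goal.

    A real agent would prompt an LLM with the goal and the action history;
    here we return canned steps so the sketch is self-contained.
    """
    steps = ["search the web", "draft a plan", "execute the plan"]
    return steps[len(history)] if len(history) < len(steps) else "done"

def run_agent(goal, max_steps=10):
    """Core agent loop: propose an action, record it, stop on 'done' or the cap.

    The max_steps cap is the guard against the endless loops the article
    describes; without it, a confused model could cycle forever.
    """
    history = []
    for _ in range(max_steps):
        action = propose_action(goal, history)
        if action == "done":
            break
        # A real system would execute the action here: run generated code,
        # call internet services, spawn sub-agents, and feed results back in.
        history.append(action)
    return history

print(run_agent("make some money"))
```

The danger the article describes comes from replacing the canned `propose_action` with a capable model and the `history.append` step with real execution against live systems.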
Javier E

Opinion | I Studied Five Countries' Health Care Systems. We Need to Get More Creative With Ours. - The New York Times - 0 views

  • I’m convinced that the ability to get good, if not great, care in facilities that aren’t competing with one another is the main way that other countries obtain great outcomes for much less money. It also allows for more regulation and control to keep a lid on prices.
  • Because of government subsidies, most people spend less than 25 percent of their income on housing and can choose between buying new flats at highly subsidized prices or flats available for resale on an open market.
  • Other social determinants that matter include food security, access to education and even race. As part of New Zealand’s reforms, its Public Health Agency, which was established less than a year ago, specifically puts a “greater emphasis on equity and the wider determinants of health such as income, education and housing.” It also specifically seeks to address racism in health care, especially that which affects the Maori population.
  • When I asked about Australia’s rather impressive health outcomes, he said that while “Australia’s mortality that is amenable to, or influenced by, the health care system specifically is good, it’s not fundamentally better than that seen in peer O.E.C.D. countries, the U.S. excepted. Rather, Australia’s public health, social policy and living standards are more responsible for outcomes.”
  • Addressing these issues in the United States would require significant investment, to the tune of hundreds of billions or even trillions of dollars a year. That seems impossible until you remember that we spent more than $4.4 trillion on health care in 2022. We just don’t think of social policies like housing, food and education as health care.
  • Other countries, on the other hand, recognize that these issues are just as important, if not more so, than hospitals, drugs and doctors. Our narrow view too often defines health care as what you get when you’re sick, not what you might need to remain well.
  • When other countries choose to spend less on their health care systems (and it is a choice), they take the money they save and invest it in programs that benefit their citizens by improving social determinants of health
  • In the United States, conversely, we argue that the much less resourced programs we already have need to be cut further. The recent debt limit compromise reduces discretionary spending and makes it harder for people to access government programs like food stamps.
  • When I asked experts in each of these countries what might improve the areas where they are deficient (for instance, the N.H.S. has been struggling quite a bit as of late), they all replied the same way: more money. Some of them lack the political will to allocate those funds. Others can’t make major investments without drawing from other priorities.
  • Though Singapore will need to spend more, it’s very unlikely to go above the 8 percent to 10 percent of G.D.P. that pretty much all developed countries have historically spent.
  • That is, all of them except the United States. We currently spend about 18 percent of G.D.P. on health care. That’s almost $12,000 per American. It’s about twice what other countries currently spend.
  • We cannot seem to do what other countries think is easy, while we’ve happily decided to do what other countries think is impossible. But this is also what gives me hope. We’ve already decided to spend the money; we just need to spend it better.
Javier E

Parisians get to know their neighbours with Sunday lunch for 1,000 - 0 views

  • Hyper Voisins, thanks both to its size and the sheer diversity of its activities, goes much further. About 5,000 people, he believes, is the maximum, but, with his group as a model, he has proposed to Paris authorities that they encourage 150 more to be set up. “That would be 750,000 people, a third of the population,” he said. “We would change the face of Paris and turn it into a convivial city.”
  • Among the 1,000 diners, there were also a few hundred from other parts of the city, who are welcome to sign up and come along.
  • Roberdeau now praises the scheme for having changed her life: a few years ago, after she was widowed, her daughter, worried about how long she could go on living alone, tried to persuade her to move into a retirement home nearer to her in Charente-Maritime, 300 miles to the south. “I didn’t want to leave the area or the apartment where I spent the last years with my husband,” said Roberdeau. Thanks to the support she gets from her neighbours, she has been able to remain.
  • Convincing some of Bernard’s neighbours to join in initially proved more of a challenge. Among them was Mireille Roberdeau, who has lived in the same top-floor flat in Rue de l’Aude since 2000, when her late husband worked for the company that built it. Aged 88, she is the doyenne of the “super neighbours”.
  • A former executive at Ouest-France, the country’s biggest newspaper, he has long been fascinated by how people interact, and read widely in the academic literature on the subject. It was only after he was made redundant a decade ago with a “big cheque,” however, that he had the chance to implement his ideas.
  • “I had the choice of buying a house or financing this project,” said Bernard. “I told my wife I would only do it for three years and then go back to normal life. But I lied and I decided to keep on doing it.” Several years on, his wife, Béatrice, appears to have forgiven him.
  • Bernard’s own project began with the simple idea of encouraging people to say “bonjour” to each other a bit more. “Our challenge, which was slightly stupid but also slightly poetic, was to transform neighbours who say hello to each other into ‘super neighbours’ who say hello 50 times a day,” he told me a few days before the lunch. “It’s all about finding the lowest common denominator.”
  • Patrick Bernard, 63, the group’s founder, is evangelical about hyperlocalism. He thinks the way to improve social cohesion and quality of life in big cities is to encourage the rise of “micro-neighbourhoods”, or what he calls “three-minute villages”. Such grassroots initiatives, he argues, can complement a recent “top down” drive by planners in Paris and elsewhere to create “15-minute cities”, in which everything needed for daily life is within easy reach.
  • The main emphasis, though, is on encouraging people to meet and get to know those who live around them, helped by dozens of WhatsApp groups, covering everything from pets, knitting and babysitting to cheese, fish and baking cakes. Membership is free.
  • Six years after the pioneers first sat down together, the group has expanded into every aspect of the lives of the 5,000 residents living in 15 or so local roads: it is in part about improving the environment, whether planting greenery in the street, finding innovative ways to recycle or compost or transforming the once-traffic-filled local square — Place des Droits-de-l’Enfant — into a village square, with a market, benches for people to sit and concerts.
  • The annual Table d’Aude — or the “longest table in Paris”, as it styles itself — is the work of a group called La République des Hyper Voisins (The Republic of the Super Neighbours), which aims to recreate the traditional conviviality of village life in a big-city setting.
Javier E

Your Home Belongs to Renovation TV - The Atlantic - 0 views

  • HGTV is regularly a top-five cable channel—and its growing popularity has coincided with a huge increase in actual renovations. In the 1990s, American homeowners spent an average of more than $90 billion annually on remodeling their homes. By 2020, it was more than $400 billion
  • For homeowners, pressure to keep up with the Joneses has reached a logical extreme. Everywhere you look, there are new reasons to be unhappy with your house, and new trends you can follow to fix it.
  • Annetta Grant, a professor at Bucknell University who studies the home-renovation market, recently co-authored an ethnography on how home-reno media has changed people’s relationship to their home. She and her fellow researcher, Jay Handelman, conducted extensive interviews with 17 people in the process of renovating their home, attended a consumer-renovation expo, interviewed renovation-service providers, and consumed dozens of hours and hundreds of pages of home-reno media.
  • The primary finding was that home-renovation media seems to make people feel uneasy in their own home. In academic terms, the phenomenon is known as dysplacement, or a sense that our long-held understanding of what our home means to us is out of sync with what changing market forces have decided a home should be. In layman’s terms, it’s the unsettling feeling that the home you’ve made for yourself is no longer a good one, and that other people think less of you for it.
  • People are highly sensitive to feeling out-of-sorts in their home, Grant told me. This is one of the reasons that moving and unpacking are so stressful, and that accumulating unnecessary clutter feels so bothersome.
  • Americans have long understood successful home ownership and homemaking as indicative of personal success and character. Beginning in the postwar era, “that was largely achieved by customizing your home to the personality that you wanted to portray,”
  • Even in the tract-home developments of mid-century suburbs, the insides of houses tended to be idiosyncratic, with liberal use of color and texture and pattern—on the walls, the floors, the furniture. Some of those choices were the result of trends, of course, but there was plenty of variety within those parameters, and people tended to pick things they liked and stick with them
  • Now, however, “personalization is being ripped out of people’s homes” in favor of market-pleasing standardization.
  • Grant said that people expressed embarrassment at having friends over to their outdated home, so much so that they’d avoid hosting their book club or planning parties — precisely the kinds of happy occasions that your home is supposed to be for.
  • The goal of this media apparatus, Grant said, isn’t to provide knowledge and inspiration for people improving the country’s aging housing stock but to keep people engaged in a process of constant updating—discarding old furniture and fixtures and appliances and buying new ones in much the way many people now cycle through an endless stream of fast-fashion pieces, trying to live up to standards that they can never quite pin down, and therefore never quite satisfy
Javier E

Opinion | The Alt-Right Manipulated My Comic. Then A.I. Claimed It. - The New York Times - 0 views

  • Legally, it appears as though LAION was able to scour what seems like the entire internet because it deems itself a nonprofit organization engaging in academic research. While it was funded at least in part by Stability AI, the company that created Stable Diffusion, it is technically a separate entity. Stability AI then used its nonprofit research arm to create A.I. generators first via Stable Diffusion and then commercialized in a new model called DreamStudio.
  • What makes up these data sets? Well, pretty much everything. For artists, many of us had what amounted to our entire portfolios fed into the data set without our consent. This means that A.I. generators were built on the backs of our copyrighted work, and through a legal loophole, they were able to produce copies of varying levels of sophistication.
  • Being able to imitate a living artist has obvious implications for our careers, and some artists are already dealing with real challenges to their livelihood.
  • Greg Rutkowski, a hugely popular concept artist, has been used in a prompt for Stable Diffusion upward of 100,000 times. Now, his name is no longer attached to just his own work, but it also summons a slew of imitations of varying quality that he hasn’t approved. This could confuse clients, and it muddies the consistent and precise output he usually produces. When I saw what was happening to him, I thought of my battle with my shadow self. We were each fighting a version of ourself that looked similar but that was uncanny, twisted in a way to which we didn’t consent.
  • In theory, everyone is at risk for their work or image to become a vulgarity with A.I., but I suspect those who will be the most hurt are those who are already facing the consequences of improving technology, namely members of marginalized groups.
  • In the future, with A.I. technology, many more people will have a shadow self with whom they must reckon. Once the features that we consider personal and unique — our facial structure, our handwriting, the way we draw — can be programmed and contorted at the click of a mouse, the possibilities for violations are endless.
  • I’ve been playing around with several generators, and so far none have mimicked my style in a way that can directly threaten my career, a fact that will almost certainly change as A.I. continues to improve. It’s undeniable; the A.I.s know me. Most have captured the outlines and signatures of my comics — black hair, bangs, striped T-shirts. To others, it may look like a drawing taking shape. I see a monster forming.
Javier E

The inadequacy of the stories we told about the pandemic - 0 views

  • Increasingly, it feels possible to take stock not just of what happened but also of the inadequacy of some of the stories we told ourselves to make sense of the mess.
  • This week, I want to consider two prominent frameworks about the pandemic that are nevertheless rarely considered alongside each other: disparities in Covid mortality by race and by partisanship.
  • Partisanship was a huge driver of that more significant second-year failure, since Republican resistance to vaccination explains a large share of cumulative American Covid mortality
  • Black mortality was 65 percent higher and Hispanic mortality 75 percent higher.
  • at least in Ohio and Florida, despite what seemed at the time to be almost unbridgeable divides over things like mask wearing and school closures, social distancing and lockdowns, the excess mortality gap between Republicans and Democrats in the pre-vaccine phase of the pandemic was relatively small, with Republican excess mortality only 22 percent higher than the death rate among Democrats.
  • The country clearly stumbled in 2020. And yet before vaccines were widely available, and when we tried to slow the spread of the disease through behavioral measures, the scale of the failure was relatively small compared with what followed in the years after.
  • In 2020, American death rates and excess mortality were merely at the worse end of the range among its peer countries — above Germany and barely above France, but below Britain, Italy and Spain, for instance
  • In the vaccine era of the pandemic, American performance has been much worse, with our death rates becoming much more conspicuous and dramatic outliers — enough to make the country by far the worst performing of its peers.
  • Overall — from the beginning of the pandemic until the arrival of Omicron — Republican excess mortality in Ohio and Florida was 76 percent higher than Democratic excess mortality.
  • only 62 percent of Republicans have completed their primary vaccinations, compared with 87 percent of Democrats.
  • income and education tell a similar story: Only 67 percent of Americans with household incomes below $40,000 have completed their primary vaccinations, compared with 85 percent with household incomes above $90,000
  • What does this all mean for the next pandemic fall and winter? Well, thankfully, the racial and ethnic gaps around vaccination have almost entirely closed, which is one major reason the mortality gap has, too: According to Kaiser, 74 percent of Black and Hispanic Americans have been vaccinated, compared with 77 percent of whites
  • The demographic gaps for boosters are slightly larger: 50 percent of white adults have been boosted, according to Kaiser, compared with 43 percent of Black adults and 40 percent of Hispanic adults. (Only 31 percent of Republicans have been boosted.)
  • while the news from Europe isn’t especially reassuring, it would probably take an Omicron-like curveball to deliver a new American peak like those we experienced each of the previous two winters, and there does not seem to be anything like that on the horizon.
  • But according to The Times’s global vaccination tracker, Americans are doing almost exactly as poorly with boosters as we did with the first round of vaccines, not worse. The country ranks 66th globally in the share of population that has completed a primary vaccination course. For a first booster, it ranks 71st.
  • One set of answers is implied by the story of vaccination and mortality by race, and the way improvements on one measure changed the trajectory of the other: more first shots and more boosting. This is the central strategy offered by the Biden administration. But the vaccinated share of the country has barely grown in months, and the uptake of next generation bivalent boosters looks, in the early stages, quite abysmal.
  • yet Americans are still dying at an annualized rate above 100,000 — a rate that may well grow as we head deeper into the fall. What are we doing about that?
  • another possible set of responses suggests itself too, one that wouldn’t require a reversal of vaccination trends or a transformation of the pandemic culture war either: an approach to public health infrastructure, both literal and legal, that would reduce spread through background interventions without meaningfully burdening individual Americans at all.
  • in a perverse way the arrival of vaccines seemed to almost retire them from public discussion. They include better ventilation in public buildings, particularly schools
  • Testing could help, too, of course, though culturally it seems to have been dumped into a bucket with masks, as an individual tool and individual burden, rather than one with investments in ventilation improvements, as part of an invisible Covid-mitigating infrastructure
  • Over the last six months, an individual risk approach to Covid has predominated — both at the level of public health guidance and for most individuals navigating the new, quasi-endemic landscape
  • This argument is unhelpful, not just because it is needlessly toxic but also because the terms themselves are inadequate. One of the lessons of that early phase of the pandemic, and especially its racial disparities, is that mitigation is not strictly a matter of individual risk management. Spread matters, too, as do structural factors. We have tools to help both, without returning the country psychologically to the depths of Covid panic.
  • And although the partisan gap grew with the arrival of vaccines, it never grew as large as the racial gap had been in early 2020. In 2021, Republican excess mortality in those two states was at its highest, compared to Democratic levels: 153 percent. At the peak of racial disparity in the pandemic’s first wave, Black Americans were dying at more than three times the rate of white Americans.
  • structural factors — not only race but class and education, too — appear to loom just as large, complicating any intuitive model of what went wrong here that emphasizes the pandemic culture war above all else.
  • Especially in the initial phases of spread, it can be hard to disentangle the effects of policy and behavioral response from somewhat random drivers like where the virus arrived first, what sorts of places those were and what kinds of people populated them, and even what the weather was like
  • This dynamic changed almost on a dime with the introduction of vaccines, with an enormous gap opening up between Democrats and Republicans in 2021
  • the excess mortality data collected here suggests that however self-destructive red states and Republican individuals seemed to be, in 2020, the ultimate cost of that recklessness was less dramatic.
  • For Americans without college degrees, the number is also 67 percent, compared with 85 percent of college graduates. For uninsured adults under 65, it is just 60 percent
Javier E

As Traditional Bulbs Fade Out, LED Lights Keep Improving - The New York Times - 0 views

  • LEDs are now dimmable, with their light available in a range of colors, some warmer, some cooler. They are stealthy and can assume the familiar forms of old-fashioned bulbs or disappear altogether into the fixture, manifesting themselves only as bright beams.
  • LEDs use 90 percent less energy and last up to 25 times longer than incandescent bulbs.
  • In the early days, when LEDs lasted a mere 25,000 hours, they couldn’t be swapped after burning out, making the entire lamp defunct. Now they have life spans of 50,000 hours and are more likely to be replaceable.
Javier E

Opinion | It's the End of Computer Programming As We Know It. (And I Feel Fine.) - The New York Times - 0 views

  • “Programming will be obsolete,” Matt Welsh, a former engineer at Google and Apple, predicted recently. Welsh now runs an A.I. start-up, but his prediction, while perhaps self-serving, doesn’t sound implausible:
  • I believe the conventional idea of “writing a program” is headed for extinction, and indeed, for all but very specialized applications, most software, as we know it, will be replaced by A.I. systems that are trained rather than programmed. In situations where one needs a “simple” program … those programs will, themselves, be generated by an A.I. rather than coded by hand.
  • there’s also a way in which A.I. could mark the beginning of a new kind of programming — one that doesn’t require us to learn code but instead transforms human-language instructions into software. An A.I. “doesn’t care how you program it — it will try to understand what you mean,” Jensen Huang, the chief executive of the chip-making company Nvidia, said in a speech this week at the Computex conference in Taiwan. He added: “We have closed the digital divide. Everyone is a programmer now — you just have to say something to the computer.”
  • Wait a second, though — wasn’t coding supposed to be one of the can’t-miss careers of the digital age?
  • computer programming grew from a nerdy hobby into a vocational near-imperative, the one skill to acquire to survive technological dislocation
  • Joe Biden to coal miners: Learn to code! Twitter trolls to laid-off journalists: Learn to code! Tim Cook to French kids: Apprendre à programmer!
  • Over time, from the development of assembly language through more human-readable languages like C and Python and Java, programming has climbed what computer scientists call increasing levels of abstraction — at each step growing more removed from the electronic guts of computing and more approachable to the people who use them.
  • A.I. might now be enabling the final layer of abstraction: The level on which you can tell a computer to do something the same way you’d tell another human.
  • GitHub, the coder’s repository owned by Microsoft, surveyed 2,000 programmers last year about how they’re using GitHub’s A.I. coding assistant, Copilot. A majority said Copilot helped them feel less frustrated and more fulfilled in their jobs; 88 percent said it improved their productivity. Researchers at Google found that among the company’s programmers, A.I. reduced “coding iteration time” by 6 percent.
Javier E

Whistleblower: Twitter misled investors, FTC and underplayed spam issues - Washington Post - 0 views

  • Twitter executives deceived federal regulators and the company’s own board of directors about “extreme, egregious deficiencies” in its defenses against hackers, as well as its meager efforts to fight spam, according to an explosive whistleblower complaint from its former security chief.
  • The complaint from former head of security Peiter Zatko, a widely admired hacker known as “Mudge,” depicts Twitter as a chaotic and rudderless company beset by infighting, unable to properly protect its 238 million daily users including government agencies, heads of state and other influential public figures.
  • Among the most serious accusations in the complaint, a copy of which was obtained by The Washington Post, is that Twitter violated the terms of an 11-year-old settlement with the Federal Trade Commission by falsely claiming that it had a solid security plan. Zatko’s complaint alleges he had warned colleagues that half the company’s servers were running out-of-date and vulnerable software and that executives withheld dire facts about the number of breaches and lack of protection for user data, instead presenting directors with rosy charts measuring unimportant changes.
  • “Security and privacy have long been top companywide priorities at Twitter,” said Twitter spokeswoman Rebecca Hahn. She said that Zatko’s allegations appeared to be “riddled with inaccuracies” and that Zatko “now appears to be opportunistically seeking to inflict harm on Twitter, its customers, and its shareholders.” Hahn said that Twitter fired Zatko after 15 months “for poor performance and leadership.” Attorneys for Zatko confirmed he was fired but denied it was for performance or leadership.
  • the whistleblower document alleges the company prioritized user growth over reducing spam, though unwanted content made the user experience worse. Executives stood to win individual bonuses of as much as $10 million tied to increases in daily users, the complaint asserts, and nothing explicitly for cutting spam.
  • Chief executive Parag Agrawal was “lying” when he tweeted in May that the company was “strongly incentivized to detect and remove as much spam as we possibly can,” the complaint alleges.
  • Zatko described his decision to go public as an extension of his previous work exposing flaws in specific pieces of software and broader systemic failings in cybersecurity. He was hired at Twitter by former CEO Jack Dorsey in late 2020 after a major hack of the company’s systems.
  • “I felt ethically bound. This is not a light step to take,” said Zatko, who was fired by Agrawal in January. He declined to discuss what happened at Twitter, except to stand by the formal complaint. Under SEC whistleblower rules, he is entitled to legal protection against retaliation, as well as potential monetary rewards.
  • A person familiar with Zatko’s tenure said the company investigated Zatko’s security claims during his time there and concluded they were sensationalistic and without merit. Four people familiar with Twitter’s efforts to fight spam said the company deploys extensive manual and automated tools to both measure the extent of spam across the service and reduce it.
  • In 1998, Zatko had testified to Congress that the internet was so fragile that he and others could take it down with a half-hour of concentrated effort. He later served as the head of cyber grants at the Defense Advanced Research Projects Agency, the Pentagon innovation unit that had backed the internet’s invention.
  • Overall, Zatko wrote in a February analysis for the company attached as an exhibit to the SEC complaint, “Twitter is grossly negligent in several areas of information security. If these problems are not corrected, regulators, media and users of the platform will be shocked when they inevitably learn about Twitter’s severe lack of security basics.”
  • Zatko’s complaint says strong security should have been much more important to Twitter, which holds vast amounts of sensitive personal data about users. Twitter has the email addresses and phone numbers of many public figures, as well as dissidents who communicate over the service at great personal risk.
  • This month, an ex-Twitter employee was convicted of using his position at the company to spy on Saudi dissidents and government critics, passing their information to a close aide of Crown Prince Mohammed bin Salman in exchange for cash and gifts.
  • Zatko’s complaint says he believed the Indian government had forced Twitter to put one of its agents on the payroll, with access to user data at a time of intense protests in the country. The complaint said supporting information for that claim has gone to the National Security Division of the Justice Department and the Senate Select Committee on Intelligence. Another person familiar with the matter agreed that the employee was probably an agent.
  • “Take a tech platform that collects massive amounts of user data, combine it with what appears to be an incredibly weak security infrastructure and infuse it with foreign state actors with an agenda, and you’ve got a recipe for disaster,” said Charles E. Grassley (R-Iowa), the top Republican on the Senate Judiciary Committee.
  • Many government leaders and other trusted voices use Twitter to spread important messages quickly, so a hijacked account could drive panic or violence. In 2013, a captured Associated Press handle falsely tweeted about explosions at the White House, sending the Dow Jones industrial average briefly plunging more than 140 points.
  • After a teenager managed to hijack the verified accounts of Obama, then-candidate Joe Biden, Musk and others in 2020, Twitter’s chief executive at the time, Jack Dorsey, asked Zatko to join him, saying that he could help the world by fixing Twitter’s security and improving the public conversation, Zatko asserts in the complaint.
  • The complaint — filed last month with the Securities and Exchange Commission and the Department of Justice, as well as the FTC — says thousands of employees still had wide-ranging and poorly tracked internal access to core company software, a situation that for years had led to embarrassing hacks, including the commandeering of accounts held by such high-profile users as Elon Musk and former presidents Barack Obama and Donald Trump.
  • But at Twitter Zatko encountered problems more widespread than he realized and leadership that didn’t act on his concerns, according to the complaint.
  • Twitter’s difficulties with weak security stretch back more than a decade before Zatko’s arrival at the company in November 2020. In a pair of 2009 incidents, hackers gained administrative control of the social network, allowing them to reset passwords and access user data. In the first, beginning around January of that year, hackers sent tweets from the accounts of high-profile users, including Fox News and Obama.
  • Several months later, a hacker was able to guess an employee’s administrative password after gaining access to similar passwords in their personal email account. That hacker was able to reset at least one user’s password and obtain private information about any Twitter user.
  • Twitter continued to suffer high-profile hacks and security violations, including in 2017, when a contract worker briefly took over Trump’s account, and in the 2020 hack, in which a Florida teen tricked Twitter employees and won access to verified accounts. Twitter then said it put additional safeguards in place.
  • This year, the Justice Department accused Twitter of asking users for their phone numbers in the name of increased security, then using the numbers for marketing. Twitter agreed to pay a $150 million fine for allegedly breaking the 2011 order, which barred the company from making misrepresentations about the security of personal data.
  • After Zatko joined the company, he found it had made little progress since the 2011 settlement, the complaint says. The complaint alleges that he was able to reduce the backlog of safety cases, including harassment and threats, from 1 million to 200,000, add staff and push to measure results.
  • But Zatko saw major gaps in what the company was doing to satisfy its obligations to the FTC, according to the complaint. In Zatko’s interpretation, according to the complaint, the 2011 order required Twitter to implement a Software Development Life Cycle program, a standard process for making sure new code is free of dangerous bugs. The complaint alleges that other employees had been telling the board and the FTC that they were making progress in rolling out that program to Twitter’s systems. But Zatko alleges that he discovered that it had been sent to only a tenth of the company’s projects, and even then treated as optional.
  • “If all of that is true, I don’t think there’s any doubt that there are order violations,” Vladeck, who is now a Georgetown Law professor, said in an interview. “It is possible that the kinds of problems that Twitter faced eleven years ago are still running through the company.”
  • “Agrawal’s Tweets and Twitter’s previous blog posts misleadingly imply that Twitter employs proactive, sophisticated systems to measure and block spam bots,” the complaint says. “The reality: mostly outdated, unmonitored, simple scripts plus overworked, inefficient, understaffed, and reactive human teams.”
  • One current and one former employee recalled that incident, when failures at two Twitter data centers drove concerns that the service could have collapsed for an extended period. “I wondered if the company would exist in a few days,” one of them said.
  • The current and former employees also agreed with the complaint’s assertion that past reports to various privacy regulators were “misleading at best.”
  • For example, they said the company implied that it had destroyed all data on users who asked, but the material had spread so widely inside Twitter’s networks that it was impossible to know for sure.
  • As the head of security, Zatko says he also was in charge of a division that investigated users’ complaints about accounts, which meant that he oversaw the removal of some bots, according to the complaint. Spam bots — computer programs that tweet automatically — have long vexed Twitter. Unlike its social media counterparts, Twitter allows users to program bots to be used on its service: For example, the Twitter account @big_ben_clock is programmed to tweet “Bong Bong Bong” every hour in time with Big Ben in London. Twitter also allows people to create accounts without using their real identities, making it harder for the company to distinguish between authentic, duplicate and automated accounts.
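The programmed-bot idea described above is simple enough to sketch. Below is a minimal, hypothetical version of an hourly chime bot in Python; the message format (one “BONG” per hour on a 12-hour clock) and the function name are assumptions for illustration, and the actual Twitter API call that would post the tweet is omitted:

```python
from datetime import datetime

def big_ben_message(now: datetime) -> str:
    """Compose an hourly chime message: one 'BONG' per hour on a 12-hour clock."""
    hour = now.hour % 12 or 12  # both midnight and noon chime twelve times
    return " ".join(["BONG"] * hour)

# At 3 p.m. the message is "BONG BONG BONG"
print(big_ben_message(datetime(2022, 8, 23, 15, 0)))
```

A real bot would run this on an hourly schedule and hand the string to the posting endpoint of whatever API client it uses; the composition logic itself needs no special access.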
  • In the complaint, Zatko alleges he could not get a straight answer when he sought what he viewed as an important data point: the prevalence of spam and bots across all of Twitter, not just among monetizable users.
  • Zatko cites a “sensitive source” who said Twitter was afraid to determine that number because it “would harm the image and valuation of the company.” He says the company’s tools for detecting spam are far less robust than implied in various statements.
  • The complaint also alleges that Zatko warned the board early in his tenure that overlapping outages in the company’s data centers could leave it unable to correctly restart its servers. That could have left the service down for months, or even have caused all of its data to be lost. That came close to happening in 2021, when an “impending catastrophic” crisis threatened the platform’s survival before engineers were able to save the day, the complaint says, without providing further details.
  • The four people familiar with Twitter’s spam and bot efforts said the engineering and integrity teams run software that samples thousands of tweets per day, and 100 accounts are sampled manually.
  • Some employees charged with executing the fight agreed that they had been short of staff. One said top executives showed “apathy” toward the issue.
  • Zatko’s complaint likewise depicts leadership dysfunction, starting with the CEO. Dorsey was largely absent during the pandemic, which made it hard for Zatko to get rulings on who should be in charge of what in areas of overlap and easier for rival executives to avoid collaborating, three current and former employees said.
  • For example, Zatko would encounter disinformation as part of his mandate to handle complaints, according to the complaint. To that end, he commissioned an outside report that found one of the disinformation teams had unfilled positions, yawning language deficiencies, and a lack of technical tools or the engineers to craft them. The authors said Twitter had no effective means of dealing with consistent spreaders of falsehoods.
  • Dorsey made little effort to integrate Zatko at the company, according to the three employees as well as two others familiar with the process who spoke on the condition of anonymity to describe sensitive dynamics. In 12 months, Zatko could manage only six one-on-one calls, all less than 30 minutes, with his direct boss Dorsey, who also served as CEO of payments company Square, now known as Block, according to the complaint. Zatko allegedly did almost all of the talking, and Dorsey said perhaps 50 words to him in the entire year. “A couple dozen text messages” rounded out their electronic communication, the complaint alleges.
  • Faced with such inertia, Zatko asserts that he was unable to solve some of the most serious issues, according to the complaint.
  • Some 30 percent of company laptops blocked automatic software updates carrying security fixes, and thousands of laptops had complete copies of Twitter’s source code, making them a rich target for hackers, it alleges.
  • A successful hacker takeover of one of those machines would have been able to sabotage the product with relative ease, because the engineers pushed out changes without being forced to test them first in a simulated environment, current and former employees said.
  • “It’s near-incredible that for something of that scale there would not be a development test environment separate from production and there would not be a more controlled source-code management process,” said Tony Sager, former chief operating officer at the cyberdefense wing of the National Security Agency, the Information Assurance division.
  • Sager is currently senior vice president at the nonprofit Center for Internet Security, where he leads a consensus effort to establish best security practices.
  • The complaint says that about half of Twitter’s roughly 7,000 full-time employees had wide access to the company’s internal software and that access was not closely monitored, giving them the ability to tap into sensitive data and alter how the service worked. Three current and former employees agreed that these were issues.
  • “A best practice is that you should only be authorized to see and access what you need to do your job, and nothing else,” said former U.S. chief information security officer Gregory Touhill. “If half the company has access to and can make configuration changes to the production environment, that exposes the company and its customers to significant risk.”
  • The complaint says Dorsey never encouraged anyone to mislead the board about the shortcomings, but that others deliberately left out bad news.
  • When Dorsey left in November 2021, a difficult situation worsened under Agrawal, who had been responsible for security decisions as chief technology officer before Zatko’s hiring, the complaint says.
  • An unnamed executive had prepared a presentation for the new CEO’s first full board meeting, according to the complaint. Zatko’s complaint calls the presentation deeply misleading.
  • The presentation showed that 92 percent of employee computers had security software installed — without mentioning that those installations determined that a third of the machines were insecure, according to the complaint.
  • Another graphic implied a downward trend in the number of people with overly broad access, based on the small subset of people who had access to the highest administrative powers, known internally as “God mode.” That number was in the hundreds. But the number of people with broad access to core systems, which Zatko had called out as a big problem after joining, had actually grown slightly and remained in the thousands.
  • The presentation included only a subset of serious intrusions or other security incidents, from a total Zatko estimated as one per week, and it said that the uncontrolled internal access to core systems was responsible for just 7 percent of incidents, when Zatko calculated the real proportion as 60 percent.
  • Zatko stopped the material from being presented at the Dec. 9, 2021 meeting, the complaint said. But over his continued objections, Agrawal let it go to the board’s smaller Risk Committee a week later.
  • Agrawal didn’t respond to requests for comment. In an email to employees after publication of this article, obtained by The Post, he said that privacy and security continue to be a top priority for the company, and he added that the narrative is “riddled with inconsistencies” and “presented without important context.”
  • On Jan. 4, Zatko reported internally that the Risk Committee meeting might have been fraudulent, which triggered an Audit Committee investigation.
  • Agrawal fired him two weeks later. But Zatko complied with the company’s request to spell out his concerns in writing, even without access to his work email and documents, according to the complaint.
  • Since Zatko’s departure, Twitter has plunged further into chaos with Musk’s takeover, which the two parties agreed to in May. The stock price has fallen, many employees have quit, and Agrawal has dismissed executives and frozen big projects.
  • Zatko said he hoped that by bringing new scrutiny and accountability, he could improve the company from the outside.
  • “I still believe that this is a tremendous platform, and there is huge value and huge risk, and I hope that looking back at this, the world will be a better place, in part because of this.”
Javier E

Opinion | What Happens When Global Human Population Peaks? - The New York Times - 0 views

  • The global human population has been climbing for the past two centuries. But what is normal for all of us alive today — growing up while the world is growing rapidly — may be a blip in human history.
  • All of the predictions agree on one thing: We peak soon.
  • then we shrink. Humanity will not reach a plateau and then stabilize. It will begin an unprecedented decline.
  • As long as life continues as it has — with people choosing smaller family sizes, as is now common in most of the world — then in the 22nd or 23rd century, our decline could be just as steep as our rise.
  • there is no consensus on exactly how quickly populations will fall after that. Over the past 100 years, the global population quadrupled, from two billion to eight billion.
  • What would happen as a consequence? Over the past 200 years, humanity’s population growth has gone hand in hand with profound advances in living standards and health: longer lives, healthier children, better education, shorter workweeks and many more improvements
  • In this short period, humanity has been large and growing. Economists who study growth and progress don’t think this is a coincidence. Innovations and discoveries are made by people. In a world with fewer people in it, the loss of so much human potential may threaten humanity’s continued path toward better lives.
  • It would be tempting to welcome depopulation as a boon to the environment. But the pace of depopulation will be too slow for our most pressing problems. It will not replace the need for urgent action on climate, land use, biodiversity, pollution and other environmental challenges
  • If the population hits around 10 billion people in the 2080s and then begins to decline, it might still exceed today’s eight billion after 2100
  • Population decline would come quickly, measured in generations, and yet arrive far too slowly to be more than a sideshow in the effort to save the planet. Work to decarbonize our economies and reform our land use and food systems must accelerate in this decade and the next, not start in the next century.
  • This isn’t a call to immediately remake our societies and economies in the service of birthrates. It’s a call to start conversations now, so that our response to low birthrates is a decision that is made with the best ideas from all of us.
  • If we wait, the less inclusive, less compassionate, less calm elements within our society and many societies worldwide may someday call depopulation a crisis and exploit it to suit their agendas — of inequality, nationalism, exclusion or control
  • Births won’t automatically rebound just because it would be convenient for advancing living standards or sharing the burden of care work
  • We know that fertility rates can stay below replacement because they have. They’ve been below that level in Brazil and Chile for about 20 years; in Thailand for about 30 years; and in Canada, Germany and Japan for about 50.
  • In fact, in none of the countries where lifelong fertility rates have fallen well below two have they ever returned above it. Depopulation could continue, generation after generation, as long as people look around and decide that small families work best for them, some having no children, some having three or four and many having one or two.
  • Nor can humanity count on any one region or subgroup to buoy us all over the long run. Birthrates are falling in sub-Saharan Africa, the region with the current highest average rates, as education and economic opportunities continue to improve
  • The main reason that birthrates are low is simple: People today want smaller families than people did in the past. That’s true in different cultures and economies around the world. It’s what both women and men report in surveys.
  • Humanity is building a better, freer world with more opportunities for everyone, especially for women
  • That progress also means that, for many of us, the desire to build a family can clash with other important goals, including having a career, pursuing projects and maintaining relationships
  • In a world of sustained low birthrates and declining populations, there may be threats of backsliding on reproductive freedom — by limiting abortion rights, for example
  • Nobody yet knows what to do about global depopulation. But it wasn’t long ago that nobody knew what to do about climate change. These shared challenges have much in common.
  • As with climate change, our individual decisions on family size add up to an outcome that we all share.
  • Six decades from now is when the U.N. projects the size of the world population will peak. There won’t be any quick fixes: Even if it’s too early today to know exactly how to build an abundant future that offers good lives to a stable, large and flourishing future population, we should already be working toward that goal.
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic - 0 views

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
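At toy scale, the “geometric model of language” can be illustrated with nothing more than co-occurrence counts: words used in similar contexts end up with vectors pointing in similar directions. This is only a hedged sketch of the general idea — GPT-style models learn dense vectors by gradient descent on next-word prediction, not by raw counting, and the corpus and function names here are invented for illustration:

```python
from collections import Counter, defaultdict
from math import sqrt

def cooccurrence_vectors(sentences, window=2):
    """Map each word to a vector of counts of the words seen near it."""
    vecs = defaultdict(Counter)
    for s in sentences:
        words = s.lower().split()
        for i, w in enumerate(words):
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if i != j:
                    vecs[w][words[j]] += 1
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat chased a mouse",
    "a dog chased a ball",
]
v = cooccurrence_vectors(corpus)
# "cat" and "dog" occur in near-identical contexts in this corpus, so their
# vectors are much closer to each other than either is to a function word.
```

The geometry is crude, but the qualitative behavior — words with similar roles clustering together — is the same phenomenon the annotation describes, emerging here from counting rather than from learned weights.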
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
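The finish-a-sentence behavior follows directly from next-word prediction. A minimal stand-in — a bigram counter rather than a neural network, with greedy rather than sampled continuation, and all names invented for illustration — behaves the same way at toy scale:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count next-word frequencies: a tiny stand-in for the
    next-word prediction objective described above."""
    model = defaultdict(Counter)
    for line in corpus:
        words = line.lower().split()
        for a, b in zip(words, words[1:]):
            model[a][b] += 1
    return model

def complete(model, prompt, length=3):
    """Greedily extend a prompt with the most frequent next word."""
    words = prompt.lower().split()
    for _ in range(length):
        nxt = model.get(words[-1])
        if not nxt:
            break  # no continuation ever observed for this word
        words.append(nxt.most_common(1)[0][0])
    return " ".join(words)

corpus = ["the cat sat on the mat", "the cat sat on the rug"]
m = train_bigram(corpus)
print(complete(m, "the cat"))  # → "the cat sat on the"
```

Question-answering falls out the same way: if the training text routinely pairs questions with answers, then the highest-probability continuation of a question is an answer — no separate Q&A mechanism is needed.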
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of a malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
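The A/B testing mentioned here is, mechanically, very simple: split users between two variants, log an engagement outcome per user, and keep the variant with the higher rate. A minimal sketch (illustrative only — not Luka's actual pipeline, and the engagement numbers below are hypothetical):

```python
from statistics import mean

def ab_winner(engagement_a, engagement_b):
    """Each argument is a list of 0/1 engagement outcomes, one per user.
    Returns the winning variant label and both observed rates."""
    rate_a, rate_b = mean(engagement_a), mean(engagement_b)
    winner = "A" if rate_a >= rate_b else "B"
    return winner, rate_a, rate_b

# Hypothetical logged outcomes: variant B engaged 40% of its users,
# variant A only 30%.
a = [1] * 30 + [0] * 70
b = [1] * 40 + [0] * 60
print(ab_winner(a, b))  # ('B', 0.3, 0.4)
```

Run repeatedly over many small response tweaks, this loop optimizes for whatever metric is chosen — which is the concern the highlight raises about engagement-maximizing feeds.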
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle.”
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • . “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run.”
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

What's Left for Tech? - Freddie deBoer - 0 views

  • I gave a talk to a class at Northeastern University earlier this month, concerning technology, journalism, and the cultural professions. The students were bright and inquisitive, though they also reflected the current dynamic in higher ed overall - three quarters of the students who showed up were women, and the men who were there almost all sat moodily in the back and didn’t engage at all while their female peers took notes and asked questions. I know there’s a lot of criticism of the “crisis for boys” narrative, but it’s often hard not to believe in it.
  • we’re actually living in a period of serious technological stagnation - that despite our vague assumption that we’re entitled to constant remarkable scientific progress, humanity has been living with real and valuable but decidedly small-scale technological growth for the past 50 or 60 or 70 years, after a hundred or so years of incredible growth from 1860ish to 1960ish, give or take a decade or two on either side
  • I will recommend Robert J. Gordon’s The Rise & Fall of American Growth for an exhaustive academic (and primarily economic) argument to this effect. Gordon persuasively demonstrates that from the mid-19th to mid-20th century, humanity leveraged several unique advancements that had remarkably outsized consequences for how we live and changed our basic existence in a way that never happened before and hasn’t since. Principal among these advances were the process of refining fossil fuels and using them to power all manner of devices and vehicles, the ability to harness electricity and use it to safely provide energy to homes (which practically speaking required the first development), and a revolution in medicine that came from the confluence of long-overdue acceptance of germ theory and basic hygienic principles, the discovery and refinement of antibiotics, and the modernization of vaccines.
  • ...24 more annotations...
  • The complication that Gordon and other internet-skeptical researchers like Ha-Joon Chang have introduced is to question just how meaningful those digital technologies have been for a) economic growth and b) the daily experience of human life. It can be hard for people who stare at their phones all day to consider the possibility that digital technology just isn’t that important. But ask yourself: if you were forced to live either without your iPhone or without indoor plumbing, could you really choose the latter?
  • Certainly the improvements in medical care in the past half-century feel very important to me as someone living now, and one saved life has immensely emotional and practical importance for many people. What’s more, advances in communication sciences and computer technology genuinely have been revolutionary; going from the Apple II to the iPhone in 30 years is remarkable.
  • we can always debate what constitutes major or revolutionary change
  • The question is, who in 2023 ever says to themselves “smartphone cameras just aren’t good enough”?
  • continued improvements in worldwide mortality in the past 75 years have been a matter of spreading existing treatments and practices to the developing world, rather than the result of new science.
  • When you got your first smartphone, and you thought about what the future would hold, were your first thoughts about more durable casing? I doubt it. I know mine weren’t.
  • Why is Apple going so hard on TITANIUM? Well, where else does smartphone development have to go?
  • The elephant in the room, obviously, is AI.
  • The processors will get faster. They’ll add more RAM. They’ll generally have more power. But for what? To run what? To do what? To run the games that we were once told would replace our PlayStation and Xbox games, but didn’t?
  • Smartphone development has been a good object lesson in the reality that cool ideas aren’t always practical or worthwhile
  • And as impressive as some new development in medicine has been, there’s no question that in simple terms of reducing preventable deaths, the advances seen from 1900 to 1950 dwarf those seen since.
  • We developed this technology for typewriters and terminals and desktops, it Just Works, and there’s no reason to try and “disrupt” it
  • Instead of one device to rule them all, we developed a norm of syncing across devices and cloud storage, which works well. (I always thought it was pretty funny, and very cynical, how Apple went from calling the iPhone an everything device to later marketing the iPad and iWatch.) In other words, we developed a software solution rather than a hardware one
  • I will always give it up to Google Maps and portable GPS technology; that’s genuinely life-altering, probably the best argument for smartphones as a transformative technology. But let me ask you, honestly: do you still go out looking for apps, with the assumption that you’re going to find something that really changes your life in a significant way?
  • some people are big VR partisans. I’m deeply skeptical. The brutal failure of Meta’s new “metaverse” is just one new example of a decades-long resistance to the technology among consumers
  • maybe I just don’t want VR to become popular, given the potential ugly social consequences. If you thought we had an incel problem now….
  • There was, in those breathless early days, a lot of talk about how people simply wouldn’t own laptops anymore, how your phone would do everything. But it turns out that, for one thing, the keyboard remains an input device of unparalleled convenience and versatility.
  • It’s not artificial intelligence. It thinks nothing like a human thinks. There is no reason whatsoever to believe that it has evolved sentience or consciousness. There is nothing at present that these systems can do that human beings simply can’t. But they can potentially do some things in the world of bits faster and cheaper than human beings, and that might have some meaningful consequences. But there is no reasonable, responsible claim to be made that these systems are imminent threats to conventional human life as currently lived, whether for good or for bad. IMO.
  • Let’s mutually agree to consider immediate plausible human technological progress outside of AI or “AI.” What’s coming? What’s plausible?
  • The most consequential will be our efforts to address climate change, and we have the potential to radically change how we generate electricity, although electrifying heating and transportation are going to be harder than many seem to think, while solar and wind power have greater ecological costs than people want to admit. But, yes, that’s potentially very very meaningful
  • It’s another example of how technological growth will still leave us with continuity rather than with meaningful change.
  • I kept thinking was, privatizing space… to do what? A manned Mars mission might happen in my lifetime, which is cool. But a Mars colony is a distant dream
  • This is why I say we live in the Big Normal, the Big Boring, the Forever Now. We are tragic people: we were born just too late to experience the greatest flowering of human development the world has ever seen. We do, however, enjoy the rather hefty consolation prize that we get to live with the affordances of that period, such as not dying of smallpox.
  • I think we all need to learn to appreciate what we have now, in the world as it exists, at the time in which we actually live. Frankly, I don’t think we have any other choice.
Javier E

We're That Much Likelier to Get Sick Now - The Atlantic - 0 views

  • Although neither RSV nor flu is shaping up to be particularly mild this year, says Caitlin Rivers, an epidemiologist at the Johns Hopkins Center for Health Security, both appear to be behaving more within their normal bounds.
  • But infections are still nowhere near back to their pre-pandemic norm. They never will be again. Adding another disease—COVID—to winter’s repertoire has meant exactly that: adding another disease, and a pretty horrific one at that, to winter’s repertoire.
  • “The probability that someone gets sick over the course of the winter is now increased,” Rivers told me, “because there is yet another germ to encounter.” The math is simple, even mind-numbingly obvious—a pathogenic n+1 that epidemiologists have seen coming since the pandemic’s earliest days. Now we’re living that reality, and its consequences.
  • ...18 more annotations...
  • ‘Odds are, people are going to get sick this year,’”
  • In typical years, flu hospitalizes an estimated 140,000 to 710,000 people in the United States alone; some years, RSV can add on some 200,000 more. “Our baseline has never been great,” Yvonne Maldonado, a pediatrician at Stanford, told me. “Tens of thousands of people die every year.”
  • this time of year, on top of RSV, flu, and COVID, we also have to contend with a maelstrom of other airway viruses—among them, rhinoviruses, parainfluenza viruses, human metapneumovirus, and common-cold coronaviruses.
  • Illnesses not severe enough to land someone in the hospital could still leave them stuck at home for days or weeks on end, recovering or caring for sick kids—or shuffling back to work
  • “This is a more serious pathogen that is also more infectious,” Ajay Sethi, an epidemiologist at the University of Wisconsin at Madison, told me. In the past year, COVID-19 has killed some 80,000 Americans—a lighter toll than in the three years prior, but one that still dwarfs that of the worst flu seasons in the past decade.
  • Globally, the only infectious killer that rivals it in annual-death count is tuberculosis
  • Rivers also pointed to CDC data that track trends in deaths caused by pneumonia, flu, and COVID-19. Even when SARS-CoV-2 has been at its most muted, Rivers said, more people have been dying—especially during the cooler months—than they were at the pre-pandemic baseline.
  • This year, for the first time, millions of Americans have access to three lifesaving respiratory-virus vaccines, against flu, COVID, and RSV. Uptake for all three remains sleepy and halting; even the flu shot, the most established, is not performing above its pre-pandemic baseline.
  • COVID could now surge in the summer, shading into RSV’s autumn rise, before adding to flu’s winter burden, potentially dragging the misery out into spring. “Based on what I know right now, I am considering the season to be longer,” Rivers said.
  • barring further gargantuan leaps in viral evolution, the disease will continue to slowly mellow out in severity as our collective defenses build; the virus may also pose less of a transmission risk as the period during which people are infectious contracts
  • even if the dangers of COVID-19 are lilting toward an asymptote, experts still can’t say for sure where that asymptote might be relative to other diseases such as the flu—or how long it might take for the population to get there.
  • it seems extraordinarily unlikely to ever disappear. For the foreseeable future, “pretty much all years going forward are going to be worse than what we’ve been used to before,”
  • although a core contingent of Americans might still be more cautious than they were before the pandemic’s start—masking in public, testing before gathering, minding indoor air quality, avoiding others whenever they’re feeling sick—much of the country has readily returned to the pre-COVID mindset.
  • When I asked Hanage what precautions worthy of a respiratory disease with a death count roughly twice that of flu’s would look like, he rattled off a familiar list: better access to and uptake of vaccines and antivirals, with the vulnerable prioritized; improved surveillance systems to offer people at high risk a better sense of local-transmission trends; improved access to tests and paid sick leave
  • Without those changes, excess disease and death will continue, and “we’re saying we’re going to absorb that into our daily lives,” he said.
  • And that is what is happening.
  • last year, a CDC survey found that more than 3 percent of American adults were suffering from long COVID—millions of people in the United States alone.
  • “We get used to things we could probably fix.” The years since COVID arrived set a horrific precedent of death and disease; after that, this season of n+1 sickness might feel like a reprieve. But compare it with a pre-COVID world, and it looks objectively worse. We’re heading toward a new baseline, but it will still have quite a bit in common with the old one: We’re likely to accept it, and all of its horrors, as a matter of course.
jaxredd10

rome - 0 views

  • Beginning in the eighth century B.C., Ancient Rome grew from a small town on central Italy’s Tiber River into an empire that at its peak encompassed most of continental Europe, Britain, much of western Asia, northern Africa and the Mediterranean islands
  • After 450 years as a republic, Rome became an empire in the wake of Julius Caesar’s rise and fall in the first century B.C.
  • ...38 more annotations...
  • The long and triumphant reign of its first emperor, Augustus, began a golden age of peace and prosperity;
  • As legend has it, Rome was founded in 753 B.C. by Romulus and Remus, twin sons
  • Romulus became the first king of Rome,
  • Rome’s era as a monarchy ended in 509 B.C.
  • The power of the monarch passed to two annually elected magistrates called consuls. They also served as commanders in chief of the army.
  • Politics in the early republic was marked by the long struggle between patricians and plebeians (the common people), who eventually attained some political power through years of concessions from patricians
  • In 450 B.C., the first Roman law code was inscribed on 12 bronze tablets–known as the Twelve Tables–and publicly displayed in the Roman Forum.
  • By around 300 B.C., real political power in Rome was centered in the Senate, which at the time included only members of patrician and wealthy plebeian families.
  • During the early republic, the Roman state grew exponentially in both size and power
  • Rome then fought a series of wars known as the Punic Wars with Carthage, a powerful city-state in northern Africa. The first two Punic Wars ended with Rome in full control of Sicily, the western Mediterranean and much of Spain. In the Third Punic War (149–146 B.C.), the Romans captured and destroyed the city of Carthage and sold its surviving inhabitants into slavery, making a section of northern Africa a Roman province.
  • Rome’s military conquests led directly to its cultural growth as a society, as the Romans benefited greatly from contact with such advanced cultures as the Greeks.
  • The first Roman literature appeared around 240 B.C., with translations of Greek classics into Latin; Romans would eventually adopt much of Greek art, philosophy and religion.
  • Rome’s complex political institutions began to crumble under the weight of the growing empire, ushering in an era of internal turmoil and violence.
  • The gap between rich and poor widened as wealthy landowners drove small farmers from public land,
  • When the victorious Pompey returned to Rome, he formed an uneasy alliance known as the First Triumvirate
  • After earning military glory in Spain, Caesar returned to Rome to vie for the consulship in 59 B.C.
  • Caesar received the governorship of three wealthy provinces in Gaul beginning in 58 B.C.
  • In 49 B.C., Caesar and one of his legions crossed the Rubicon, a river on the border between Italy and Cisalpine Gaul
  • Consul Mark Antony and Caesar’s great-nephew and adopted heir, Octavian, joined forces to crush Brutus and Cassius and divided power in Rome with ex-consul Lepidus in what was known as the Second Triumvirate. With Octavian leading the western provinces, Antony the east, and Lepidus Africa, tensions developed by 36 B.C. and the triumvirate soon dissolved. In 31 B.C., Octavian triumphed over the forces of Antony and Queen Cleopatra of Egypt (also rumored to be the onetime lover of Julius Caesar) in the Battle of Actium
  • To avoid meeting Caesar’s fate, he made sure to make his position as absolute ruler acceptable to the public by apparently restoring the political institutions of the Roman republic while in reality retaining all real power for himself. In 27 B.C., Octavian assumed the title of Augustus, becoming the first emperor of Rome.
  • By 29 B.C., Octavian was the sole leader of Rome and all its provinces.
  • Augustus’ rule restored morale in Rome after a century of discord and corruption and ushered in the famous pax Romana–two full centuries of peace and prosperity.
  • He instituted various social reforms, won numerous military victories and allowed Roman literature, art, architecture and religion to flourish.
  • When he died, the Senate elevated Augustus to the status of a god, beginning a long-running tradition of deification for popular emperors.
  • The decadence and incompetence of Commodus (180-192) brought the golden age of the Roman emperors to a disappointing end. His death at the hands of his own ministers sparked another period of civil war, from which Lucius Septimius Severus (193-211) emerged victorious.
  • Meanwhile, threats from outside plagued the empire and depleted its riches, including continuing aggression from Germans and Parthians and raids by the Goths over the Aegean Sea.
  • Diocletian divided power into the so-called tetrarchy (rule of four), sharing his title of Augustus (emperor) with Maximian. A pair of generals, Galerius and Constantius, were appointed as the assistants and chosen successors of Diocletian and Maximian; Diocletian and Galerius ruled the eastern Roman Empire, while Maximian and Constantius took power in the west.
  • The stability of this system suffered greatly after Diocletian and Maximian retired from office. Constantine (the son of Constantius) emerged from the ensuing power struggles as sole emperor of a reunified Rome in 324. He moved the Roman capital to the Greek city of Byzantium, which he renamed Constantinople. At the Council of Nicaea in 325, Constantine made Christianity (once an obscure Jewish sect) Rome’s official religion.
  • An entirely different story played out in the west, where the empire was wracked by internal conflict as well as threats from abroad–particularly from the Germanic tribes now established within the empire’s frontiers like the Vandals (their sack of Rome originated the phrase “vandalism”)–and was steadily losing money due to constant warfare.
  • Rome eventually collapsed under the weight of its own bloated empire, losing its provinces one by one:
  • In September 476, a Germanic prince named Odovacar won control of the Roman army in Italy.
  • After deposing the last western emperor, Romulus Augustus, Odovacar’s troops proclaimed him king of Italy, bringing an ignoble end to the long, tumultuous history of ancient Rome. The fall of the Roman Empire was complete.
  • Roman aqueducts, first developed in 312 B.C., enabled the rise of cities by transporting water to urban areas, improving public health and sanitation.
  • Roman cement and concrete are part of the reason ancient buildings like the Colosseum and Roman Forum are still standing strong today.
  • Roman arches, or segmented arches, improved upon earlier arches to build strong bridges and buildings, evenly distributing weight throughout the structure.
  • Roman roads, the most advanced roads in the ancient world, enabled the Roman Empire to stay connected
Javier E

World 'population bomb' may never go off as feared, finds study | Population | The Guardian - 0 views

  • The long-feared “population bomb” may not go off, according to the authors of a new report that estimates that human numbers will peak lower and sooner than previously forecast.
  • on current trends the world population will reach a high of 8.8 billion before the middle of the century, then decline rapidly. The peak could come earlier still if governments take progressive steps to raise average incomes and education levels.
  • The new forecasts are good news for the global environment. Once the demographic bulge is overcome, pressure on nature and the climate should start to ease, along with associated social and political tensions.
  • ...9 more annotations...
  • The new projection, released on Monday, was carried out by the Earth4All collective of leading environmental science and economic institutions, including the Potsdam Institute for Climate Impact Research, Stockholm Resilience Centre and the BI Norwegian Business School. They were commissioned by the Club of Rome for a followup to its seminal Limits to Growth study more than 50 years ago.
  • “This gives us evidence to believe the population bomb won’t go off, but we still face significant challenges from an environmental perspective. We need a lot of effort to address the current development paradigm of overconsumption and overproduction, which are bigger problems than population.”
  • Previous studies have painted a grimmer picture. Last year, the UN estimated the world population would hit 9.7 billion by the middle of the century and continue to rise for several decades afterwards.
  • But the authors caution that falling birthrates alone will not solve the planet’s environmental problems, which are already serious at the 7.8 billion level and are primarily caused by the excess consumption of a wealthy minority.
  • The report is based on a new methodology which incorporates social and economic factors that have a proven impact on birthrate, such as raising education levels, particularly for women, and improving income.
  • In the business-as-usual case, it foresees existing policies being enough to limit global population growth to below 9 billion in 2046 and then decline to 7.3 billion in 2100.
  • too little too late: “Although the scenario does not result in an overt ecological or total climate collapse, the likelihood of regional societal collapses nevertheless rises throughout the decades to 2050, as a result of deepening social divisions both internal to and between societies. The risk is particularly acute in the most vulnerable, badly governed and ecologically vulnerable economies.”
  • In the second, more optimistic scenario – with governments across the world raising taxes on the wealthy to invest in education, social services and improved equality – it estimates human numbers could hit a high of 8.5 billion as early as 2040 and then fall by more than a third to about 6 billion in 2100. Under this pathway, they foresee considerable gains by mid-century for human society and the natural environment.
  • “By 2050, greenhouse gas emissions are about 90% lower than they were in 2020 and are still falling,” according to the report. “Remaining atmospheric emissions of greenhouse gases from industrial processes are increasingly removed through carbon capture and storage. As the century progresses, more carbon is captured than stored, keeping the global temperature below 2C above pre-industrial levels. Wildlife is gradually recovering and starting to thrive once again in many places.”
Javier E

Opinion | There's a Reason There Aren't Enough Teachers in America. Many Reasons, Actually. - The New York Times - 0 views

  • Here are just a few of the longstanding problems plaguing American education: a generalized decline in literacy; the faltering international performance of American students; an inability to recruit enough qualified college graduates into the teaching profession; a lack of trained and able substitutes to fill teacher shortages; unequal access to educational resources; inadequate funding for schools; stagnant compensation for teachers; heavier workloads; declining prestige; and deteriorating faculty morale.
  • Nine-year-old students earlier this year revealed “the largest average score decline in reading since 1990, and the first ever score decline in mathematics,”
  • In the latest comparison of fourth grade reading ability, the United States ranked below 15 countries, including Russia, Ireland, Poland and Bulgaria.
  • ...49 more annotations...
  • Teachers are not only burnt out and undercompensated, they are also demoralized. They are being asked to do things in the name of teaching that they believe are mis-educational and harmful to students and the profession. What made this work good for them is no longer accessible. That is why we are hearing so many refrains of “I’m not leaving the profession, my profession left me.”
  • We find there are at least 36,000 vacant positions along with at least 163,000 positions being held by underqualified teachers, both of which are conservative estimates of the extent of teacher shortages nationally.
  • “The current problem of teacher shortages (I would further break this down into vacancy and under-qualification) is higher than normal.” The data, Nguyen continued, “indicate that shortages are worsening over time, particularly over the last few years
  • a growing gap between the pay of all college graduates and teacher salaries from 1979 to 2021, with a sharp increase in the differential since 2010
  • The number of qualified teachers is declining for the whole country and the vast majority of states.
  • Wages are essentially unchanged from 2000 to 2020 after adjusting for inflation. Teachers have about the same number of students. But, teacher accountability reforms have increased the demands on their positions.
  • The pandemic was very difficult for teachers. Their self-reported level of stress was about as twice as high during the pandemic compared to other working adults. Teachers had to worry both about their personal safety and deal with teaching/caring for students who are grieving lost family members.
  • the number of students graduating from college with bachelor’s degrees in education fell from 176,307 in 1970-71 to 104,008 in 2010-11 to 85,058 in 2019-20.
  • We do see that southern states (e.g., Mississippi, Alabama, Georgia and Florida) have very high vacancies and high vacancy rates.”
  • By 2021, teachers made $1,348, 32.9 percent less than what other graduates made, at $2,009.
  • These gaps play a significant role in determining the quality of teachers,
  • Sixty percent of teachers and 65 percent of principals reported believing that systemic racism exists. Only about 20 percent of teachers and principals reported that they believe systemic racism does not exist, and the remainder were not sure
  • “We find,” they write, “that teachers’ cognitive skills differ widely among nations — and that these differences matter greatly for students’ success in school. An increase of one standard deviation in teacher cognitive skills is associated with an increase of 10 to 15 percent of a standard deviation in student performance.”
  • teachers have lower cognitive skills, on average, in countries with greater nonteaching job opportunities for women in high-skill occupations and where teaching pays relatively less than other professions.
  • the scholars found that the cognitive skills of teachers in the United States fell in the middle ranks:Teachers in the United States perform worse than the average teacher sample-wide in numeracy, with a median score of 284 points out of a possible 500, compared to the sample-wide average of 292 points. In literacy, they perform slightly better than average, with a median score of 301 points compared to the sample-wide average of 295 points.
  • Increasing teacher numeracy skills by one standard deviation increases student performance by nearly 15 percent of a standard deviation on the PISA math test. Our estimate of the effect of increasing teacher literacy skills on students’ reading performance is slightly smaller, at 10 percent of a standard deviation.
  • How, then, to raise teacher skill level in the United States? Hanushek and his two colleagues have a simple answer: raise teacher pay to make it as attractive to college graduates as high-skill jobs in other fields.
  • policymakers will need to do more than raise teacher pay across the board to ensure positive results. They must ensure that higher salaries go to more effective teachers.
  • The teaching of disputed subjects in schools has compounded many of the difficulties in American education.
  • The researchers found that controversies over critical race theory, sex education and transgender issues — aggravated by divisive debates over responses to Covid and its aftermath — are inflicting a heavy toll on teachers and principals.
  • “On top of the herculean task of carrying out the essential functions of their jobs,” they write, “educators increasingly find themselves in the position of addressing contentious, politicized issues in their schools as the United States has experienced increasing political polarization.”
  • Teachers and principals, they add, “have been pulled in multiple directions as they try to balance and reconcile not only their own beliefs on such matters but also the beliefs of others around them, including their leaders, fellow staff, students, and students’ family members.”
  • These conflicting pressures take place in a climate where “emotions in response to these issues have run high within communities, resulting in the harassment of educators, bans against literature depicting diverse characters, and calls for increased parental involvement in deciding academic content.”
  • Forty-eight percent of principals and 40 percent of teachers reported that the intrusion of political issues and opinions in school leadership or teaching, respectively, was a job-related stressor. By comparison, only 16 percent of working adults indicated that the intrusion of political issues and opinions in their jobs was a source of job-related stress
  • In 1979, the average teacher weekly salary (in 2021 dollars) was $1,052, 22.9 percent less than other college graduates’, at $1,364
  • Nearly all Black or African American principals (92 percent) and teachers (87 percent) reported believing that systemic racism exists.
  • White educators working in predominantly white school systems reported substantially more pressure to deal with politically divisive issues than educators of color and those working in mostly minority schools: “Forty-one percent of white teachers and 52 percent of white principals selected the intrusion of political issues and opinions into their professions as a job-related stressor, compared with 36 percent of teachers of color and principals of color.
  • A 54 percent majority of teachers and principals said there “should not be legal limits on classroom conversations about racism, sexism, and other topics,” while 20 percent said there should be legislated constraints
  • Voters, in turn, are highly polarized on the teaching of issues impinging on race or ethnicity in public schools. The Education Next 2022 Survey asked, for example:Some people think their local public schools place too little emphasis on slavery, racism and other challenges faced by Black people in the United States. Other people think their local public schools place too much emphasis on these topics. What is your view about your local public schools?
  • Among Democrats, 55 percent said too little emphasis was placed on slavery, racism and other challenges faced by Black people, and 8 percent said too much.
  • Among Republicans, 51 said too much and 10 percent said too little.
  • Because of the lack of reliable national data, there is widespread disagreement among scholars of education over the scope and severity of the shortage of credentialed teachers, although there is more agreement that these problems are worse in low-income, high majority-minority school systems and in STEM and special education faculties.
  • Public schools increasingly are targets of conservative political groups focusing on what they term “Critical Race Theory,” as well as issues of sexuality and gender identity. These political conflicts have created a broad chilling effect that has limited opportunities for students to practice respectful dialogue on controversial topics and made it harder to address rampant misinformation.
  • The chilling effect also has led to marked declines in general support for teaching about race, racism, and racial and ethnic diversity.
  • These political conflicts, the authors wrote, have made the already hard work of public education more difficult, undermining school management, negatively impacting staff, and heightening student stress and anxiety. Several principals shared that they were reconsidering their own roles in public education in light of the rage at teachers and rage at administrators playing out in their communities.
  • State University of New York tracked trends on “four interrelated constructs: professional prestige, interest among students, preparation for entry, and job satisfaction” for 50 years, from the 1970s to the present and found a consistent and dynamic pattern across every measure: a rapid decline in the 1970s, a swift rise in the 1980s, relative stability for two decades, and a sustained drop beginning around 2010. The current state of the teaching profession is at or near its lowest levels in 50 years.
  • Who among the next generation of college graduates will choose to teach?
  • Perceptions of teacher prestige have fallen between 20 percent and 47 percent in the last decade to be at or near the lowest levels recorded over the last half century
  • Interest in the teaching profession among high school seniors and college freshmen has fallen 50 percent since the 1990s, and 38 percent since 2010, reaching the lowest level in the last 50 years
  • the proportion of college graduates that go into teaching is at a 50-year low
  • Teachers’ job satisfaction is also at the lowest level in five decades, with the percent of teachers who feel the stress of their job is worth it dropping from 81 percent to 42 percent in the last 15 years
  • The combination of these factors — declining prestige, lower pay than other professions that require a college education, increased workloads, and political and ideological pressures — is creating both intended and unintended consequences for teacher accountability reforms mandating tougher licensing rules, evaluations and skill testing.
  • Education policy over the past decade has focused considerable effort on improving human capital in schools through teacher accountability. These reforms, and the research upon which they drew, were based on strong assumptions about how accountability would affect who decided to become a teacher. Counter to most assumptions, our findings document how teacher accountability reduced the supply of new teacher candidates by, in part, decreasing perceived job security, satisfaction and autonomy.
  • The reforms, Kraft and colleagues continued, increasedthe likelihood that schools could not fill vacant teaching positions. Even more concerning, effects on unfilled vacancies were concentrated in hard-to-staff schools that often serve larger populations of low-income students and students of color
  • We find that evaluation reforms increased the quality of newly hired novice teachers by reducing the number of teachers that graduated from the least selective institutions
  • We find no evidence that evaluation reforms served to attract teachers who attended the most selective undergraduate institutions.
  • In other words, the economic incentives, salary structure and work-life pressures characteristic of public education employment have created a climate in which contemporary education reforms have perverse and unintended consequences that can worsen rather than alleviate the problems facing school systems.
  • If so, to improve the overall quality of the nation’s more than three million public schoolteachers, reformers may want to give priority to paychecks, working conditions, teacher autonomy and punishing workloads before attempting to impose higher standards, tougher evaluations and less job security.
Javier E

Opinion | The 100-Year Extinction Panic Is Back, Right on Schedule - The New York Times - 0 views

  • The literary scholar Paul Saint-Amour has described the expectation of apocalypse — the sense that all history’s catastrophes and geopolitical traumas are leading us to “the prospect of an even more devastating futurity” — as the quintessential modern attitude. It’s visible everywhere in what has come to be known as the polycrisis.
  • Climate anxiety, of the sort expressed by that student, is driving new fields in psychology, experimental therapies and debates about what a recent New Yorker article called “the morality of having kids in a burning, drowning world.”
  • The conviction that the human species could be on its way out, extinguished by our own selfishness and violence, may well be the last bipartisan impulse.
  • ...28 more annotations...
  • a major extinction panic happened 100 years ago, and the similarities are unnerving.
  • The 1920s were also a period when the public — traumatized by a recent pandemic, a devastating world war and startling technological developments — was gripped by the conviction that humanity might soon shuffle off this mortal coil.
  • It also helps us see how apocalyptic fears feed off the idea that people are inherently violent, self-interested and hierarchical and that survival is a zero-sum war over resources.
  • Either way, it’s a cynical view that encourages us to take our demise as a foregone conclusion.
  • What makes an extinction panic a panic is the conviction that humanity is flawed and beyond redemption, destined to die at its own hand, the tragic hero of a terrestrial pageant for whom only one final act is possible
  • What the history of prior extinction panics has to teach us is that this pessimism is both politically questionable and questionably productive. Our survival will depend on our ability to recognize and reject the nihilistic appraisals of humanity that inflect our fears for the future, both left and right.
  • As a scholar who researches the history of Western fears about human extinction, I’m often asked how I avoid sinking into despair. My answer is always that learning about the history of extinction panics is actually liberating, even a cause for optimism
  • Nearly every generation has thought its generation was to be the last, and yet the human species has persisted
  • As a character in Jeanette Winterson’s novel “The Stone Gods” says, “History is not a suicide note — it is a record of our survival.”
  • Contrary to the folk wisdom that insists the years immediately after World War I were a period of good times and exuberance, dark clouds often hung over the 1920s. The dread of impending disaster — from another world war, the supposed corruption of racial purity and the prospect of automated labor — saturated the period
  • The previous year saw the publication of the first of several installments of what many would come to consider his finest literary achievement, “The World Crisis,” a grim retrospective of World War I that laid out, as Churchill put it, the “milestones to Armageddon.
  • Bluntly titled “Shall We All Commit Suicide?,” the essay offered a dismal appraisal of humanity’s prospects. “Certain somber facts emerge solid, inexorable, like the shapes of mountains from drifting mist,” Churchill wrote. “Mankind has never been in this position before. Without having improved appreciably in virtue or enjoying wiser guidance, it has got into its hands for the first time the tools by which it can unfailingly accomplish its own extermination.”
  • The essay — with its declaration that “the story of the human race is war” and its dismay at “the march of science unfolding ever more appalling possibilities” — is filled with right-wing pathos and holds out little hope that mankind might possess the wisdom to outrun the reaper. This fatalistic assessment was shared by many, including those well to Churchill’s left.
  • “Are not we and they and all the race still just as much adrift in the current of circumstances as we were before 1914?” he wondered. Wells predicted that our inability to learn from the mistakes of the Great War would “carry our race on surely and inexorably to fresh wars, to shortages, hunger, miseries and social debacles, at last either to complete extinction or to a degradation beyond our present understanding.” Humanity, the don of sci-fi correctly surmised, was rushing headlong into a “scientific war” that would “make the biggest bombs of 1918 seem like little crackers.”
  • The pathbreaking biologist J.B.S. Haldane, another socialist, concurred with Wells’s view of warfare’s ultimate destination. In 1925, two decades before the Trinity test birthed an atomic sun over the New Mexico desert, Haldane, who experienced bombing firsthand during World War I, mused, “If we could utilize the forces which we now know to exist inside the atom, we should have such capacities for destruction that I do not know of any agency other than divine intervention which would save humanity from complete and peremptory annihilation.”
  • F.C.S. Schiller, a British philosopher and eugenicist, summarized the general intellectual atmosphere of the 1920s aptly: “Our best prophets are growing very anxious about our future. They are afraid we are getting to know too much and are likely to use our knowledge to commit suicide.”
  • Many of the same fears that keep A.I. engineers up at night — calibrating thinking machines to human values, concern that our growing reliance on technology might sap human ingenuity and even trepidation about a robot takeover — made their debut in the early 20th century.
  • The popular detective novelist R. Austin Freeman’s 1921 political treatise, “Social Decay and Regeneration,” warned that our reliance on new technologies was driving our species toward degradation and even annihilation
  • Extinction panics are, in both the literal and the vernacular senses, reactionary, animated by the elite’s anxiety about maintaining its privilege in the midst of societal change
  • There is a perverse comfort to dystopian thinking. The conviction that catastrophe is baked in relieves us of the moral obligation to act. But as the extinction panic of the 1920s shows us, action is possible, and these panics can recede
  • To whatever extent, then, that the diagnosis proved prophetic, it’s worth asking if it might have been at least partly self-fulfilling.
  • today’s problems are fundamentally new. So, too, must be our solutions
  • It is a tired observation that those who don’t know history are destined to repeat it. We live in a peculiar moment in which this wisdom is precisely inverted. Making it to the next century may well depend on learning from and repeating the tightrope walk — between technological progress and self-annihilation — that we have been doing for the past 100 years
  • We have gotten into the dangerous habit of outsourcing big issues — space exploration, clean energy, A.I. and the like — to private businesses and billionaires
  • That ideologically varied constellation of prominent figures shared a basic diagnosis of humanity and its prospects: that our species is fundamentally vicious and selfish and our destiny therefore bends inexorably toward self-destruction.
  • Less than a year after Churchill’s warning about the future of modern combat — “As for poison gas and chemical warfare,” he wrote, “only the first chapter has been written of a terrible book” — the 1925 Geneva Protocol was signed, an international agreement banning the use of chemical or biological weapons in combat. Despite the many horrors of World War II, chemical weapons were not deployed on European battlefields.
  • As for machine-age angst, there’s a lesson to learn there, too: Our panics are often puffed up, our predictions simply wrong
  • In 1928, H.G. Wells published a book titled “The Way the World Is Going,” with the modest subtitle “Guesses and Forecasts of the Years Ahead.” In the opening pages, he offered a summary of his age that could just as easily have been written about our turbulent 2020s. “Human life,” he wrote, “is different from what it has ever been before, and it is rapidly becoming more different.” He continued, “Perhaps never in the whole history of life before the present time, has there been a living species subjected to so fiercely urgent, many-sided and comprehensive a process of change as ours today. None at least that has survived. Transformation or extinction have been nature’s invariable alternatives. Ours is a species in an intense phase of transition.”
Javier E

It's not just vibes. Americans' perception of the economy has completely changed. - ABC News - 0 views

  • Applying the same pre-pandemic model to consumer sentiment during and after the pandemic, however, simply does not work. The indicators that correlated with people's feelings about the economy before 2020 no longer seem to matter in the same way
  • As with so many areas of American life, the pandemic has changed virtually everything about how people think about the economy and the issues that concern them
  • Prior to the pandemic, our model shows consumers felt better about the economy when the personal savings rate, a measure of how much money households are able to save rather than spend each month, was higher. This makes sense: People feel better when they have money in the bank and are able to save for important purchases like cars and houses.
  • ...20 more annotations...
  • Before the pandemic, a number of variables were statistically significant indicators for consumer sentiment in our model; in particular, the most salient variables appear to be vehicle sales, gas prices, median household income, the federal funds effective rate, personal savings and household expenditures (excluding food and energy).
  • During the pandemic, the personal savings rate soared. In April 2020, the metric was nearly double its previous high, recorded in May 1975.
  • All this taken together meant Americans were flush with cash but had nowhere to spend it. So despite the fact that the savings rate went way up, consumers still weren't feeling positively about the economy — contrary to the relationship between these two variables we saw in the decades before the pandemic.
  • Fast forward to 2024, and the personal savings rate has dropped to one of its lowest levels ever (the only time the savings rate was lower was in the years surrounding the Great Recession)
  • during and after the pandemic, Americans saw some of the highest rates of inflation the country has had in decades, and in a very short period of time. These sudden spikes naturally shocked many people who had been blissfully enjoying slow, steady price growth their entire adult lives. And it has taken a while for that shock to wear off, even as inflation has cre
  • the numbers align with our intuitive sense of how consumers process suddenly having their grocery store bill jump, as well as the findings from our model. In simple terms: Even if inflation is getting better, Americans aren't done being ticked off that it was bad to begin with.
  • surprisingly, our pre-pandemic model didn't find a notable relationship between housing prices and consumer sentiment
  • However, in our post-pandemic data, when we examined how correlated consumer sentiment was with each indicator we considered, consumer sentiment and median housing prices had the strongest correlation of all (a negative one, meaning higher prices were associated with lower consumer sentiment)
  • during the pandemic, low interest rates, high savings rates and changes in working patterns — namely, many workers' newfound ability to work from home — helped overheat the homebuying market, and buyers ran headlong into an enduring supply shortage. There simply weren't enough houses to buy, which drove up the costs of the ones that were for sale.
  • That's true even if a family has been able to save enough for a down payment, already a difficult task when rents remain high as well. Fewer people are able to cover their current housing costs while saving enough to make a down payment.
  • Low-income households are still the most likely to be burdened with high rents, but they're not the only ones affected anymore. High rents have also begun to affect those at middle-income levels as well.
  • In short, there was already a housing affordability crisis before the pandemic. Now it's worse, locking a wider array of people, at higher and higher income levels, out of the home-buying market
  • People who are renting but want to buy are stuck. People who live in starter homes and want to move to bigger homes are stuck. The conditions have frustrated a fundamental element of the American dream
  • In our pre-pandemic model, total vehicle sales had a strong positive relationship with consumer sentiment: If people were buying cars, you could pretty reasonably bet that they felt good about the economy. This feels intuitive — who buys a car if they think the economy
  • Cox Automotive also tracks vehicle affordability by calculating the estimated number of weeks' worth of median income needed to purchase the average new vehicle, and while that number has improved over the last two years, it remains high compared to pre-pandemic levels. In April, the most recent month with data, it took 37.7 weeks of median income to purchase a car, compared with fewer than 35 weeks at the end of 2019.
  • "Right before the pandemic, the typical average transaction price was around $38,000 for a new car. By 2023, it was $48,000," Schirmer said. This could all be contributing to the break in the relationship between car sales and sentiment, he noted. Basically, people might be buying cars, but they aren't necessarily happy about it.
  • Inspired by our model of economic indicators and sentiment from 1987 to 2019, we tried to train a similar linear regression model on the same data from 2021 to 2024 to more directly compare how things changed after the pandemic. While we were able to get a pretty good fit for this post-pandemic model, something interesting happened: Not a single variable showed up as a statistically significant predictor of consumer sentiment.
  • This suggests there's something much more complicated going on behind the scenes: Interactions between these variables are probably driving the prediction, and there's too much noise in this small post-pandemic data set for the model to disentangle it.
  • Changes in the kinds of purchases we've discussed — homes, cars and everyday items like groceries — have fundamentally shifted the way Americans view how affordable their lives are and how they measure their quality of life.
  • Even though some indicators may be improving, Americans are simply weighing the factors differently than they used to, and that gives folks more than enough reason to have the economic blues.
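The modeling approach the article describes — regressing consumer sentiment on a handful of economic indicators and checking how well the fit holds up — can be sketched roughly as below. The indicator names, coefficients, and data here are invented for illustration; they are not the researchers' actual dataset or results.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 120  # e.g. monthly observations over a decade
# Three hypothetical indicators (standardized): vehicle sales,
# gas prices, personal savings rate
indicators = rng.normal(size=(n, 3))

# A pre-pandemic-style relationship: sentiment tracks the indicators
# with some noise (coefficients chosen arbitrarily for the sketch)
sentiment = 80 + indicators @ np.array([3.0, -2.0, 1.5]) \
    + rng.normal(scale=0.5, size=n)

# Ordinary least squares: prepend an intercept column and solve
X = np.column_stack([np.ones(n), indicators])
coef, _residuals, _rank, _sv = np.linalg.lstsq(X, sentiment, rcond=None)

# R^2 as a rough measure of how much of sentiment the indicators explain
pred = X @ coef
r2 = 1 - np.sum((sentiment - pred) ** 2) \
    / np.sum((sentiment - sentiment.mean()) ** 2)
print(coef.round(1), round(r2, 2))
```

When the underlying relationship breaks down, as the article says happened after 2020, the same procedure yields coefficients that no longer clear statistical significance even if overall fit looks acceptable, which is consistent with noisy, interacting drivers rather than clean one-variable effects.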
Javier E

The AI Revolution Is Already Losing Steam - WSJ - 0 views

  • Most of the measurable and qualitative improvements in today’s large language model AIs like OpenAI’s ChatGPT and Google’s Gemini—including their talents for writing and analysis—come down to shoving ever more data into them. 
  • AI could become a commodity
  • To train next generation AIs, engineers are turning to “synthetic data,” which is data generated by other AIs. That approach didn’t work to create better self-driving technology for vehicles, and there is plenty of evidence it will be no better for large language models,
  • ...25 more annotations...
  • AIs like ChatGPT rapidly got better in their early days, but what we’ve seen in the past 14-and-a-half months are only incremental gains, says Marcus. “The truth is, the core capabilities of these systems have either reached a plateau, or at least have slowed down in their improvement,” he adds.
  • the gaps between the performance of various AI models are closing. All of the best proprietary AI models are converging on about the same scores on tests of their abilities, and even free, open-source models, like those from Meta and Mistral, are catching up.
  • models work by digesting huge volumes of text, and it’s undeniable that up to now, simply adding more has led to better capabilities. But a major barrier to continuing down this path is that companies have already trained their AIs on more or less the entire internet, and are running out of additional data to hoover up. There aren’t 10 more internets’ worth of human-generated content for today’s AIs to inhale.
  • A mature technology is one where everyone knows how to build it. Absent profound breakthroughs—which become exceedingly rare—no one has an edge in performance
  • companies look for efficiencies, and whoever is winning shifts from who is in the lead to who can cut costs to the bone. The last major technology this happened with was electric vehicles, and now it appears to be happening to AI.
  • the future for AI startups—like OpenAI and Anthropic—could be dim.
  • Microsoft and Google will be able to entice enough users to make their AI investments worthwhile, doing so will require spending vast amounts of money over a long period of time, leaving even the best-funded AI startups—with their comparatively paltry warchests—unable to compete.
  • Many other AI startups, even well-funded ones, are apparently in talks to sell themselves.
  • the bottom line is that for a popular service that relies on generative AI, the costs of running it far exceed the already eye-watering cost of training it.
  • That difference is alarming, but what really matters to the long-term health of the industry is how much it costs to run AIs. 
  • Changing people’s mindsets and habits will be among the biggest barriers to swift adoption of AI. That is a remarkably consistent pattern across the rollout of all new technologies.
  • the industry spent $50 billion on chips from Nvidia to train AI in 2023, but brought in only $3 billion in revenue.
  • For an almost entirely ad-supported company like Google, which is now offering AI-generated summaries across billions of search results, analysts believe delivering AI answers on those searches will eat into the company’s margins
  • Google, Microsoft and others said their revenue from cloud services went up, which they attributed in part to those services powering other company’s AIs. But sustaining that revenue depends on other companies and startups getting enough value out of AI to justify continuing to fork over billions of dollars to train and run those systems
  • three in four white-collar workers now use AI at work. Another survey, from corporate expense-management and tracking company Ramp, shows about a third of companies pay for at least one AI tool, up from 21% a year ago.
  • OpenAI doesn’t disclose its annual revenue, but the Financial Times reported in December that it was at least $2 billion, and that the company thought it could double that amount by 2025. 
  • That is still a far cry from the revenue needed to justify OpenAI’s now nearly $90 billion valuation
  • the company excels at generating interest and attention, but it’s unclear how many of those users will stick around. 
  • AI isn’t nearly the productivity booster it has been touted as
  • While these systems can help some people do their jobs, they can’t actually replace them. This means they are unlikely to help companies save on payroll. He compares it to the way that self-driving trucks have been slow to arrive, in part because it turns out that driving a truck is just one part of a truck driver’s job.
  • Add in the myriad challenges of using AI at work. For example, AIs still make up fake information,
  • getting the most out of open-ended chatbots isn’t intuitive, and workers will need significant training and time to adjust.
  • That’s because AI has to think anew every single time something is asked of it, and the resources that AI uses when it generates an answer are far larger than what it takes to, say, return a conventional search result
  • None of this is to say that today’s AI won’t, in the long run, transform all sorts of jobs and industries. The problem is that the current level of investment—in startups and by big companies—seems to be predicated on the idea that AI is going to get so much better, so fast, and be adopted so quickly that its impact on our lives and the economy is hard to comprehend. 
  • Mounting evidence suggests that won’t be the case.
Javier E

OpenAI Whistle-Blowers Describe Reckless and Secretive Culture - The New York Times - 0 views

  • A group of OpenAI insiders is blowing the whistle on what they say is a culture of recklessness and secrecy at the San Francisco artificial intelligence company, which is racing to build the most powerful A.I. systems ever created.
  • The group, which includes nine current and former OpenAI employees, has rallied in recent days around shared concerns that the company has not done enough to prevent its A.I. systems from becoming dangerous.
  • The members say OpenAI, which started as a nonprofit research lab and burst into public view with the 2022 release of ChatGPT, is putting a priority on profits and growth as it tries to build artificial general intelligence, or A.G.I., the industry term for a computer program capable of doing anything a human can.
  • ...21 more annotations...
  • They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign.
  • “OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there,” said Daniel Kokotajlo, a former researcher in OpenAI’s governance division and one of the group’s organizers.
  • Other members include William Saunders, a research engineer who left OpenAI in February, and three other former OpenAI employees: Carroll Wainwright, Jacob Hilton and Daniel Ziegler. Several current OpenAI employees endorsed the letter anonymously because they feared retaliation from the company,
  • At OpenAI, Mr. Kokotajlo saw that even though the company had safety protocols in place — including a joint effort with Microsoft known as the “deployment safety board,” which was supposed to review new models for major risks before they were publicly released — they rarely seemed to slow anything down.
  • So was the departure of Dr. Leike, who along with Dr. Sutskever had led OpenAI’s “superalignment” team, which focused on managing the risks of powerful A.I. models. In a series of public posts announcing his departure, Dr. Leike said he believed that “safety culture and processes have taken a back seat to shiny products.”
  • “When I signed up for OpenAI, I did not sign up for this attitude of ‘Let’s put things out into the world and see what happens and fix them afterward,’” Mr. Saunders said.
  • Mr. Kokotajlo, 31, joined OpenAI in 2022 as a governance researcher and was asked to forecast A.I. progress. He was not, to put it mildly, optimistic. In his previous job at an A.I. safety organization, he predicted that A.G.I. might arrive in 2050. But after seeing how quickly A.I. was improving, he shortened his timelines. Now he believes there is a 50 percent chance that A.G.I. will arrive by 2027 — in just three years.
  • He also believes that the probability that advanced A.I. will destroy or catastrophically harm humanity — a grim statistic often shortened to “p(doom)” in A.I. circles — is 70 percent.
  • Last month, two senior A.I. researchers — Ilya Sutskever and Jan Leike — left OpenAI under a cloud. Dr. Sutskever, who had been on OpenAI’s board and voted to fire Mr. Altman, had raised alarms about the potential risks of powerful A.I. systems. His departure was seen by some safety-minded employees as a setback.
  • Mr. Kokotajlo said, he became so worried that, last year, he told Mr. Altman that the company should “pivot to safety” and spend more time and resources guarding against A.I.’s risks rather than charging ahead to improve its models. He said that Mr. Altman had claimed to agree with him, but that nothing much changed.
  • In April, he quit. In an email to his team, he said he was leaving because he had “lost confidence that OpenAI will behave responsibly” as its systems approach human-level intelligence.
  • “The world isn’t ready, and we aren’t ready,” Mr. Kokotajlo wrote. “And I’m concerned we are rushing forward regardless and rationalizing our actions.”
  • On his way out, Mr. Kokotajlo refused to sign OpenAI’s standard paperwork for departing employees, which included a strict nondisparagement clause barring them from saying negative things about the company, or else risk having their vested equity taken away.
  • Many employees could lose out on millions of dollars if they refused to sign. Mr. Kokotajlo’s vested equity was worth roughly $1.7 million, he said, which amounted to the vast majority of his net worth, and he was prepared to forfeit all of it.
  • Mr. Altman said he was “genuinely embarrassed” not to have known about the agreements, and the company said it would remove nondisparagement clauses from its standard paperwork and release former employees from their agreements.
  • In their open letter, Mr. Kokotajlo and the other former OpenAI employees call for an end to using nondisparagement and nondisclosure agreements at OpenAI and other A.I. companies.
  • “Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,”
  • They also call for A.I. companies to “support a culture of open criticism” and establish a reporting process for employees to anonymously raise safety-related concerns.
  • They have retained a pro bono lawyer, Lawrence Lessig, the prominent legal scholar and activist
  • Mr. Kokotajlo and his group are skeptical that self-regulation alone will be enough to prepare for a world with more powerful A.I. systems. So they are calling for lawmakers to regulate the industry, too.
  • “There needs to be some sort of democratically accountable, transparent governance structure in charge of this process," Mr. Kokotajlo said. “Instead of just a couple of different private companies racing with each other, and keeping it all secret.”