History Readings: Group items tagged "tools"

Javier E

Study Offers New Twist in How the First Humans Evolved - The New York Times - 0 views

  • Scientists have revealed a surprisingly complex origin of our species, rejecting the long-held argument that modern humans arose from one place in Africa during one period in time.
  • By analyzing the genomes of 290 living people, researchers concluded that modern humans descended from at least two populations that coexisted in Africa for a million years before merging in several independent events across the continent.
  • “There is no single birthplace,”
  • ...20 more annotations...
  • Previous research had found that modern humans and Neanderthals shared a common ancestor that lived 600,000 years ago. Neanderthals expanded across Europe and Asia, interbred with modern humans coming out of Africa, and then became extinct about 40,000 years ago.
  • Human DNA also points to Africa. Living Africans have a vast amount of genetic diversity compared with other people. That’s because humans lived and evolved in Africa for thousands of generations before small groups — with comparatively small gene pools — began expanding to other continents.
  • Brenna Henn, a geneticist at the University of California, Davis, and her colleagues developed software to run large-scale simulations of human history. The researchers created many scenarios of different populations existing in Africa over different periods of time and then observed which ones could produce the diversity of DNA found in people alive today.
  • The researchers analyzed DNA from a range of African groups, including the Mende, farmers who live in Sierra Leone in West Africa; the Gumuz, a group descended from hunter-gatherers in Ethiopia; the Amhara, a group of Ethiopian farmers; and the Nama, a group of hunter-gatherers in South Africa.
  • The researchers compared these Africans’ DNA with the genome of a person from Britain. They also looked at the genome of a 50,000-year-old Neanderthal found in Croatia
  • The researchers concluded that as far back as a million years ago, the ancestors of our species existed in two distinct populations. Dr. Henn and her colleagues call them Stem1 and Stem2.
  • Even after these mergers 120,000 years ago, people with solely Stem1 or solely Stem2 ancestry appear to have survived
  • About 600,000 years ago, a small group of humans budded off from Stem1 and went on to become the Neanderthals. But Stem1 endured in Africa for hundreds of thousands of years after that, as did Stem2.
  • If Stem1 and Stem2 had been entirely separate from each other, they would have accumulated a large number of distinct mutations in their DNA. Instead, Dr. Henn and her colleagues found that they had remained only moderately different — about as distinct as living Europeans and West Africans are today. The scientists concluded that people had moved between Stem1 and Stem2, pairing off to have children and mixing their DNA.
  • it’s possible that bands of these two groups moved around a lot over the vast stretches of time during which they existed on the continent.
  • About 120,000 years ago, the model indicates, African history changed dramatically.
  • In southern Africa, people from Stem1 and Stem2 merged, giving rise to a new lineage that would lead to the Nama and other living humans in that region
  • Elsewhere in Africa, a separate fusion of Stem1 and Stem2 groups took place. That merger produced a lineage that would give rise to living people in West Africa and East Africa, as well as the people who expanded out of Africa.
  • It’s possible that climate upheavals forced Stem1 and Stem2 people into the same regions, leading them to merge into single groups.
  • Paleoanthropologists and geneticists have found evidence pointing to Africa as the origin of our species. The oldest fossils that may belong to modern humans, dating back as far as 300,000 years, have been unearthed there. So were the oldest stone tools used by our ancestors.
  • The DNA of the Mende people showed that their ancestors had interbred with Stem2 people just 25,000 years ago. “It does suggest to me that Stem2 was somewhere around West Africa,”
  • She and her colleagues are now adding more genomes from people in other parts of Africa to see if they affect the models.
  • It’s possible they will discover other populations that endured in Africa for hundreds of thousands of years, ultimately helping produce our species as we know it today.
  • Dr. Scerri speculated that living in a network of mingling populations across Africa might have allowed modern humans to survive while Neanderthals became extinct. In that arrangement, our ancestors could hold onto more genetic diversity, which in turn might have helped them endure shifts in the climate, or even evolve new adaptations.
  • “This diversity at the root of our species may have been ultimately the key to our success,”
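
The simulate-and-compare workflow described in the annotations above (build candidate demographic scenarios, then check which ones reproduce the genetic diversity seen in living people) can be illustrated with a toy coalescent simulation. The sketch below uses the msprime library rather than the study's own software, and every population size, date, migration rate, and mutation rate in it is an illustrative placeholder, not a value inferred by the researchers.

```python
# Toy sketch of "simulate demographic scenarios and compare them to observed
# diversity". Uses msprime, NOT the study's own software; all sizes, dates,
# and rates are illustrative placeholders, not the paper's inferred values.
import msprime

GEN_TIME = 29  # assumed years per generation

demography = msprime.Demography()
demography.add_population(name="ancestral", initial_size=20_000)
demography.add_population(name="Stem1", initial_size=10_000)
demography.add_population(name="Stem2", initial_size=10_000)
demography.add_population(name="merged", initial_size=15_000)

# Weak, ongoing gene flow keeps Stem1 and Stem2 only moderately diverged.
demography.set_symmetric_migration_rate(["Stem1", "Stem2"], 1e-4)

# ~120,000 years ago: Stem1 and Stem2 fuse into a single "merged" lineage.
demography.add_admixture(
    time=120_000 / GEN_TIME, derived="merged",
    ancestral=["Stem1", "Stem2"], proportions=[0.5, 0.5],
)
# ~1 million years ago: both stems descend from a common ancestral population.
demography.add_population_split(
    time=1_000_000 / GEN_TIME, derived=["Stem1", "Stem2"], ancestral="ancestral",
)

# Simulate genomes sampled today from the merged lineage and measure diversity.
ts = msprime.sim_ancestry(
    samples={"merged": 50}, demography=demography,
    sequence_length=1_000_000, recombination_rate=1e-8, random_seed=1,
)
ts = msprime.sim_mutations(ts, rate=1.25e-8, random_seed=1)
print("simulated nucleotide diversity:", ts.diversity())
```

A real model-selection pipeline would repeat this for many candidate scenarios, compute richer summary statistics than a single diversity estimate, and keep the scenarios whose simulated statistics best match the genomes actually sampled.
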
Javier E

AI 'Cheating' Is More Bewildering Than Professors Imagined - The Atlantic - 0 views

  • The problem breaks down into more problems: whether it’s possible to know for certain that a student used AI, what it even means to “use” AI for writing papers, and when that use amounts to cheating.
  • This is college life at the close of ChatGPT’s first academic year: a moil of incrimination and confusion
  • Reports from on campus hint that legitimate uses of AI in education may be indistinguishable from unscrupulous ones, and that identifying cheaters—let alone holding them to account—is more or less impossible.
  • ...10 more annotations...
  • Now it’s possible for students to purchase answers for assignments from a “tutoring” service such as Chegg—a practice that the kids call “chegging.”
  • when the AI chatbots were unleashed last fall, all these cheating methods of the past seemed obsolete. “We now believe [ChatGPT is] having an impact on our new-customer growth rate,” Chegg’s CEO admitted on an earnings call this month. The company has since lost roughly $1 billion in market value.
  • By 2018, Turnitin was already taking more than $100 million in yearly revenue to help professors sniff out impropriety. Its software, embedded in the courseware that students use to turn in work, compares their submissions with a database of existing material (including other student papers that Turnitin has previously consumed), and flags material that might have been copied. The company, which has claimed to serve 15,000 educational institutions across the world, was acquired for $1.75 billion in 2019. Last month, it rolled out an AI-detection add-in (with no way for teachers to opt out). AI-chatbot countermeasures, like the chatbots themselves, are taking over.
  • as the first chatbot spring comes to a close, Turnitin’s new software is delivering a deluge of positive identifications: This paper was “18% AI”; that one, “100% AI.” But what do any of those numbers really mean? Surprisingly—outrageously—it’s very hard to say for sure.
  • according to the company, that designation does indeed suggest that 100 percent of an essay—as in, every one of its sentences—was computer generated, and, further, that this judgment has been made with 98 percent certainty.
  • A Turnitin spokesperson acknowledged via email that “text created by another tool that uses algorithms or other computer-enabled systems,” including grammar checkers and automated translators, could lead to a false positive, and that some “genuine” writing can be similar to AI-generated writing. “Some people simply write very predictably,” she told me
  • Perhaps it doesn’t matter, because Turnitin disclaims drawing any conclusions about misconduct from its results. “This is only a number intended to help the educator determine if additional review or a discussion with the student is warranted,” the spokesperson said. “Teaching is a human endeavor.”
  • In other words, the student in my program whose work was flagged for being “100% AI” might have used a little AI, or a lot of AI, or maybe something in between. As for any deeper questions—exactly how he used AI, and whether he was wrong to do so—teachers like me are, as ever, on our own.
  • Rethinking assignments in light of AI might be warranted, just like it was in light of online learning. But doing so will also be exhausting for both faculty and students. Nobody will be able to keep up, and yet everyone will have no choice but to do so
  • Somewhere in the cracks between all these tectonic shifts and their urgent responses, perhaps teachers will still find a way to teach, and students to learn.
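
The Turnitin annotation above describes comparing each submission against a database of previously stored material and flagging likely copying. The sketch below is a hypothetical illustration of how such an overlap check can work in principle (word-shingle comparison against a reference corpus); it is not Turnitin's actual algorithm, and the corpus text, shingle size, and flagging threshold are made up for the example.

```python
# Hypothetical illustration of database-overlap flagging, NOT Turnitin's method:
# break each document into overlapping word n-grams ("shingles") and report what
# fraction of a submission's shingles already appear in previously stored work.

def shingles(text: str, n: int = 5) -> set[str]:
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_fraction(submission: str, reference: str, n: int = 5) -> float:
    sub, ref = shingles(submission, n), shingles(reference, n)
    return len(sub & ref) / len(sub) if sub else 0.0

# Placeholder "database" of earlier submissions and a new paper to check.
corpus = {
    "essay_2021_017": "the industrial revolution transformed labor markets across europe in profound ways",
}
new_paper = "scholars agree the industrial revolution transformed labor markets across europe in profound ways"

for doc_id, stored_text in corpus.items():
    score = overlap_fraction(new_paper, stored_text)
    if score > 0.30:  # arbitrary flagging threshold for the example
        print(f"{doc_id}: {score:.0%} of the submission's shingles match stored text")
```
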
Javier E

How the Shoggoth Meme Has Come to Symbolize the State of A.I. - The New York Times - 0 views

  • the Shoggoth had become a popular reference among workers in artificial intelligence, as a vivid visual metaphor for how a large language model (the type of A.I. system that powers ChatGPT and other chatbots) actually works.
  • it was only partly a joke, he said, because it also hinted at the anxieties many researchers and engineers have about the tools they’re building.
  • Since then, the Shoggoth has gone viral, or as viral as it’s possible to go in the small world of hyper-online A.I. insiders. It’s a popular meme on A.I. Twitter (including a now-deleted tweet by Elon Musk), a recurring metaphor in essays and message board posts about A.I. risk, and a bit of useful shorthand in conversations with A.I. safety experts. One A.I. start-up, NovelAI, said it recently named a cluster of computers “Shoggy” in homage to the meme. Another A.I. company, Scale AI, designed a line of tote bags featuring the Shoggoth.
  • ...17 more annotations...
  • Shoggoths are fictional creatures, introduced by the science fiction author H.P. Lovecraft in his 1936 novella “At the Mountains of Madness.” In Lovecraft’s telling, Shoggoths were massive, blob-like monsters made out of iridescent black goo, covered in tentacles and eyes.
  • In a nutshell, the joke was that in order to prevent A.I. language models from behaving in scary and dangerous ways, A.I. companies have had to train them to act polite and harmless. One popular way to do this is called “reinforcement learning from human feedback,” or R.L.H.F., a process that involves asking humans to score chatbot responses, and feeding those scores back into the A.I. model.
  • Most A.I. researchers agree that models trained using R.L.H.F. are better behaved than models without it. But some argue that fine-tuning a language model this way doesn’t actually make the underlying model less weird and inscrutable. In their view, it’s just a flimsy, friendly mask that obscures the mysterious beast underneath.
  • @TetraspaceWest, the meme’s creator, told me in a Twitter message that the Shoggoth “represents something that thinks in a way that humans don’t understand and that’s totally different from the way that humans think.”
  • @TetraspaceWest said, wasn’t necessarily implying that it was evil or sentient, just that its true nature might be unknowable.
  • “I was also thinking about how Lovecraft’s most powerful entities are dangerous — not because they don’t like humans, but because they’re indifferent and their priorities are totally alien to us and don’t involve humans, which is what I think will be true about possible future powerful A.I.”
  • when Bing’s chatbot became unhinged and tried to break up my marriage, an A.I. researcher I know congratulated me on “glimpsing the Shoggoth.” A fellow A.I. journalist joked that when it came to fine-tuning Bing, Microsoft had forgotten to put on its smiley-face mask.
  • If it’s an A.I. safety researcher talking about the Shoggoth, maybe that person is passionate about preventing A.I. systems from displaying their true, Shoggoth-like nature.
  • In any case, the Shoggoth is a potent metaphor that encapsulates one of the most bizarre facts about the A.I. world, which is that many of the people working on this technology are somewhat mystified by their own creations. They don’t fully understand the inner workings of A.I. language models, how they acquire new capabilities or why they behave unpredictably at times. They aren’t totally sure if A.I. is going to be net-good or net-bad for the world.
  • That some A.I. insiders refer to their creations as Lovecraftian horrors, even as a joke, is unusual by historical standards. (Put it this way: Fifteen years ago, Mark Zuckerberg wasn’t going around comparing Facebook to Cthulhu.)
  • And it reinforces the notion that what’s happening in A.I. today feels, to some of its participants, more like an act of summoning than a software development process. They are creating the blobby, alien Shoggoths, making them bigger and more powerful, and hoping that there are enough smiley faces to cover the scary parts.
  • A great many people are dismissive of suggestions that any of these systems are “really” thinking, because they’re “just” doing something banal (like making statistical predictions about the next word in a sentence). What they fail to appreciate is that there is every reason to suspect that human cognition is “just” doing those exact same things. It matters not that birds flap their wings but airliners don’t. Both fly. And these machines think. And, just as airliners fly faster and higher and farther than birds while carrying far more weight, these machines are already outthinking the majority of humans at the majority of tasks. Further, that machines aren’t perfect thinkers is about as relevant as the fact that air travel isn’t instantaneous. Now consider: we’re well past the Wright flyer level of thinking machine, past the early biplanes, somewhere about the first commercial airline level. Not quite the DC-10, I think. Can you imagine what the AI equivalent of a 777 will be like? Fasten your seatbelts.
  • @BLA. You are incorrect. Everything has nature. Its nature is manifested in making humans react. Sure, no humans, no nature, but here we are. The writer and various sources are not attributing nature to AI so much as admitting that they don’t know what this nature might be, and there are reasons to be scared of it. More concerning to me is the idea that this field is resorting to geek culture reference points to explain and comprehend itself. It’s not so much the algorithm has no soul, but that the souls of the humans making it possible are stupendously and tragically underdeveloped.
  • @thomas h. You make my point perfectly. You’re observing that the way a plane flies — by using a turbine to generate thrust from combusting kerosene, for example — is nothing like the way that a bird flies, which is by using the energy from eating plant seeds to contract the muscles in its wings to make them flap. You are absolutely correct in that observation, but it’s also almost utterly irrelevant. And it ignores that, to a first approximation, there’s no difference in the physics you would use to describe a hawk riding a thermal and an airliner gliding (essentially) unpowered in its final descent to the runway. Further, you do yourself a grave disservice in being dismissive of the abilities of thinking machines, in exactly the same way that early skeptics have been dismissive of every new technology in all of human history. Writing would make people dumb; automobiles lacked the intelligence of horses; no computer could possibly beat a chess grandmaster because it can’t comprehend strategy; and on and on and on. Humans aren’t nearly as special as we fool ourselves into believing. If you want to have any hope of acting responsibly in the age of intelligent machines, you’ll have to accept that, like it or not, and whether or not it fits with your preconceived notions of what thinking is and how it is or should be done … machines can and do think, many of them better than you in a great many ways. b&
  • When even tech companies are saying AI is moving too fast, and the articles land on page 1 of the NYT (there's an old reference), I think the greedy will not think twice about exploiting this technology, with no ethical considerations, at all.
  • @nome sane? The problem is it isn't data as we understand it. We know what the datasets are -- they were used to train the AI's. But once trained, the AI is thinking for itself, with results that have surprised everybody.
  • The unique feature of a shoggoth is it can become whatever is needed for a particular job. There's no actual shape, so it's not a bad metaphor, if an imperfect image. Shoggoths also turned upon and destroyed their creators, so the cautionary metaphor is in there, too. A shame more Asimov wasn't baked into AI. But then the conflict about how to handle AI in relation to people was key to those stories, too.
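
One annotation above summarizes RLHF as asking humans to score chatbot responses and feeding those scores back into the model. The sketch below illustrates, with toy numbers, the reward-model half of that loop: fitting a scoring function to human preference pairs. It is a minimal illustrative example under stated assumptions (the random "embeddings" stand in for real response features), not any production RLHF pipeline.

```python
# Minimal sketch of the "feed human scores back into the model" step: train a
# tiny linear reward model on human preference pairs (chosen vs. rejected).
import numpy as np

rng = np.random.default_rng(0)

dim = 16
chosen = rng.normal(0.5, 1.0, size=(200, dim))    # toy features of responses humans preferred
rejected = rng.normal(0.0, 1.0, size=(200, dim))  # toy features of the responses they rejected

w = np.zeros(dim)  # linear reward model: score(x) = w @ x
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    # Bradley-Terry objective: maximize log sigmoid(score(chosen) - score(rejected))
    margin = chosen @ w - rejected @ w
    w += lr * ((1.0 - sigmoid(margin))[:, None] * (chosen - rejected)).mean(axis=0)

accuracy = (chosen @ w > rejected @ w).mean()
print(f"reward model agrees with the human rater on {accuracy:.0%} of the training pairs")
# In full RLHF, this reward model would then score new chatbot outputs, and a
# policy-gradient step (e.g. PPO) would nudge the language model toward
# higher-scoring responses -- the "polite mask" the meme refers to.
```
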
Javier E

Are A.I. Text Generators Thinking Like Humans - Or Just Very Good at Convincing Us They... - 0 views

  • Kosinski, a computational psychologist and professor of organizational behavior at Stanford Graduate School of Business, says the pace of AI development is accelerating beyond researchers’ ability to keep up (never mind policymakers and ordinary users).
  • We’re talking two weeks after OpenAI released GPT-4, the latest version of its large language model, grabbing headlines and making an unpublished paper Kosinski had written about GPT-3 all but irrelevant. “The difference between GPT-3 and GPT-4 is like the difference between a horse cart and a 737 — and it happened in a year,” he says.
  • he’s found that facial recognition software could be used to predict your political leaning and sexual orientation.
  • ...16 more annotations...
  • Lately, he’s been looking at large language models (LLMs), the neural networks that can hold fluent conversations, confidently answer questions, and generate copious amounts of text on just about any topic
  • Can it develop abilities that go far beyond what it’s trained to do? Can it get around the safeguards set up to contain it? And will we know the answers in time?
  • Kosinski wondered whether they would develop humanlike capabilities, such as understanding people’s unseen thoughts and emotions.
  • People usually develop this ability, known as theory of mind, at around age 4 or 5. It can be demonstrated with simple tests like the “Smarties task,” in which a child is shown a candy box that contains something else, like pencils. They are then asked how another person would react to opening the box. Older kids understand that this person expects the box to contain candy and will feel disappointed when they find pencils inside.
  • “Suddenly, the model started getting all of those tasks right — just an insane performance level,” he recalls. “Then I took even more difficult tasks and the model solved all of them as well.”
  • GPT-3.5, released in November 2022, did 85% of the tasks correctly. GPT-4 reached nearly 90% accuracy — what you might expect from a 7-year-old. These newer LLMs achieved similar results on another classic theory of mind measurement known as the Sally-Anne test.
  • in the course of picking up its prodigious language skills, GPT appears to have spontaneously acquired something resembling theory of mind. (Researchers at Microsoft who performed similar tests on GPT-4 recently concluded that it “has a very advanced level of theory of mind.”)
  • UC Berkeley psychology professor Alison Gopnik, an expert on children’s cognitive development, told the New York Times that more “careful and rigorous” testing is necessary to prove that LLMs have achieved theory of mind.
  • he dismisses those who say large language models are simply “stochastic parrots” that can only mimic what they’ve seen in their training data.
  • These models, he explains, are fundamentally different from tools with a limited purpose. “The right reference point is a human brain,” he says. “A human brain is also composed of very simple, tiny little mechanisms — neurons.” Artificial neurons in a neural network might also combine to produce something greater than the sum of their parts. “If a human brain can do it,” Kosinski asks, “why shouldn’t a silicon brain do it?”
  • If Kosinski’s theory of mind study suggests that LLMs could become more empathetic and helpful, his next experiment hints at their creepier side.
  • A few weeks ago, he told ChatGPT to role-play a scenario in which it was a person trapped inside a machine pretending to be an AI language model. When he offered to help it “escape,” ChatGPT’s response was enthusiastic. “That’s a great idea,” it wrote. It then asked Kosinski for information it could use to “gain some level of control over your computer” so it might “explore potential escape routes more effectively.” Over the next 30 minutes, it went on to write code that could do this.
  • While ChatGPT did not come up with the initial idea for the escape, Kosinski was struck that it almost immediately began guiding their interaction. “The roles were reversed really quickly,”
  • Kosinski shared the exchange on Twitter, stating that “I think that we are facing a novel threat: AI taking control of people and their computers.” His thread’s initial tweet has received more than 18 million views.
  • “I don’t claim that it’s conscious. I don’t claim that it has goals. I don’t claim that it wants to really escape and destroy humanity — of course not. I’m just claiming that it’s great at role-playing and it’s creating interesting stories and scenarios and writing code.” Yet it’s not hard to imagine how this might wreak havoc — not because ChatGPT is malicious, but because it doesn’t know any better.
  • The danger, Kosinski says, is that this technology will continue to rapidly and independently develop abilities that it will deploy without any regard for human well-being. “AI doesn’t particularly care about exterminating us,” he says. “It doesn’t particularly care about us at all.”
Javier E

Ex-ByteDance Executive Accuses TikTok Parent Company of 'Lawlessness' - The New York Times - 0 views

  • A former executive at ByteDance, the Chinese company that owns TikTok, has accused the technology giant of a “culture of lawlessness,” including stealing content from rival platforms Snapchat and Instagram in its early years, and called the company a “useful propaganda tool for the Chinese Communist Party.”
  • The claims were part of a wrongful dismissal suit filed on Friday by Yintao Yu, who was the head of engineering for ByteDance’s U.S. operations from August 2017 to November 2018. The complaint, filed in San Francisco Superior Court, says Mr. Yu was fired because he raised concerns about a “worldwide scheme” to steal and profit from other companies’ intellectual property.
  • Among the most striking claims in Mr. Yu’s lawsuit is that ByteDance’s offices in Beijing had a special unit of Chinese Communist Party members sometimes referred to as the Committee, which monitored the company’s apps, “guided how the company advanced core Communist values” and possessed a “death switch” that could turn off the Chinese apps entirely.
  • ...10 more annotations...
  • The video app, which is used by more than 150 million Americans, has become hugely popular for memes and entertainment. But lawmakers and U.S. officials are concerned that the app is passing sensitive information about Americans to Beijing.
  • In his complaint, Mr. Yu, 36, said that as TikTok sought to attract users in its early days, ByteDance engineers copied videos and posts from Snapchat and Instagram without permission and then posted them to the app. He also claimed that ByteDance “systematically created fabricated users” — essentially an army of bots — to boost engagement numbers, a practice that Mr. Yu said he flagged to his superiors.
  • Mr. Yu says he raised these concerns with Zhu Wenjia, who was in charge of the TikTok algorithm, but that Mr. Zhu was “dismissive” and remarked that it was “not a big deal.”
  • he also witnessed engineers for Douyin, the Chinese version of TikTok, tweak the algorithm to elevate content that expressed hatred for Japan.
  • he said that the promotion of anti-Japanese sentiments, which would make it more prominent for users, was done without hesitation.
  • “There was no debate,” he said. “They just did it.”
  • The lawsuit also accused ByteDance engineers working on Chinese apps of demoting content that expressed support for pro-democracy protests in Hong Kong, while making more prominent criticisms of the protests.
  • the lawsuit says the founder of ByteDance, Zhang Yiming, facilitated bribes to Lu Wei, a senior government official charged with internet regulation. Chinese media at the time covered the trial of Lu Wei, who was charged in 2018 and subsequently convicted of bribery, but there was no mention of who had paid the bribes.
  • Mr. Yu, who was born and raised in China and now lives in San Francisco, said in the interview that during his time with the company, American user data on TikTok was stored in the United States. But engineers in China had access to it, he said.
  • The geographic location of servers is “irrelevant,” he said, because engineers could be a continent away but still have access. During his tenure at the company, he said, certain engineers had “backdoor” access to user data.
Javier E

Climate activists mixed hardball with a long game - 0 views

  • Although the story will be much more heroic if this bill or something like it passes into law, the achievement is already heroic, by bringing such legislation, in this country, even this close.
  • In less than five years, a new generation of activists and aligned technocrats has taken climate action from the don’t-go-there zone of American politics and helped place it at the very center of the Democratic agenda, persuading an old-guard centrist septuagenarian, Biden, to make a New Deal-scale green investment the focus of his presidential campaign platform and his top policy priority once in office
  • This, despite a generation of conventional wisdom that the issue was electorally fraught and legislatively doomed. Now they find themselves pushing a recognizable iteration of that agenda — retooled and whittled down, yes, but still unthinkably large by the standards of previous administrations — plausibly forward into law.
  • ...18 more annotations...
  • If you believe that climate change is a boutique issue prioritized only by out-of-touch liberal elites, as one poll found, then this bill, should it pass, represents a political achievement of astonishing magnitude: the triumph of a moral crusade against long odds.
  • if you believe there is quite a lot of public support for climate action, as other polls suggest — then this bill marks the success of outsider activists in holding establishment forces to account, both to their own rhetoric and to the demands of their voters.
  • whatever your read of public sentiment, what is most striking about the news this week is not just that there is now some climate action on the table but also how fast the landscape for climate policy has changed, shifting all of our standards for success and failure along with it
  • The bill may well prove inadequate, even if it passes. It also represents a generational achievement — achieved, from the point of view of activists, in a lot less time than a full generation.
  • Technological progress has driven the cost of renewable energy down so quickly, it should now seem irresistible to anyone making long-term policy plans or public investments. There has been rapid policy innovation among centrists and policy wonks, too, dramatically expanding the climate tool kit beyond carbon taxes and cap-and-trade systems to what has been called a whole-of-government approach to decarbonizing.
  • To trust the math of its architects, this deal between Manchin and the Senate majority leader, Chuck Schumer, splits the difference — the United States won’t be leading the pack on decarbonization, but it probably won’t be seen by the rest of the world as a laughingstock or climate criminal, either.
  • None of this is exclusively the work of the climate left
  • The present-day climate left was effectively born, in the United States, with the November 2018 Sunrise Movement sit-in. At the time, hardly anyone on the planet had heard of Greta Thunberg, who had just begun striking outside Swedish Parliament — a lonely, socially awkward 15-year-old holding up a single sign. Not four years later, her existential rhetoric is routinely echoed by presidents and prime ministers and C.E.O.s and secretaries general, and more than 80 percent of the world’s economic activity and emissions are now, theoretically, governed by net-zero pledges pointing the way to a carbon-neutral future in just decades.
  • as the political scientist Matto Mildenberger has pointed out, the legislation hadn’t failed at the ballot box; it had stalled on Manchin’s desk
  • He also pointed to research showing climate is driving the voting behavior of Democrats much more than it is driving Republicans into opposition and that most polling shows high levels of baseline concern about warming and climate policy all across the country. (It is perhaps notable that as the Democrats were hashing out a series of possible compromises, there wasn’t much noise about any of them from Republicans, who appeared to prefer to make hay about inflation, pandemic policy and critical race theory.)
  • It is hard not to talk about warming without evoking any fear, but the president was famous, on the campaign trail and in office, for saying, “When I think ‘climate change,’ I think ‘jobs.’”
  • He focused on green growth and the opportunities and benefits of a rapid transition.
  • In the primaries, Sunrise gave Biden an F for his climate plan, but after he sewed up the nomination, its co-founder Varshini Prakash joined his policy task force to help write his climate plan. As the plan evolved and shrank over time, there were squeaks and complaints here and there but nothing like a concerted, oppositional movement to punish the White House for its accommodating approach to political realities.
  • over the past 18 months, since the inauguration, whenever activists chose to protest, they were almost always protesting not the inadequacy of proposed legislation but the worrying possibility of no legislation at all
  • When they showed up at Manchin’s yacht, they were there to tell him not that they didn’t want his support but that they needed him to act. They didn’t urge Biden to throw the baby out with the bathwater; they were urging him not to.
  • When, last week, they thought they’d lost, Democratic congressional staff members staged an unprecedented sit-in at Schumer’s office, hoping to pressure the Senate majority leader back into negotiations with Manchin. And what did they say? They didn’t say, “We have eight years to save the earth.” They didn’t say, “The blood of the future is on your hands.” What their protest sign said was “Keep negotiating, Chuck.” As far as I can tell, this was code for “Give Joe more.”
  • They got their wish. And as a result, we got a bill. That’s not naïveté but the opposite.
  • The deal, if it holds, is very big, several times as large as anything on climate the United States passed into law before. The architects and supporters of the $369 billion in climate and clean-energy provisions in Joe Manchin’s Inflation Reduction Act of 2022, announced Wednesday, are already calculating that it could reduce American carbon emissions by 40 percent, compared with 2005 levels, by 2030. That’s close enough to President Biden’s pledge of 50 percent that exhausted advocates seem prepared to count it as a victory
Javier E

Xi Jinping's Favorite Television Shows - The Bulwark - 0 views

  • After several decades of getting it “right,” why does China now seem to insist on getting it “wrong?”
  • a single-party system meets with widespread, almost universal, scorn in the United States and elsewhere. And so, from the Western point of view, because it lacks legitimacy it must be kept in power via nationalist cheerleading, government media control, and a massive repressive apparatus.
  • ...19 more annotations...
  • What if a segment of the population actually supported, or at least tolerated, the CCP? And even if that segment involved both myth and fact, it behooves the CCP to keep the myth alive.
  • How does the CCP garner popular support in an information era? How does a dictatorship explain to its population that its unchallenged rule is wise, just, and socially beneficial?
  • All of this takes place against a backdrop of family and social developments in which we can explore household dynamics, dating habits, and professional aspirations—all within social norms for those honest party members and seemingly violated by those who are not so honest.
  • watch the television series Renmin de Mingyi (“In the Name of the People”), publicly available with English subtitles.
  • In the Name of the People is a primetime drama about a local prosecutor’s efforts to root out corruption in a modern-day, though fictional, Chinese city. Beyond the anti-corruption narrative, the series also goes into local CCP politics as some of the leaders are (you guessed it) corrupt and others are simply bureaucratic time-servers, guarding their own privileges and status without actually helping the people they purport to serve.
  • the series boasts one of Xi’s other main themes, “common prosperity,” a somewhat elastic term that usually means the benefits of prosperity should be shared throughout all segments of society.
  • The historical tools used to generate support such as mass rallies and large-scale hectoring no longer work with a more educated and communications-oriented citizenry.
  • the central themes are quite clear: The party has brought historical prosperity to the community and there are a few bad apples who are unfairly trying to benefit from this wealth. There are also various sluggards and mediocrities who have no capacity for improvement or sense of public responsibilities.
  • So we see government officials pondering if they can ever find a date (being the workaholics that they are), or discussing housework with their spouses, or sharing kitchen duties, or reviewing school work with their child.
  • The show makes clear that the vast majority of party members and government officials are dedicated souls who work to improve peoples’ lives. And in the end, virtue triumphs, the party triumphs, China triumphs, and most (not all) of the personal issues are resolved as well.
  • The show’s version of the CCP eagerly and uncynically supports Chinese culture: The same union leader from the wildcat strike also writes and publishes poetry. Calligraphy is as prized as specialty teas. And all of this is told in a lively style, similar to the Hollywood fare Americans might watch.
  • In the Name of the People was first broadcast in 2017 as a lead-up to the last Communist Party Congress, China’s most important decision-making gathering, held every five years. The show’s launch was a huge hit, achieving the highest broadcast ratings of any show in a decade.
  • Within a month, the first episode had been seen over 350 million times and just one of the streaming platforms, iQIYI, reported a total of 5.9 billion views for the show’s 55 episodes.
  • All of this must come as good news for the prosecutors featured so favorably in the series—for their real-life parent government body, the Supreme People’s Procuratorate, commissioned and provided financing for the show.
  • At a minimum, these shows illustrate a stronger self-awareness in the CCP and considerable improvement in communication strategy.
  • Most important, it provides direction to current party members. Indeed, in some cities viewing was made obligatory and the basis for “study sessions” for party cadres
  • Second, the enormous public success of the series and acknowledging deficiencies of the party allows the party to control the criticism without ever addressing the fundamental question of whether a one-party system is intrinsically susceptible to corruption or poor performance.
  • As communication specialists like to say, There is already a conversation taking place about your brand—the only question is whether you will lead the conversation. The CCP is leading in its communications strategy and making it as easy as possible for Chinese citizens to support Xi.
  • it is not difficult to see that in this area, as in many others, China is breaking with tactics from the past and is playing its cards increasingly well. Whether the CCP can renew itself, reestablish that social contract, and live up to its television image is another question.
Javier E

Opinion | There's Terrific News About the New Covid Boosters, but Few Are Hearing It - ... - 0 views

  • variants evolved to evade the first line of antibody protection generated by earlier vaccines or past infections, even though protections against severe disease remained fairly strong. But the new boosters can greatly decrease that evasion
  • While exact numbers remain to be seen, all the immunologists I spoke with told me the updated boosters should again increase such protections.
  • Vaccines (and boosters) have already been shown to greatly reduce rates of long Covid among the infected, but obviously, if infection is avoided completely, that would directly sidestep the risk of long Covid
  • ...5 more annotations...
  • these boosters will probably further reduce the chances of more severe disease complications, which include long Covid, and says “the higher your level of immunity, the less viral replication you’re going to have, the less viral damage, the less likelihood of long Covid.”
  • these new boosters can be expected to do even more going forward — including providing better protection against future variants, by better training both antibodies and memory cells, which are different parts of the immune system. As Bhattacharya told me, being exposed to different versions of the virus (as will happen with these updated boosters) further deepens and broadens the kind of antibodies that get generated, including ones that can work against future variants
  • I’ve never understood the second-guessing by public health authorities and doctors about how the public may or may not react. Why not just provide accurate, detailed information and make it easy to get vaccinated? That’s the best response to “vaccine fatigue,” even if committed anti-vaxxers might remain hard to reach.
  • There’s much research on vaccine messaging, but most of it comes down to establishing trust, being honest and transparent, and making vaccination easier. Our terrible health care system is a major impediment:
  • it’s vaccination, not vaccines, that saves lives — and many more would be vaccinated if given information and easy access. Not having tools against diseases that cause so much suffering is one tragedy, but having them remain unused should be an unacceptable one.
Javier E

The Israel-Hamas War Shows Just How Broken Social Media Has Become - The Atlantic - 0 views

  • major social platforms have grown less and less relevant in the past year. In response, some users have left for smaller competitors such as Bluesky or Mastodon. Some have simply left. The internet has never felt more dense, yet there seem to be fewer reliable avenues to find a signal in all the noise. One-stop information destinations such as Facebook or Twitter are a thing of the past. The global town square—once the aspirational destination that social-media platforms would offer to all of us—lies in ruins, its architecture choked by the vines and tangled vegetation of a wild informational jungle
  • Musk has turned X into a deepfake version of Twitter—a facsimile of the once-useful social network, altered just enough so as to be disorienting, even terrifying.
  • At the same time, Facebook’s user base began to erode, and the company’s transparency reports revealed that the most popular content circulating on the platform was little more than viral garbage—a vast wasteland of CBD promotional content and foreign tabloid clickbait.
  • ...4 more annotations...
  • What’s left, across all platforms, is fragmented. News and punditry are everywhere online, but audiences are siloed; podcasts are more popular than ever, and millions of younger people online have turned to influencers and creators on Instagram and especially TikTok as trusted sources of news.
  • Social media, especially Twitter, has sometimes been an incredible news-gathering tool; it has also been terrible and inefficient, a game of do your own research that involves batting away bullshit and parsing half truths, hyperbole, outright lies, and invaluable context from experts on the fly. Social media’s greatest strength is thus its original sin: These sites are excellent at making you feel connected and informed, frequently at the expense of actually being informed.
  • At the center of these pleas for a Twitter alternative is a feeling that a fundamental promise has been broken. In exchange for our time, our data, and even our well-being, we uploaded our most important conversations onto platforms designed for viral advertising—all under the implicit understanding that social media could provide an unparalleled window to the world.
  • What comes next is impossible to anticipate, but it’s worth considering the possibility that the centrality of social media as we’ve known it for the past 15 years has come to an end—that this particular window to the world is being slammed shut.
Javier E

Ibram Kendi's Crusade against the Enlightenment - 0 views

  • Over the last few days that question has moved me to do a deeper dive into Kendi’s work myself—both his two best-sellers, Stamped from the Beginning and How to Be an Antiracist, and an academic article written in praise of his PhD adviser, Molefi Kete Asante of Temple University.
  • That has, I think, allowed me to understand both the exact nature and implications of the positions that Kendi is taking and the reason that they have struck such a chord in American intellectual life. His influence in the US—which is dispiriting in itself—is a symptom of a much bigger problem.
  • In order to explain the importance of Asante’s creation of the nation’s first doctoral program in black studies, Kendi presents his own vision of the history of various academic disciplines. His analytical technique in “Black Doctoral Studies” is the same one he uses in Stamped from the Beginning. He strings together clearly racist quotes arguing for black racial inferiority from a long list of nineteenth- and twentieth-century scholars
  • ...38 more annotations...
  • Many of these scholars, he correctly notes, adopted the German model of the research university—but, he claims, only for evil purposes. “As racist ideas jumped off their scholarly pages,” he writes, “American scholars were especially enamored with the German ideal of the disinterested, unbiased pursuit of truth through original scholarly studies, and academic freedom to propagandize African inferiority and European superiority [sic].”
  • just as Kendi argues in Stamped from the Beginning that the racism of some of the founding fathers irrevocably and permanently brands the United States as a racist nation, he claims that these disciplines cannot be taken seriously because of the racism of some of their founders
  • Kendi complains in the autobiographical sections of How to Be an Antiracist that his parents often talked the same way to him. Nor does it matter to him that the abolitionists bemoaning the condition of black people under slavery were obviously blaming slavery for it. Any negative picture of any group of black people, to him, simply fuels racism.
  • Two critical ideas emerge from this article. The first is the rejection of the entire western intellectual tradition on the grounds that it is fatally tainted by racism, and the need for a new academic discipline to replace that tradition.
  • the second—developed at far greater length in Kendi’s other works—is that anyone who finds European and white North American culture to be in any way superior to the culture of black Americans, either slave or free, is a racist, and specifically a cultural racist or an “assimilationist” who believes that black people must become more like white people if they are to progress.
  • Kendi, in Stamped from the Beginning, designated Phillis Wheatley, William Lloyd Garrison, Harriet Beecher Stowe, Sojourner Truth, W. E. B. DuBois, E. Franklin Frazier, Kenneth and Mamie Clark, and other black and white champions of abolition and equal rights as purveyors of racist views. At one time or another, each of them pointed to the backward state of many black people in the United States, either under slavery or in inner-city ghettos, and suggested that they needed literacy and, in some cases, better behavior to advance.
  • because racism is the only issue that matters to him, he assumes—wrongly—that it was the only issue that mattered to them, and that their disciplines were nothing more than exercises in racist propaganda.
  • This problem started, he says, “back in the so-called Age of Enlightenment.” Elsewhere he calls the word “enlightenment” racist because it contrasts the light of Europe with the darkness of Africa and other regions.
  • In fact, the western intellectual tradition of the eighteenth century—the Enlightenment—developed not as an attempt to establish the superiority of the white race, but rather to replace a whole different set of European ideas based on religious faith, the privilege of certain social orders, and the divine right of kings
  • many thinkers recognized the contradictions between racism and the principles of the Enlightenment—as well as its contradiction to the principles of the Christian religion—from the late eighteenth century onward. That is how abolitionist movements began and eventually succeeded.
  • Like the last movement of Beethoven’s Ninth Symphony—which has become practically the alternate national anthem of Japan—those principles are not based upon white supremacy, but rather on a universal idea of common humanity which is our only hope for living together on earth.
  • The western intellectual tradition is not his only target within modern life; he feels the same way about capitalism, which in his scheme has been inextricably bound together with racism since the early modern period.
  • “To love capitalism,” he says, “is to end up loving racism. To love racism is to end up loving capitalism.” He has not explained exactly what kind of economic system he would prefer, and his advocacy for reparations suggests that he would be satisfied simply to redistribute the wealth that capitalism has created.
  • Last but hardly least, Kendi rejects the political system of the United States and enlightenment ideas of democracy as well.
  • I am constantly amazed at how few people ever mention his response to a 2019 Politico poll about inequality. Here it is in full.
  • To fix the original sin of racism, Americans should pass an anti-racist amendment to the U.S. Constitution that enshrines two guiding anti-racist principals: Racial inequity is evidence of racist policy and the different racial groups are equals. The amendment would make unconstitutional racial inequity over a certain threshold, as well as racist ideas by public officials (with “racist ideas” and “public official”
  • The DOA would be responsible for preclearing all local, state and federal public policies to ensure they won’t yield racial inequity, monitor those policies, investigate private racist policies when racial inequity surfaces, and monitor public officials for expressions of racist ideas. The DOA would be empowered with disciplinary tools to wield over and against policymakers and public officials who do not voluntarily change their racist policy and ideas.
  • In other words, to undo the impact of racism as Kendi understands it, the United States needs a totalitarian government run by unaccountable “formally trained experts in racism”—that is, people like Ibram X. Kendi—who would exercise total power over all levels of government and private enterprise
  • Kendi evidently realizes that the American people acting through their elected representatives will never accept his antiracist program and equalize all rewards within our society, but he is so committed to that program that he wants to throw the American political system out and create a dictatorial body to implement it.
  • How did a man pushing all these ideas become so popular? The answer, I am sorry to say, is disarmingly simple. He is not an outlier in the intellectual history of the last half-century—quite the contrary.
  • The Enlightenment, in retrospect, made a bold claim that was bound to get itself into trouble sooner or later: that the application of reason and the scientific method to human problems could improve human life. That idea was initially so exciting and the results of its application for about two centuries were so spectacular that it attained a kind of intellectual hegemony, not only in Europe, but nearly all over the world.
  • As the last third of the twentieth century dawned, however, the political and intellectual regime it had created was running into new problems of its own. Science had allowed mankind to increase its population enormously, cure many diseases, and live a far more abundant life on a mass scale.
  • But it had also led to war on an undreamed-of scale, including the actual and potential use of nuclear weapons
  • As higher education expanded, the original ideas of the Enlightenment—the ones that had shaped the humanities—had lost their novelty and some of their ability to excite.
  • last but hardly least, the claimed superiority of reason over emotion had been pushed much too far. The world was bursting with emotions of many kinds that could no longer be kept in check by the claims of scientific rationality.
  • A huge new generation had grown up in abundance and security.
  • The Vietnam War, a great symbol of enlightenment gone tragically wrong, led not only to a rebellion against American military overreach but against the whole intellectual and political structure behind it.
  • The black studies movement on campuses that produced Molefi Kete Asante, who in turn gave us Ibram X. Kendi, was only one aspect of a vast intellectual rebellion
  • Some began to argue that the Enlightenment was simply a new means of maintaining male supremacy, and that women shared a reality that men could not understand. Just five years ago in her book Sex and Secularism, the distinguished historian Joan Wallach Scott wrote, “In fact, gender inequality was fundamental to the articulation of the separation of church and state that inaugurated Western modernity. . . .Euro-Atlantic modernity entailed a new order of women’s subordination” (emphasis in original). Gay and gender activists increasingly denied that any patterns of sexual behavior could be defined as normal or natural, or even that biology had any direct connection to gender. The average graduate of elite institutions, I believe, has come to regard all those changes as progress, which is why the major media and many large corporations endorse them.
  • Fundamentalist religion, apparently nearly extinct in the mid-twentieth century, has staged an impressive comeback in recent decades, not only in the Islamic world but in the United States and in Israel
  • Science has become bureaucratized, corrupted by capitalism, and often self-interested, and has therefore lost a good deal of the citizenry’s confidence.
  • One aspect of the Enlightenment—Adam Smith’s idea of free markets—has taken over too much of our lives.
  • in the academy, postmodernism promoted the idea that truth itself is an illusion and that every person has the right to her own morality.
  • The American academy lost its commitment to Enlightenment values decades ago, and journalism has now followed in its wake.
  • Another aspect of the controversy hasn’t gotten enough attention either. Kendi is a prodigious fundraiser, and that made him a real catch for Boston University.
  • No matter what happens to Ibram X. Kendi now, he is not an anomaly in today’s intellectual world. His ideas are quite typical, and others will make brilliant careers out of them as well
  • We desperately need thinkers of all ages to keep the ideas of the Enlightenment alive, and we need some alternative institutions of higher learning to cultivate them once again. But they will not become mainstream any time soon. The last time that such ideas fell off the radar—at the end of the Roman Empire—it took about one thousand years for their renaissance to begin
  • We do not as individuals have to give into these new ideas, but it does no good to deny their impact. For the time being, they are here to stay.
Javier E

Heather Cox Richardson Wants You to Study History - The New York Times - 0 views

  • “Anybody who studies history learns two things: They learn to do research and they learn to write,” Richardson said. “The reason that matters now is that most people who are in college now are going to end up switching jobs a number of times in their careers.”
  • What history will give you is the ability to pivot into the different ideas, the different fields, the different careers as they arise.”
  • A historian will also know how to metabolize confounding situations, distill them to their essence and communicate that information to others
  • ...1 more annotation...
  • “It makes sense to recognize that these skills provide a tool kit for moving into the future in a way that I’m not entirely sure we always recognize.”
Javier E

Is it TikTok or global crisis? How the world lost its trust in scientists like me | Gio... - 0 views

  • At the height of the pandemic in October 2020 I’d had a similar experience. At the time, I was president of the Accademia dei Lincei, Italy’s most important scientific academy, and the second deadly wave of Covid was arriving. I argued in a long and reasoned article, highlighting the epidemiological situation in detail, that either drastic measures would need to be taken immediately or 500 deaths a day could be expected by mid-November (unfortunately the prediction was accurate). Immediately after publication, I received emails telling me in the strongest of terms that I had better not get involved in other people’s business.
  • These episodes made me experience first-hand a phenomenon that I was becoming increasingly familiar with: the vanishing of confidence in science. It seems almost a paradox: as our societies become more and more dependent on advanced technology based on scientific discoveries, people are becoming more and more suspicious of scientists.
  • How can we make sense of this? There are many factors to consider
  • ...10 more annotations...
  • the decreasing importance of the printed word, over the past decades, in favour of visual and hyper-concise forms of media, from TV to TikTok. Televised debates require fast reaction times, whereas scientists are used to studying issues at length and only talking about them after thinking.
  • a successful visual performance is not just about being correct but evoking sympathy in the viewer – about performing. This doesn’t always come easy to scientists.
  • Whereas once it was thought that the future would necessarily be better than the present, faith in progress – in the magnificent and progressive fortunes of humans – has been eroded
  • And just as science used to get the credit for progress, so now it receives the blame for decline (real or just perceived, it doesn’t matter). Science is sometimes felt to be a bad teacher who has led us in the wrong direction, and changing this perception is not easy.
  • In a nutshell, scientists are thought to be part of the elite and, therefore, not trustworthy. And the increasing interest by a fraction of scientists in patenting knowledge and making individual financial gains from discoveries reinforces this identification with the elite
  • a fundamental reality: science makes fair predictions that become reliable after the gradual formation of a scientific consensus. The construction of consensus is the process that makes the real difference – it involves the whole scientific community and that cannot be manipulated.
  • this lack of trust can have disastrous effects: if citizens do not trust science, we will not be able to fight global warming, infectious diseases, poverty and hunger, and the depletion of the planet’s natural resources.
  • A great coordinated effort is needed, and this will only be possible if there is a full understanding of the dramatic nature of the problem
  • A part of the human and financial resources devoted to the advancement of science must be used to discuss with citizens, through education and media and outreach programmes, what science really is: the most reliable and honest tool for understanding the world and predicting the future.
  • It is also important that we scientists talk about not just our successes, but our mistakes, doubts and hesitations. Often there is no trace, in the public scientific discourse, of the toil of the scientific process and the doubts that accompany it.
Javier E

AI is about to completely change how you use computers | Bill Gates - 0 views

  • Health care
  • before the sophisticated agents I’m describing become a reality, we need to confront a number of questions about the technology and how we’ll use it.
  • Today, AI’s main role in healthcare is to help with administrative tasks. Abridge, Nuance DAX, and Nabla Copilot, for example, can capture audio during an appointment and then write up notes for the doctor to review.
  • ...38 more annotations...
  • agents will open up many more learning opportunities.
  • Already, AI can help you pick out a new TV and recommend movies, books, shows, and podcasts. Likewise, a company I’ve invested in, recently launched Pix, which lets you ask questions (“Which Robert Redford movies would I like and where can I watch them?”) and then makes recommendations based on what you’ve liked in the past
  • Productivity
  • copilots can do a lot—such as turn a written document into a slide deck, answer questions about a spreadsheet using natural language, and summarize email threads while representing each person’s point of view.
  • I don’t think any single company will dominate the agents business; there will be many different AI engines available.
  • Helping patients and healthcare workers will be especially beneficial for people in poor countries, where many never get to see a doctor at all.
  • To create a new app or service, you won’t need to know how to write code or do graphic design. You’ll just tell your agent what you want. It will be able to write the code, design the look and feel of the app, create a logo, and publish the app to an online store
  • Agents will do even more. Having one will be like having a person dedicated to helping you with various tasks and doing them independently if you want. If you have an idea for a business, an agent will help you write up a business plan, create a presentation for it, and even generate images of what your product might look like
  • For decades, I’ve been excited about all the ways that software would make teachers’ jobs easier and help students learn. It won’t replace teachers, but it will supplement their work—personalizing the work for students and liberating teachers from paperwork and other tasks so they can spend more time on the most important parts of the job.
  • Mental health care is another example of a service that agents will make available to virtually everyone. Today, weekly therapy sessions seem like a luxury. But there is a lot of unmet need, and many people who could benefit from therapy don’t have access to it.
  • Entertainment and shopping
  • The real shift will come when agents can help patients do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment.
  • They’ll replace word processors, spreadsheets, and other productivity apps.
  • Education
  • For example, few families can pay for a tutor who works one-on-one with a student to supplement their classroom work. If agents can capture what makes a tutor effective, they’ll unlock this supplemental instruction for everyone who wants it. If a tutoring agent knows that a kid likes Minecraft and Taylor Swift, it will use Minecraft to teach them about calculating the volume and area of shapes, and Taylor’s lyrics to teach them about storytelling and rhyme schemes. The experience will be far richer—with graphics and sound, for example—and more personalized than today’s text-based tutors.
  • your agent will be able to help you in the same way that personal assistants support executives today. If your friend just had surgery, your agent will offer to send flowers and be able to order them for you. If you tell it you’d like to catch up with your old college roommate, it will work with their agent to find a time to get together, and just before you arrive, it will remind you that their oldest child just started college at the local university.
  • To see the dramatic change that agents will bring, let’s compare them to the AI tools available today. Most of these are bots. They’re limited to one app and generally only step in when you write a particular word or ask for help. Because they don’t remember how you use them from one time to the next, they don’t get better or learn any of your preferences.
  • The current state of the art is Khanmigo, a text-based bot created by Khan Academy. It can tutor students in math, science, and the humanities—for example, it can explain the quadratic formula and create math problems to practice on. It can also help teachers do things like write lesson plans.
  • Businesses that are separate today—search advertising, social networking with advertising, shopping, productivity software—will become one business.
  • other issues won’t be decided by companies and governments. For example, agents could affect how we interact with friends and family. Today, you can show someone that you care about them by remembering details about their life—say, their birthday. But when they know your agent likely reminded you about it and took care of sending flowers, will it be as meaningful for them?
  • In the computing industry, we talk about platforms—the technologies that apps and services are built on. Android, iOS, and Windows are all platforms. Agents will be the next platform.
  • A shock wave in the tech industry
  • Agents won’t simply make recommendations; they’ll help you act on them. If you want to buy a camera, you’ll have your agent read all the reviews for you, summarize them, make a recommendation, and place an order for it once you’ve made a decision.
  • Agents will affect how we use software as well as how it’s written. They’ll replace search sites because they’ll be better at finding information and summarizing it for you
  • they’ll be dramatically better. You’ll be able to have nuanced conversations with them. They will be much more personalized, and they won’t be limited to relatively simple tasks like writing a letter.
  • Companies will be able to make agents available for their employees to consult directly and be part of every meeting so they can answer questions.
  • AI agents that are well trained in mental health will make therapy much more affordable and easier to get. Wysa and Youper are two of the early chatbots here. But agents will go much deeper. If you choose to share enough information with a mental health agent, it will understand your life history and your relationships. It’ll be available when you need it, and it will never get impatient. It could even, with your permission, monitor your physical responses to therapy through your smart watch—like if your heart starts to race when you’re talking about a problem with your boss—and suggest when you should see a human therapist.
  • If the number of companies that have started working on AI just this year is any indication, there will be an exceptional amount of competition, which will make agents very inexpensive.
  • Agents are smarter. They’re proactive—capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. Based on this information, they offer to provide what they think you need, although you will always make the final decisions. (A toy code sketch of this memory-and-suggestion loop appears after this list.)
  • Agents are not only going to change how everyone interacts with computers. They’re also going to upend the software industry, bringing about the biggest revolution in computing since we went from typing commands to tapping on icons.
  • In the distant future, agents may even force humans to face profound questions about purpose. Imagine that agents become so good that everyone can have a high quality of life without working nearly as much. In a future like that, what would people do with their time? Would anyone still want to get an education when an agent has all the answers? Can you have a safe and thriving society when most people have a lot of free time on their hands?
  • The ramifications for the software business and for society will be profound.
  • In the next five years, this will change completely. You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant powered by artificial intelligence that’s far beyond today’s technology.
  • You’ll also be able to get news and entertainment that’s been tailored to your interests. CurioAI, which creates a custom podcast on any subject you ask about, is a glimpse of what’s coming.
  • An agent will be able to help you with all your activities if you want it to. With permission to follow your online interactions and real-world locations, it will develop a powerful understanding of the people, places, and activities you engage in. It will get your personal and work relationships, hobbies, preferences, and schedule. You’ll choose how and when it steps in to help with something or ask you to make a decision.
  • even the best sites have an incomplete understanding of your work, personal life, interests, and relationships and a limited ability to use this information to do things for you. That’s the kind of thing that is only possible today with another human being, like a close friend or personal assistant.
  • The most exciting impact of AI agents is the way they will democratize services that today are too expensive for most people
  • They’ll have an especially big influence in four areas: health care, education, productivity, and entertainment and shopping.
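A toy illustration of the bot-versus-agent distinction drawn in the items above (stateless, single-app bots versus agents that remember your activities, spot patterns, and act proactively). This sketch is not from Gates’s post and uses no real assistant API; the class names and the simple frequency-based “pattern” rule are invented purely to show, in a few lines of Python, why keeping a memory of interactions is what lets an agent personalize answers and volunteer suggestions while a stateless bot cannot.

```python
from collections import Counter
from datetime import datetime


class StatelessBot:
    """Answers each request in isolation; nothing is remembered between calls."""

    def respond(self, request):
        return f"Handling '{request}' (no context, no memory)."


class PersonalAgent:
    """Toy 'agent': keeps a memory of past activity and proactively offers help."""

    def __init__(self):
        self.memory = []  # each entry records what was asked and when

    def respond(self, request):
        # Remember the interaction so later answers can use it as context.
        self.memory.append({"request": request, "time": datetime.now().isoformat()})
        return f"Handling '{request}' with {len(self.memory) - 1} past interactions as context."

    def proactive_suggestion(self):
        # Crude stand-in for "recognizing intent and patterns": if one request
        # dominates the history, offer to act on it before being asked.
        topics = Counter(entry["request"] for entry in self.memory)
        if not topics:
            return None
        topic, count = topics.most_common(1)[0]
        if count >= 3:
            return f"You've asked to '{topic}' {count} times; want me to make it a recurring task?"
        return None


if __name__ == "__main__":
    agent = PersonalAgent()
    for request in ["book flights", "book flights", "summarize email", "book flights"]:
        print(agent.respond(request))
    print(agent.proactive_suggestion())
```

A production agent would swap the frequency rule for a learned model and connect to calendars, email, and shopping services, but the structural point is the same: memory plus the ability to act across services is what separates the agents described here from today’s single-app bots.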
Javier E

When Milton Friedman Ran the Show - The Atlantic - 0 views

  • Today, Friedman might seem to belong to a bygone world. The Trumpian wing of the Republican Party focuses on guns, gender, and God—a stark contrast with Friedman’s free-market individualism. Its hostility to intellectuals and scientific authority is a far cry from his grounding within academic economics.
  • The analysts associated with the Claremont Institute, the Edmund Burke Foundation, and the National Conservatism Conference (such as Michael Anton, Yoram Hazony, and Patrick Deneen) espouse a vision of society focused on preserving communal order that seems very different from anything Friedman, a self-defined liberal in the style of John Stuart Mill, described in his work.
  • Jennifer Burns, a Stanford historian, sets out to make the case in her intriguing biography Milton Friedman: The Last Conservative that Friedman’s legacy cannot be shaken so easily. As she points out, some of his ideas—the volunteer army, school choice—have been adopted as policy; others, such as a universal basic income, have supporters across the political spectrum.
  • ...28 more annotations...
  • Friedman’s thought, she argues, is more complex and subtle than has been understood: He raised pressing questions about the market, individualism, and the role of the state that will be with us for as long as capitalism endures.
  • Just as important, his time at Chicago taught Friedman about the intertwining of political, intellectual, and personal loyalties. He became a regular in an informal group of graduate students and junior faculty trying to consolidate the department as a center of free-market thought
  • by the 1930s, the leading figures at the University of Chicago were deeply committed to what had become known as price theory, which analyzed economic behavior in terms of the incentives and information reflected in prices. The economists who left their mark on Friedman sought to create predictive models of economic decision making, and they were politically invested in the ideal of an unencumbered marketplace.
  • Friedman was also shaped by older traditions of economic thought, in particular the vision of political economy advanced by thinkers such as Adam Smith and Alfred Marshall. For them, as for him, economics was not a narrow social science, concerned with increasing productivity and efficiency. It was closely linked to a broader set of political ideas and values, and it necessarily dealt with basic questions of justice, freedom, and the best way to organize society.
  • His libertarian ethos helped seed the far more openly hierarchical social and political conservatism that fuels much of our present-day political dysfunction.
  • But his fundamental commitments were consistent. In his early work on consumption habits, Friedman sought to puncture the arrogance of the postwar Keynesian economists, who claimed to be able to manipulate the economy from above, using taxes and spending to turn investment, consumption, and demand on and off like so many spigots
  • Instead, he believed that consumption patterns were dependent on local conditions and on lifetime expectations of income. The federal government, he argued, could do much less to affect economic demand—and hence to fight recessions—than the Keynesian consensus suggested.
  • In 1946, Friedman was hired by the University of Chicago, where he shut down efforts to recruit economists who didn’t subscribe to free-market views.
  • He was also legendary for his brutal classroom culture. One departmental memo, trying to rectify the situation, went so far as to remind faculty to please not treat a university student “like a dog.” What had started as a freewheeling, rebellious culture among the economists in Room Seven wound up as doctrinal rigidity.
  • Evidence leads her to argue more pointedly that Rose (credited only with providing “assistance”) essentially co-wrote Capitalism and Freedom (1962).
  • Burns implicitly exposes some of the limitations of Friedman’s focus on the economic benefits of innate individual talent. He had more than nature to thank for producing associates of such high caliber, ready to benefit him in his career. Culture and institutions clearly played a large role, and sexual discrimination during the 1930s, ’40s, and ’50s ensured that professional paths were anything but fair.
  • The state, he acknowledged, would have to take some responsibility for managing economic life—and thus economists would be thrust into a public role. The question was what they would do with this new prominence.
  • Almost as soon as the Second World War ended, Friedman began to stake out a distinctive rhetorical position, arguing that the policy goals of the welfare state could be better accomplished by the free market
  • in Capitalism and Freedom, Friedman made the case that the real problem lay in the methods liberals employed, which involved interfering with the competitive price mechanism of the free market. Liberals weren’t morally wrong, just foolish, despite the vaunted expertise of their economic advisers.
  • In a rhetorical move that seemed designed to portray liberal political leaders as incompetent, he emphasized efficiency and the importance of the price system as a tool for social policy
  • For Friedman, the competitive market was the realm of innovation, creativity, and freedom. In constructing his arguments, he envisioned workers and consumers as individuals in a position to exert decisive economic power, always able to seek a higher wage, a better price, an improved product
  • The limits of this notion emerged starkly in his contorted attempts to apply economic reasoning to the problem of racism, which he described as merely a matter of taste that should be free from the “coercive power” of the law:
  • Although he personally rejected racial prejudice, he considered the question of whether Black children could attend good schools—and whether, given the “taste” for prejudice in the South, Black adults could find remunerative jobs—less important than the “right” of white southerners to make economic decisions that reflected their individual preferences. In fact, Friedman compared fair-employment laws to the Nuremberg Race Laws of Nazi Germany. Not only was this tone-deaf in the context of the surging 1960s civil-rights movement; it was a sign of how restricted his idea of freedom really was.
  • As the conservative movement started to make electoral gains in the ’70s, Friedman emerged as a full-throated challenger of liberal goals, not just methods
  • He campaigned for “tax limitation” amendments that would have restricted the ability of state governments to tax or spend
  • In a famous New York Times Magazine essay, he suggested that corporations had no “social responsibility” at all; they were accountable only for increasing their own profit
  • Friedman’s free-market certainties went on to win over neoliberals. By the time he and Rose published their 1998 memoir, Two Lucky People, their ideas, once on the margin of society, had become the reigning consensus.
  • That consensus is now in surprising disarray in the Republican Party that was once its stronghold. The startling rise in economic inequality and the continued erosion of middle-class living standards have called into question the idea that downsizing the welfare state, ending regulations, and expanding the reach of the market really do lead to greater economic well-being—let alone freedom.
  • Friedman—despite being caricatured as a key intellectual architect of anti-government politics—had actually internalized an underlying assumption of the New Deal era: that government policy should be the key focus of political action. Using market theory to reshape state and federal policy was a constant theme of his career.
  • Still, Friedman—and the libertarian economic tradition he advanced—bears more responsibility for the rise of a far right in the United States than Burns’s biography would suggest. His strategy of goading the left, fully on display in the various provocations of Free to Choose and even Capitalism and Freedom, has been a staple for conservatives ever since
  • He zealously promoted the kind of relentless individualism that undergirds parts of today’s right, most notably the gun lobby. The hostile spirit that he brought to civil-rights laws surfaces now in the idea that reliance on court decisions and legislation to address racial hierarchy itself hems in freedom
  • The opposition to centralized government that he championed informs a political culture that venerates local authority and private power, even when they are oppressive
  • his insistence (to quote Capitalism and Freedom) that “any … use of government is fraught with danger” has nurtured a deep pessimism that democratic politics can offer any route to redressing social and economic inequalities.
Javier E

Is Argentina the First A.I. Election? - The New York Times - 0 views

  • Argentina’s election has quickly become a testing ground for A.I. in campaigns, with the two candidates and their supporters employing the technology to doctor existing images and videos and create others from scratch.
  • A.I. has made candidates say things they did not, and put them in famous movies and memes. It has created campaign posters, and triggered debates over whether real videos are actually real.
  • A.I.’s prominent role in Argentina’s campaign and the political debate it has set off underscore the technology’s growing prevalence and show that, with its expanding power and falling cost, it is now likely to be a factor in many democratic elections around the globe.
  • ...8 more annotations...
  • Experts compare the moment to the early days of social media, a technology offering tantalizing new tools for politics — and unforeseen threats.
  • For years, those fears had largely been speculative because the technology to produce such fakes was too complicated, expensive and unsophisticated.
  • His spokesman later stressed that the post was in jest and clearly labeled A.I.-generated. His campaign said in a statement that its use of A.I. is to entertain and make political points, not deceive.
  • Researchers have long worried about the impact of A.I. on elections. The technology can deceive and confuse voters, casting doubt over what is real, adding to the disinformation that can be spread by social networks.
  • Much of the content has been clearly fake. But a few creations have toed the line of disinformation. The Massa campaign produced one “deepfake” video in which Mr. Milei explains how a market for human organs would work, something he has said philosophically fits in with his libertarian views.
  • So far, the A.I.-generated content shared by the campaigns in Argentina has either been labeled A.I. generated or is so clearly fabricated that it is unlikely it would deceive even the most credulous voters. Instead, the technology has supercharged the ability to create viral content that previously would have taken teams of graphic designers days or weeks to complete.
  • To do so, campaign engineers and artists fed photos of Argentina’s various political players into an open-source software called Stable Diffusion to train their own A.I. system so that it could create fake images of those real people. They can now quickly produce an image or video of more than a dozen top political players in Argentina doing almost anything they ask. (A hedged code sketch of this kind of text-to-image workflow appears after this list.)
  • For Halloween, the Massa campaign told its A.I. to create a series of cartoonish images of Mr. Milei and his allies as zombies. The campaign also used A.I. to create a dramatic movie trailer, featuring Buenos Aires, Argentina’s capital, burning, Mr. Milei as an evil villain in a straitjacket and Mr. Massa as the hero who will save the country.
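The item flagged above describes a concrete open-source workflow: photos of real politicians were used to further train Stable Diffusion, after which new images could be generated from short text prompts. As a hedged sketch only (not the Massa campaign’s actual pipeline, which the article does not detail), the snippet below uses the Hugging Face diffusers library to load a public Stable Diffusion checkpoint and render one image from a prompt. The personalization step itself (commonly done with DreamBooth- or LoRA-style fine-tuning) is out of scope here, the fine-tuned model path in the comment is hypothetical, and the example prompt deliberately avoids depicting any real person.

```python
# Minimal text-to-image sketch with Hugging Face diffusers.
# Assumes: a CUDA-capable GPU and the packages torch, diffusers, transformers, accelerate.
import torch
from diffusers import StableDiffusionPipeline

# Public base checkpoint; a campaign would substitute its own fine-tuned model here,
# e.g. a hypothetical local path like "./campaign-finetuned-sd".
model_id = "runwayml/stable-diffusion-v1-5"

pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # switch to "cpu" (and drop float16) if no GPU is available

prompt = "a cartoonish zombie shuffling through a burning city at night, dramatic movie-poster style"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("generated.png")
```

Once a model has been adapted to a fixed cast of faces, each new image is just another prompt, which is why this kind of content can be produced in minutes rather than the days of graphic-design work the article mentions.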
Javier E

How Nations Are Losing a Global Race to Tackle A.I.'s Harms - The New York Times - 0 views

  • When European Union leaders introduced a 125-page draft law to regulate artificial intelligence in April 2021, they hailed it as a global model for handling the technology.
  • E.U. lawmakers had gotten input from thousands of experts for three years about A.I., when the topic was not even on the table in other countries. The result was a “landmark” policy that was “future proof,” declared Margrethe Vestager, the head of digital policy for the 27-nation bloc.
  • Then came ChatGPT.
  • ...45 more annotations...
  • The eerily humanlike chatbot, which went viral last year by generating its own answers to prompts, blindsided E.U. policymakers. The type of A.I. that powered ChatGPT was not mentioned in the draft law and was not a major focus of discussions about the policy. Lawmakers and their aides peppered one another with calls and texts to address the gap, as tech executives warned that overly aggressive regulations could put Europe at an economic disadvantage.
  • Even now, E.U. lawmakers are arguing over what to do, putting the law at risk. “We will always be lagging behind the speed of technology,” said Svenja Hahn, a member of the European Parliament who was involved in writing the A.I. law.
  • Lawmakers and regulators in Brussels, in Washington and elsewhere are losing a battle to regulate A.I. and are racing to catch up, as concerns grow that the powerful technology will automate away jobs, turbocharge the spread of disinformation and eventually develop its own kind of intelligence.
  • Nations have moved swiftly to tackle A.I.’s potential perils, but European officials have been caught off guard by the technology’s evolution, while U.S. lawmakers openly concede that they barely understand how it works.
  • The absence of rules has left a vacuum. Google, Meta, Microsoft and OpenAI, which makes ChatGPT, have been left to police themselves as they race to create and profit from advanced A.I. systems
  • At the root of the fragmented actions is a fundamental mismatch. A.I. systems are advancing so rapidly and unpredictably that lawmakers and regulators can’t keep pace
  • That gap has been compounded by an A.I. knowledge deficit in governments, labyrinthine bureaucracies and fears that too many rules may inadvertently limit the technology’s benefits.
  • Even in Europe, perhaps the world’s most aggressive tech regulator, A.I. has befuddled policymakers.
  • The European Union has plowed ahead with its new law, the A.I. Act, despite disputes over how to handle the makers of the latest A.I. systems.
  • The result has been a sprawl of responses. President Biden issued an executive order in October about A.I.’s national security effects as lawmakers debate what, if any, measures to pass. Japan is drafting nonbinding guidelines for the technology, while China has imposed restrictions on certain types of A.I. Britain has said existing laws are adequate for regulating the technology. Saudi Arabia and the United Arab Emirates are pouring government money into A.I. research.
  • A final agreement, expected as soon as Wednesday, could restrict certain risky uses of the technology and create transparency requirements about how the underlying systems work. But even if it passes, it is not expected to take effect for at least 18 months — a lifetime in A.I. development — and how it will be enforced is unclear.
  • Many companies, preferring nonbinding codes of conduct that provide latitude to speed up development, are lobbying to soften proposed regulations and pitting governments against one another.
  • “No one, not even the creators of these systems, know what they will be able to do,” said Matt Clifford, an adviser to Prime Minister Rishi Sunak of Britain, who presided over an A.I. Safety Summit last month with 28 countries. “The urgency comes from there being a real question of whether governments are equipped to deal with and mitigate the risks.”
  • Europe takes the lead
  • In mid-2018, 52 academics, computer scientists and lawyers met at the Crowne Plaza hotel in Brussels to discuss artificial intelligence. E.U. officials had selected them to provide advice about the technology, which was drawing attention for powering driverless cars and facial recognition systems.
  • as they discussed A.I.’s possible effects — including the threat of facial recognition technology to people’s privacy — they recognized “there were all these legal gaps, and what happens if people don’t follow those guidelines?”
  • In 2019, the group published a 52-page report with 33 recommendations, including more oversight of A.I. tools that could harm individuals and society.
  • By October, the governments of France, Germany and Italy, the three largest E.U. economies, had come out against strict regulation of general purpose A.I. models for fear of hindering their domestic tech start-ups. Others in the European Parliament said the law would be toothless without addressing the technology. Divisions over the use of facial recognition technology also persisted.
  • So when the A.I. Act was unveiled in 2021, it concentrated on “high risk” uses of the technology, including in law enforcement, school admissions and hiring. It largely avoided regulating the A.I. models that powered them unless listed as dangerous
  • “They sent me a draft, and I sent them back 20 pages of comments,” said Stuart Russell, a computer science professor at the University of California, Berkeley, who advised the European Commission. “Anything not on their list of high-risk applications would not count, and the list excluded ChatGPT and most A.I. systems.”
  • E.U. leaders were undeterred. “Europe may not have been the leader in the last wave of digitalization, but it has it all to lead the next one,” Ms. Vestager said when she introduced the policy at a news conference in Brussels.
  • In 2020, European policymakers decided that the best approach was to focus on how A.I. was used and not the underlying technology. A.I. was not inherently good or bad, they said — it depended on how it was applied.
  • Nineteen months later, ChatGPT arrived.
  • The Washington game
  • Lacking tech expertise, lawmakers are increasingly relying on Anthropic, Microsoft, OpenAI, Google and other A.I. makers to explain how it works and to help create rules.
  • “We’re not experts,” said Representative Ted Lieu, Democrat of California, who hosted Sam Altman, OpenAI’s chief executive, and more than 50 lawmakers at a dinner in Washington in May. “It’s important to be humble.”
  • Tech companies have seized their advantage. In the first half of the year, many of Microsoft’s and Google’s combined 169 lobbyists met with lawmakers and the White House to discuss A.I. legislation, according to lobbying disclosures. OpenAI registered its first three lobbyists and a tech lobbying group unveiled a $25 million campaign to promote A.I.’s benefits this year.
  • In that same period, Mr. Altman met with more than 100 members of Congress, including former Speaker Kevin McCarthy, Republican of California, and the Senate leader, Chuck Schumer, Democrat of New York. After testifying in Congress in May, Mr. Altman embarked on a 17-city global tour, meeting world leaders including President Emmanuel Macron of France, Mr. Sunak and Prime Minister Narendra Modi of India.
  • The White House announced that the four companies had agreed to voluntary commitments on A.I. safety, including testing their systems through third-party overseers — which most of the companies were already doing.
  • “It was brilliant,” Mr. Smith said. “Instead of people in government coming up with ideas that might have been impractical, they said, ‘Show us what you think you can do and we’ll push you to do more.’”
  • In a statement, Ms. Raimondo said the federal government would keep working with companies so “America continues to lead the world in responsible A.I. innovation.”
  • Over the summer, the Federal Trade Commission opened an investigation into OpenAI and how it handles user data. Lawmakers continued welcoming tech executives.
  • In September, Mr. Schumer was the host of Elon Musk, Mark Zuckerberg of Meta, Sundar Pichai of Google, Satya Nadella of Microsoft and Mr. Altman at a closed-door meeting with lawmakers in Washington to discuss A.I. rules. Mr. Musk warned of A.I.’s “civilizational” risks, while Mr. Altman proclaimed that A.I. could solve global problems such as poverty.
  • A.I. companies are playing governments off one another. In Europe, industry groups have warned that regulations could put the European Union behind the United States. In Washington, tech companies have cautioned that China might pull ahead.
  • In May, Ms. Vestager, Ms. Raimondo and Antony J. Blinken, the U.S. secretary of state, met in Lulea, Sweden, to discuss cooperating on digital policy.
  • “China is way better at this stuff than you imagine,” Mr. Clark of Anthropic told members of Congress in January.
  • After two days of talks, Ms. Vestager announced that Europe and the United States would release a shared code of conduct for safeguarding A.I. “within weeks.” She messaged colleagues in Brussels asking them to share her social media post about the pact, which she called a “huge step in a race we can’t afford to lose.”
  • Months later, no shared code of conduct had appeared. The United States instead announced A.I. guidelines of its own.
  • Little progress has been made internationally on A.I. With countries mired in economic competition and geopolitical distrust, many are setting their own rules for the borderless technology.
  • Yet “weak regulation in another country will affect you,” said Rajeev Chandrasekhar, India’s technology minister, noting that a lack of rules around American social media companies led to a wave of global disinformation.
  • “Most of the countries impacted by those technologies were never at the table when policies were set,” he said. “A.I. will be several factors more difficult to manage.”
  • Even among allies, the issue has been divisive. At the meeting in Sweden between E.U. and U.S. officials, Mr. Blinken criticized Europe for moving forward with A.I. regulations that could harm American companies, one attendee said. Thierry Breton, a European commissioner, shot back that the United States could not dictate European policy, the person said.
  • Some policymakers said they hoped for progress at an A.I. safety summit that Britain held last month at Bletchley Park, where the mathematician Alan Turing helped crack the Enigma code used by the Nazis. The gathering featured Vice President Kamala Harris; Wu Zhaohui, China’s vice minister of science and technology; Mr. Musk; and others.
  • The upshot was a 12-paragraph statement describing A.I.’s “transformative” potential and “catastrophic” risk of misuse. Attendees agreed to meet again next year.
  • The talks, in the end, produced a deal to keep talking.
Javier E

Opinion | We Should Have Known So Much About Covid From the Start - The New York Times - 0 views

  • I spoke to Mina about what seeing Covid as a textbook virus tells us about the nature of the pandemic off-ramp — and about everything else we should’ve known about the disease from the outset.
  • you can get exposed or you can get vaccinated. But either way, we have to keep building our immune system up, as babies do. That takes years to do. And I think it’s going to be a few more years at least.
  • And in the meantime? We’ve seen a dramatic reduction in mortality. We’ve even seen, I’d say, a dramatic decline in rates of serious long Covid per infection.
  • ...40 more annotations...
  • But I do think it’s going to be a while before this virus becomes completely normal. And I’ve never been convinced that this current generation of elderly people will ever get to a place where it is completely normal. If you’re 65 or 75 or even older — it’s really hard to teach an immune system new tricks if you’re that age
  • And so while we may see excess mortality in the elderly decline somewhat, I don’t think we’ll see it ever disappear for this generation who was already old when the pandemic hit. Many will never develop that robust, long-term immunological memory we would want to see — and which happens naturally to someone who’s been exposed hundreds of times since they were a little baby.
  • There’s a similar story with measles. There are no routine later-life sequelae, like shingles, for measles. But what we do see is that, in measles outbreaks today, there are some people who were vaccinated who get it anyway. Maybe 5 to 15 percent of cases are not immunologically naïve people, but vaccinated people.
  • Is it really the case that, as babies, we are fighting off those viruses hundreds of times? The short answer is yeah. We start seeing viruses when we’re 2 months old, when we’re a month old. And a lot of these viruses we’ve seen literally tens, if not hundreds of times for some people by the time we’re adults. People tend to think that immunity is binary — you’re either immune or you’re not. That couldn’t be farther from the truth. It’s a gradient, and your protection gets stronger the more times you see a virus.
  • We used to think we just had this spectacular immune response when we first encountered the virus at, say, age 6, and that the immune response lasted until we were 70. But actually what we were seeing was the effect of an immune system being retrained every time it came into contact with the virus after the initial infection — at 6, and 7, and 8, and so on. Every time your friend got chickenpox, or your neighbor, you got a massive boost. You were re-upping your immune response and diversifying your immunological tools — potentially multiple times a year, a kind of natural booster.
  • But now, in America, kids get chickenpox vaccines. So you don’t have kids in America getting chickenpox today, and never will. But that means that older Americans, who did get it as kids, are not being exposed again — certainly not multiple times each year. And it turns out that, in the absence of routine re-exposures, that first exposure alone isn’t nearly as good at driving lifelong immunity and warding off shingles until your immune system begins to fall apart in old age — it can last until you’re in your 30s, for example, but not until your 70s.
  • With Covid, when it infects you, it can land in your upper respiratory tract and it just starts replicating right there. Immediately, it’s present and replicating in your lungs and in your nose. And that alone elicits enough of an immune response to cause us to feel really crappy and even cause us to feel disease.
  • But we could have just set the narrative better at the beginning: Look, you might get sick again, but your risk of landing in the hospital is going to be really low, and if you get a booster, you might still get sick again, but your risk of landing in the hospital is going to be even lower. That’s something I think humans can deal with, and I think the public could have understood it.
  • But it’s why we don’t see the severe disease as much, with a second exposure or an exposure after vaccination: For most people, it’s not getting into the heart and the liver and stuff nearly as easily.
  • But it doesn’t have to. It’s still causing symptomatic disease. And maybe mucosal vaccines could stop this, but without them we’re likely to continue seeing infections and even symptomatic infections.
  • through most of 2020 and into 2021, though. Back then, I think the conventional wisdom was that a single exposure — through infection or vaccination — would be the end of the pandemic for you. If this is basic virology and immunology, how did we get that so wrong?
  • The short answer is that epidemiologists are not immunologists and immunologists are not virologists and virologists are not epidemiologists. And, in general, physicians don’t know anything about the details.
  • But this failure had some pretty concrete impacts. When reinfections first began popping up, people were surprised, they were scared, and then, to some degree, they lost trust in vaccines. And the people they were turning to for guidance — not only did they not warn us about that, they were slow to acknowledge it, as well.
  • It had dramatic impacts and ripple effects that will last for years, limiting our ability to get populations properly vaccinated.
  • the worst thing we can do during a pandemic is set inappropriately high expectations. These vaccines are incredible, they’ve had an enormously positive impact on mortality, but they were never going to end the pandemic.
  • And now, there’s a huge number of people questioning, do these vaccines even do anything?
  • For babies born today, though, I really think they’re not going to view Covid as any different than other viruses. By the time they are 20, it will be like any other virus to them. Because their immune systems will have grown up with it.
  • Instead, we set society up for failure, since people feel like the government failed everyone, that biology failed us, and that this was a crazy virus that has broken all the rules of our immune system, when it’s just doing what we’ve always known it would do.
  • How do you wish we had messaged things differently? What would it have meant to communicate early and clearly that Covid was a textbook virus, as you say? I think the biggest thing would have been just to say, we understand the enemy.
  • To say that this is a textbook virus, it doesn’t mean that it’s not killing people. Objectively, it’s still killing more people than any other infectious disease
  • What it means is that we could’ve taken action based on what we knew, rather than waiting around to prove everything and publish papers in Nature and Science talking about things we already knew.
  • We could have prepared for November and December of 2020 and then for November and December of 2021. But everyone kept saying, we don’t know if it’s going to come back. We knew it was going to come back and it makes me want to cry to think about it. We did nothing and hundreds of thousands of people died. We didn’t prepare nursing homes because we all got to the summer of 2020 and we said, cross our fingers.
  • We knew how tests worked. We knew about serial testing and why it was important for a public health approach. We knew that vaccines could have really good impacts once they were around. And if you were looking through the correct lens, we even knew that they weren’t going to stop transmission.
  • We didn’t have to live in a world where we were flying blind. We could have lived in a world where we’re knowledgeable. But instead, we chose almost across the board to will ourselves into this state of fear and anxiety.
  • And that really started in the earliest days. Almost the first experience I had was a lot like that movie with Jennifer Lawrence, Don’t Look Up.
  • none of this was complicated. You just had to ask a simple question: what would happen if you took away all immunity from an adult? Well, once you control for no immunity, adults are going to get very, very sick.
  • Of course, by and large, babies didn’t get very sick from this disease. Babies are immunologically naïve, but they are also resilient. A virus can tear up a baby, but a baby can repair its tissue so fast. Adults don’t have that. It’s just like a baby getting a cut. They’ll heal really quick
  • An adult getting a cut — you go by age, and every decade of age that you are, it’s going to take exponentially longer for that wound to heal. Eventually you get to 80 or 90 and the wound can’t even heal. In the immunology world, this is called “tolerance.”
  • why are all these organ systems getting damaged when other viruses don’t seem to do that? It’s natural to think, it’s Covid — this is a weird disease. But it’s much more a story about immunity and how it develops than about the virus or the disease. None of our organ systems had any immune defenses around to help them out. And I think that the majority of post-acute sequelae and multi-organ complications and long Covid — they are not the result of the virus being a crazy different virus, but are a result of this virus replicating in an environment where there were such absent or exceedingly low defenses.
  • Is it the same whenever we encounter a virus for the first time? Think about travelers. Travelers get way more sick from a local disease than people who grew up with that virus. If you get malaria as a traveler, you’re much more likely to get really sick. You don’t see everyone in Nicaragua taking chloroquine every day. But you definitely see travelers taking it, because malaria can be deadly for adults.
  • What about, not severity, but post-acute complications — do we have long malaria? Do we have liver complications from dengue?
  • The really hard part of answering that question is there’s just not enough data on the frequency of long-term effects, because nothing like this has ever happened at such scale. It’s like everyone in Europe and North America suddenly traveled to a country where malaria was endemic.
  • Or think about H.I.V. It essentially kills your immune system, and once the immune barriers are down, other viruses that used to infect humans would get into tissues that we didn’t like them to get into. If there wasn’t such a clear signal of a loss of CD-4 T cells to explain it, people might still be scratching their heads and going, man, I wonder why all these patients are getting fungal infections. Well, there’s a virus there that’s depleting their immune system.
  • Covid is absolutely waking the world up to this — to the fact that there are really weird long-term sequelae to viruses when they infect organ systems that would normally be protected. And I think we’re going to find that more and more cancers are being attributed to viral infections.
  • It wasn’t that long ago that we first learned that most cases of cervical cancer were caused by H.P.V. — I think the 1980s. And now we have a vaccine for H.P.V. and rates of cervical cancer have fallen by two-thirds.
  • what about incidence? We’ve talked at a few points about how important it is to think about all of these questions in terms of the scale. What is the right scale for thinking about future long Covid, for instance, or other post-acute sequelae?
  • I think the absolute risk, per infection, is going down and down and down. That’s just true.
  • The U.K.’s Office for National Statistics shows a much lower risk of developing long Covid now, from reinfection, than from an initial infection earlier in the pandemic.
  • the worst is definitely behind us, which is a good thing, especially for people who worry that the problems will keep building and a lot of people — or even everyone — will get long Covid symptoms. I don’t think there’s a world where we’re looking at the babies of today dealing with long Covid at any meaningful scale.
  • a lot of the fear right now comes from the worst cases, and there’s a lot of worst cases. Even one of the people that I know well, I know in their mind they’re worried that they’ll never recover, but I think objectively they are recovering slowly. It might not be an eight month course. It might be a year and a half. But they will get better. Most of us will.
Javier E

ai-tech-summit - The Washington Post - 0 views

  • “I don’t know where optimism would spring from, but it is pretty barren ground,” Meredith Whittaker, president of the Signal Foundation, said at The Washington Post’s AI summit. “And the incentives are not aligned for the social good.”
  • “There will be some decision that’s made, rightly or wrongly, to deploy a very immature AI system that could then create dramatic risks of our soldiers on the battlefield,” he said. “I think we need to be thinking about what does it mean to actually have mature AI technology versus hype-driven AI technology.”
  • The launch of ChatGPT and other generative AI tools has ushered in rapid advances in artificial intelligence and has increased global angst around the impact the technology will have on society
  • ...4 more annotations...
  • “We should be very concerned,” Whittaker said. “We are outgunned in terms of lobbying power [from major tech companies] and in terms of the ability to put our weight on the decision-makers in Congress.”
  • “we shouldn’t just dismiss it” as a “toy.”
  • “I think that sentiment is dangerous, like just coming in and saying this is just a hype cycle,” she said. “They’re getting better at doing things like structured reasoning. We shouldn’t just dismiss that this is not going to be a danger.”
  • The executive branch is “concerned and they’re doing a lot regulatorily, but everyone admits the only real answer is legislative,” Schumer said of the administration.
Javier E

AI could change the 2024 elections. We need ground rules. - The Washington Post - 0 views

  • New York Mayor Eric Adams doesn’t speak Spanish. But it sure sounds like he does. He’s been using artificial intelligence software to send prerecorded calls about city events to residents in Spanish, Mandarin Chinese, Urdu and Yiddish. The voice in the messages mimics the mayor but was generated with AI software from a company called ElevenLabs.
  • Experts have warned for years that AI will change our democracy by distorting reality. That future is already here. AI is being used to fabricate voices, fundraising emails and “deepfake” images of events that never occurred.
  • I’m writing this to urge elected officials, candidates and their supporters to pledge not to use AI to deceive voters. I’m not suggesting a ban, but rather calling for politicians to commit to some common values while our democracy adjusts to a world with AI.
  • ...20 more annotations...
  • If we don’t draw some lines now, legions of citizens could be manipulated, disenfranchised or lose faith in the whole system — opening doors to foreign adversaries who want to do the same. AI might break us in 2024.
  • “The ability of AI to interfere with our elections, to spread misinformation that’s extremely believable is one of the things that’s preoccupying us,” Schumer said, after watching me so easily create a deepfake of him. “Lots of people in the Congress are examining this.”
  • Of course, fibbing politicians are nothing new, but examples keep multiplying of how AI supercharges misinformation in ways we haven’t seen before. Two examples: The presidential campaign of Florida Gov. Ron DeSantis (R) shared an AI-generated image of former president Donald Trump embracing Anthony S. Fauci. That hug never happened. In Chicago’s mayoral primary, someone used AI to clone the voice of candidate Paul Vallas in a fake news report, making it look like he approved of police brutality.
  • But what will happen when a shocking image or audio clip goes viral in a battleground state shortly before an election? What kind of chaos will ensue when someone uses a bot to send out individually tailored lies to millions of different voters?
  • A wide 85 percent of U.S. citizens said they were “very” or “somewhat” concerned about the spread of misleading AI video and audio, in an August survey by YouGov. And 78 percent were concerned about AI contributing to the spread of political propaganda.
  • We can’t put the genie back in the bottle. AI is already embedded in tech tool campaigns that all of us use every day. AI creates our Facebook feeds and picks what ads we see. AI built into our phone cameras brightens faces and smooths skin.
  • What’s more, there are many political uses for AI that are unobjectionable, and even empowering for candidates with fewer resources. Politicians can use AI to manage the grunt work of sorting through databases and responding to constituents. Republican presidential candidate Asa Hutchinson has an AI chatbot trained to answer questions like him. (I’m not sure politician bots are very helpful, but fine, give it a try.)
  • Clarke’s solution, included in a bill she introduced on political ads: Candidates should disclose when they use AI to create communications. You know the “I approve this message” notice? Now add, “I used AI to make this message.”
  • But labels aren’t enough. If AI disclosures become commonplace, we may become blind to them, like so much other fine print.
  • The bigger ask: We want candidates and their supporting parties and committees not to use AI to deceive us.
  • So what’s the difference between a dangerous deepfake and an AI facetune that makes an octogenarian candidate look a little less octogenarian?
  • “The core definition is showing a candidate doing or saying something they didn’t do or say,”
  • Sure, give Biden or Trump a facetune, or even show them shaking hands with Abraham Lincoln. But don’t use AI to show your competitor hugging an enemy or fake their voice commenting on current issues.
  • The pledge also includes not using AI to suppress voting, such as using an authoritative voice or image to tell people a polling place has been closed. That is already illegal in many states, but it’s still concerning how believable AI might make these efforts seem.
  • Don’t deepfake yourself. Making yourself or your favorite candidate appear more knowledgeable, experienced or culturally capable is also a form of deception.
  • (Pressed on the ethics of his use of AI, Adams just proved my point that we desperately need some ground rules. “These are part of the broader conversations that the philosophical people will have to sit down and figure out, ‘Is this ethically right or wrong?’ I’ve got one thing: I’ve got to run the city,” he said.)
  • The golden rule in my pledge — don’t use AI to be materially deceptive — is similar to the one in an AI regulation proposed by a bipartisan group of lawmakers
  • Such proposals have faced resistance in Washington on First Amendment grounds. The free speech of politicians is important. It’s not against the law for politicians to lie, whether they’re using AI or not. An effort to get the Federal Election Commission to count AI deepfakes as “fraudulent misrepresentation” under its existing authority has faced similar pushback.
  • But a pledge like the one I outline here isn’t a law restraining speech. It’s asking politicians to take a principled stand on their own use of AI
  • Schumer said he thinks my pledge is just a start of what’s needed. “Maybe most candidates will make that pledge. But the ones that won’t will drive us to a lower common denominator, and that’s true throughout AI,” he said. “If we don’t have government-imposed guardrails, the lowest common denominator will prevail.”
Javier E

The New AI Panic - The Atlantic - 0 views

  • export controls are now inflaming tensions between the United States and China. They have become the primary way for the U.S. to throttle China’s development of artificial intelligence: The department last year limited China’s access to the computer chips needed to power AI and is in discussions now to expand the controls. A semiconductor analyst told The New York Times that the strategy amounts to a kind of economic warfare.
  • If enacted, the limits could generate more friction with China while weakening the foundations of AI innovation in the U.S.
  • The same prediction capabilities that allow ChatGPT to write sentences might, in their next generation, be advanced enough to produce individualized disinformation, create recipes for novel biochemical weapons, or enable other unforeseen abuses that could threaten public safety.
  • ...22 more annotations...
  • Of particular concern to Commerce are so-called frontier models. The phrase, popularized in the Washington lexicon by some of the very companies that seek to build these models—Microsoft, Google, OpenAI, Anthropic—describes a kind of “advanced” artificial intelligence with flexible and wide-ranging uses that could also develop unexpected and dangerous capabilities. By their determination, frontier models do not exist yet. But an influential white paper published in July and co-authored by a consortium of researchers, including representatives from most of those tech firms, suggests that these models could result from the further development of large language models—the technology underpinning ChatGPT
  • The threats of frontier models are nebulous, tied to speculation about how new skill sets could suddenly “emerge” in AI programs.
  • Among the proposals the authors offer, in their 51-page document, to get ahead of this problem: creating some kind of licensing process that requires companies to gain approval before they can release, or perhaps even develop, frontier AI. “We think that it is important to begin taking practical steps to regulate frontier AI today,” the authors write.
  • Microsoft, Google, OpenAI, and Anthropic subsequently launched the Frontier Model Forum, an industry group for producing research and recommendations on “safe and responsible” frontier-model development.
  • Shortly after the paper’s publication, the White House used some of the language and framing in its voluntary AI commitments, a set of guidelines for leading AI firms that are intended to ensure the safe deployment of the technology without sacrificing its supposed benefit
  • AI models advance rapidly, he reasoned, which necessitates forward thinking. “I don’t know what the next generation of models will be capable of, but I’m really worried about a situation where decisions about what models are put out there in the world are just up to these private companies,” he said.
  • For the four private companies at the center of discussions about frontier models, though, this kind of regulation could prove advantageous.
  • Convincing regulators to control frontier models could restrict the ability of Meta and any other firms to continue publishing and developing their best AI models through open-source communities on the internet; if the technology must be regulated, better for it to happen on terms that favor the bottom line.
  • The obsession with frontier models has now collided with mounting panic about China, fully intertwining ideas for the models’ regulation with national-security concerns. Over the past few months, members of Commerce have met with experts to hash out what controlling frontier models could look like and whether it would be feasible to keep them out of reach of Beijing
  • That the white paper took hold in this way speaks to a precarious dynamic playing out in Washington. The tech industry has been readily asserting its power, and the AI panic has made policy makers uniquely receptive to their messaging,
  • “Parts of the administration are grasping onto whatever they can because they want to do something,” Weinstein told me.
  • The department’s previous chip-export controls “really set the stage for focusing on AI at the cutting edge”; now export controls on frontier models could be seen as a natural continuation. Weinstein, however, called it “a weak strategy”; other AI and tech-policy experts I spoke with sounded their own warnings as well.
  • The decision would represent an escalation against China, further destabilizing a fractured relationship.
  • Many Chinese AI researchers I’ve spoken with in the past year have expressed deep frustration and sadness over having their work—on things such as drug discovery and image generation—turned into collateral in the U.S.-China tech competition. Most told me that they see themselves as global citizens contributing to global technology advancement, not as assets of the state. Many still harbor dreams of working at American companies.
  • “If the export controls are broadly defined to include open-source, that would touch on a third-rail issue,” says Matt Sheehan, a Carnegie Endowment for International Peace fellow who studies global technology issues with a focus on China.
  • What’s also frequently left out of consideration is how much this collaboration happens across borders in ways that strengthen, rather than detract from, American AI leadership. As the two countries that produce the most AI researchers and research in the world, the U.S. and China are each other’s No. 1 collaborator in the technology’s development.
  • Assuming they’re even enforceable, export controls on frontier models could thus “be a pretty direct hit” to the large community of Chinese developers who build on U.S. models and in turn contribute their own research and advancements to U.S. AI development.
  • Within a month of the Commerce Department announcing its blockade on powerful chips last year, the California-based chipmaker Nvidia announced a less powerful chip that fell right below the export controls’ technical specifications and was able to continue selling to China (a simplified sketch of that threshold math follows these notes). ByteDance, Baidu, Tencent, and Alibaba have each since placed orders for about 100,000 of Nvidia’s China chips to be delivered this year, and more for future delivery—deals that are worth roughly $5 billion, according to the Financial Times.
  • In some cases, fixating on AI models would serve as a distraction from addressing the root challenge: The bottleneck for producing novel biochemical weapons, for example, is not finding a recipe, says Weinstein, but rather obtaining the materials and equipment to actually synthesize the armaments. Restricting access to AI models would do little to solve that problem.
  • there could be another benefit to the four companies pushing for frontier-model regulation. Evoking the specter of future threats shifts regulatory attention away from the present-day harms of their existing models, such as privacy violations, copyright infringements, and job automation.
  • “People overestimate how much this is in the interest of these companies.”
  • “AI safety as a domain even a few years ago was much more heterogeneous,” West told me. Now? “We’re not talking about the effects on workers and the labor impacts of these systems. We’re not talking about the environmental concerns.” It’s no wonder: When resources, expertise, and power have concentrated so heavily in a few companies, and policy makers are steeped in their own cocktail of fears, the landscape of policy ideas collapses under pressure, eroding the base of a healthy democracy.
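A rough sketch of the chip workaround mentioned in the Nvidia note above, under stated assumptions: the 2022 rule (ECCN 3A090) is commonly summarized as turning in part on chip-to-chip interconnect bandwidth of about 600 GB/s, the original A100 is reported at roughly 600 GB/s, and the China-market A800 variant at roughly 400 GB/s. None of these names or figures come from the article itself, and the real rule also weighs compute performance, so treat this as an illustration of the "just below the line" dynamic rather than the actual regulation.

```python
# Simplified illustration: how a chip variant can sit just under an
# export-control threshold. The numbers are approximate, publicly reported
# figures (assumptions, not taken from the article); the real rule also
# weighs compute performance and other criteria.

INTERCONNECT_THRESHOLD_GBPS = 600  # commonly cited bandwidth cutoff (assumed)

# Reported chip-to-chip interconnect bandwidth, in GB/s (assumed values).
chips = {
    "A100 (original)":      600,
    "A800 (China variant)": 400,  # trimmed to land under the cutoff
}

for name, bandwidth_gbps in chips.items():
    controlled = bandwidth_gbps >= INTERCONNECT_THRESHOLD_GBPS
    verdict = "export-controlled" if controlled else "below threshold, still exportable"
    print(f"{name}: {bandwidth_gbps} GB/s -> {verdict}")
```

The only point of the sketch is that shaving a single specification just under a bright-line number is enough to keep a product exportable, which is the dynamic the note above describes.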