
Home/ TOK Friends/ Group items matching "Political" in title, tags, annotations or url

Javier E

Why the very concept of 'general knowledge' is under attack | Times2 | The Times - 0 views

  • why has University Challenge lasted, virtually unchanged, for so long?
  • The answer may lie in a famous theory about our brains put forward by the psychologist Raymond Cattell in 1963
  • Cattell divided intelligence into two categories: fluid and crystallised. Fluid intelligence refers to basic reasoning and other mental activities that require minimal learning — just an alert and flexible brain.
  • By contrast, crystallised intelligence is based on experience and the accumulation of knowledge. Fluid intelligence peaks at the age of about 20 then gradually declines, whereas crystallised intelligence grows through your life until you hit your mid-sixties, when you start forgetting things.
  • that explains much about University Challenge’s appeal. Because the contestants are mostly aged around 20 and very clever, their fluid intelligence is off the scale
  • On the other hand, because they have had only 20 years to acquire crystallised intelligence, their store of general knowledge is likely to be lacking in some areas.
  • In each episode there will be questions that older viewers can answer, thanks to their greater store of crystallised intelligence, but the students cannot. Therefore we viewers don’t feel inferior when confronted by these smart young people. On the contrary: we feel, in some areas, slightly superior.
  • It’s a brilliantly balanced format
  • there is a real threat to the future of University Challenge and much else of value in our society, and it is this. The very concept of “general knowledge” — of a widely accepted core of information that educated, inquisitive people should have in their memory banks — is under attack from two different groups.
  • The first comprises the deconstructionists and decolonialists
  • They argue that all knowledge is contextual and that things taken for granted in the past — for instance, a canon of great authors that everyone should read at school — merely reflect an outdated, usually Eurocentric view of what’s intellectually important.
  • The other group is the technocrats who argue that the extent of human knowledge is now so vast that it’s impossible for any individual to know more than, perhaps, one billionth of it
  • So why not leave it entirely to computers to do the heavy lifting of knowledge storing and recall, thus freeing our minds for creativity and problem solving?
  • The problem with the agitators on both sides of today’s culture wars is that they are forcefully trying to shape what’s accepted as general knowledge according to a blatant political agenda.
  • And the problem with relying on, say, Wikipedia’s 6.5 million English-language articles to store general knowledge for all of us? It’s the tacit implication that “mere facts” are too tedious to be clogging up our brains. From there it’s a short step to saying that facts don’t matter at all, that everything should be decided by “feelings”. And from there it’s an even shorter step to fake news and pernicious conspiracy theories, the belittling of experts and hard evidence, the closing of minds, the thickening of prejudice and the trivialisation of the national conversation.
Javier E

A Commencement Address Too Honest to Deliver in Person - The Atlantic - 0 views

  • Use this hiatus to do something you would never have done if this emergency hadn’t hit. When the lockdown lifts, move to another state or country. Take some job that never would have made sense if you were worrying about building a career—bartender, handyman, AmeriCorps volunteer.
  • If you use the next two years as a random hiatus, you may not wind up richer, but you’ll wind up more interesting.
  • The biggest way most colleges fail is this: They don’t plant the intellectual and moral seeds students are going to need later, when they get hit by the vicissitudes of life.
  • If you didn’t study Jane Austen while you were here, you probably lack the capacity to think clearly about making a marriage decision. If you didn’t read George Eliot, then you missed a master class on how to judge people’s character. If you didn’t read Nietzsche, you are probably unprepared to handle the complexities of atheism—and if you didn’t read Augustine and Kierkegaard, you’re probably unprepared to handle the complexities of faith.
  • The list goes on. If you didn’t read de Tocqueville, you probably don’t understand your own country. If you didn’t study Gibbon, you probably lack the vocabulary to describe the rise and fall of cultures and nations.
  • The wisdom of the ages is your inheritance; it can make your life easier. These resources often fail to get shared because universities are too careerist, or because faculty members are more interested in their academic specialties or politics than in teaching undergraduates, or because of a host of other reasons
  • What are you putting into your mind? Our culture spends a lot less time worrying about this, and when it does, it goes about it all wrong.
  • my worry is that, especially now that you’re out of college, you won’t put enough really excellent stuff into your brain.
  • I worry that it’s possible to grow up now not even aware that those upper registers of human feeling and thought exist.
  • the “theory of maximum taste.” This theory is based on the idea that exposure to genius has the power to expand your consciousness. If you spend a lot of time with genius, your mind will end up bigger and broader than if you spend your time only with run-of-the-mill stuff.
  • The theory of maximum taste says that each person’s mind is defined by its upper limit—the best that it habitually consumes and is capable of consuming.
  • I’m worried about the future of your maximum taste. People in my and earlier generations, at least those lucky enough to get a college education, got some exposure to the classics, which lit a fire that gets rekindled every time we sit down to read something really excellent.
  • After college, most of us resolve to keep doing this kind of thing, but we’re busy and our brains are tired at the end of the day. Months and years go by. We get caught up in stuff, settle for consuming Twitter and, frankly, journalism. Our maximum taste shrinks.
  • the whole culture is eroding the skill the UCLA scholar Maryanne Wolf calls “deep literacy,” the ability to deeply engage in a dialectical way with a text or piece of philosophy, literature, or art.
  • Or as the neurologist Richard Cytowic put it to Adam Garfinkle, “To the extent that you cannot perceive the world in its fullness, to the same extent you will fall back into mindless, repetitive, self-reinforcing behavior, unable to escape.”
  • I can’t say that to you, because it sounds fussy and elitist and OK Boomer. And if you were in front of me, you’d roll your eyes.
Javier E

Elliot Ackerman Went From U.S. Marine to Bestselling Novelist - WSJ - 0 views

  • Years before he impressed critics with his first novel, “Green on Blue” (2015), written from the perspective of an Afghan boy, Ackerman was already, in his words, “telling stories and inhabiting the minds of others.” He explains that much of his work as a special-operations officer involved trying to grasp what his adversaries were thinking, to better anticipate how they might act
  • “Look, I really believe in stories, I believe in art, I believe that this is how we express our humanity,” he says. “You can’t understand a society without understanding the stories they tell about themselves, and how these stories are constantly changing.”
  • This, in essence, is the subject of “Halcyon,” in which a scientific breakthrough allows Robert Ableson, a World War II hero and renowned lawyer, to come back from the dead. Yet the 21st-century America he returns to feels like a different place, riven by debates over everything from Civil War monuments to workplace misconduct.
  • The novel probes how nothing in life is fixed, including the legacies of the dead and the stories we tell about our past
  • “The study of history shouldn’t be backward looking,” explains a historian in “Halcyon.” “To matter, it has to take us forward.”
  • Ackerman was in college on Sept. 11, 2001, but what he remembers more vividly is watching the premiere of the TV miniseries “Band of Brothers” the previous Sunday. “If you wanted to know the zeitgeist in the U.S. at the time, it was this very sentimental view of World War II,” he says. “There was this nostalgia for a time where we’re the good guys, they’re the bad guys, and we’re going to liberate oppressed people.”
  • Ackerman, who also covers wars and veteran affairs as a journalist, says that America’s backing of Ukraine is essential in the face of what he calls “an authoritarian axis rising up in the world, with China, Russia and Iran.” Were the country to offer similar help to Taiwan in the face of an invasion from China, he notes, having some air bases in nearby Afghanistan would help, but the U.S. gave those up in 2021.
  • With Islamic fundamentalists now in control of places where he lost friends, he says he is often asked if he regrets his service. “When you are a young man and your country goes to war, you’re presented with a choice: You either fight or you don’t,” he writes in his 2019 memoir “Places and Names.” “I don’t regret my choice, but maybe I regret being asked to choose.”
  • Serving in the military at a time when wars are no longer generation-defining events has proven alienating for Ackerman. “When you’ve got wars with an all-volunteer military funded through deficit spending, they can go on forever because there are no political costs.”
  • The catastrophic withdrawal from Afghanistan in 2021, which Ackerman covers in his recent memoir “The Fifth Act,” compounded this moral injury. “The fact that there has been so little government support for our Afghan allies has left it to vets to literally clean this up,” he says, noting that he still fields requests for help on WhatsApp. He adds that unless lawmakers act, the tens of thousands of Afghans currently living in the U.S. on humanitarian parole will be sent back to Taliban-held Afghanistan later this year: “It’s very painful to see how our allies are treated.”
  • Looking back on America’s misadventures in Iraq, Afghanistan and elsewhere, he notes that “the stories we tell about war are really important to the decisions we make around war. It’s one reason why storytelling fills me with a similar sense of purpose.”
  • “We don’t talk about the world and our place in it in a holistic way, or a strategic way,” Ackerman says. “We were telling a story about ending America’s longest war, when the one we should’ve been telling was about repositioning ourselves in a world that’s becoming much more dangerous,” he adds. “Our stories sometimes get us in trouble, and we’re still dealing with that trouble today.”
Javier E

How the Shoggoth Meme Has Come to Symbolize the State of A.I. - The New York Times - 0 views

  • the Shoggoth had become a popular reference among workers in artificial intelligence, as a vivid visual metaphor for how a large language model (the type of A.I. system that powers ChatGPT and other chatbots) actually works.
  • it was only partly a joke, he said, because it also hinted at the anxieties many researchers and engineers have about the tools they’re building.
  • Since then, the Shoggoth has gone viral, or as viral as it’s possible to go in the small world of hyper-online A.I. insiders. It’s a popular meme on A.I. Twitter (including a now-deleted tweet by Elon Musk), a recurring metaphor in essays and message board posts about A.I. risk, and a bit of useful shorthand in conversations with A.I. safety experts. One A.I. start-up, NovelAI, said it recently named a cluster of computers “Shoggy” in homage to the meme. Another A.I. company, Scale AI, designed a line of tote bags featuring the Shoggoth.
  • In a nutshell, the joke was that in order to prevent A.I. language models from behaving in scary and dangerous ways, A.I. companies have had to train them to act polite and harmless. One popular way to do this is called “reinforcement learning from human feedback,” or R.L.H.F., a process that involves asking humans to score chatbot responses, and feeding those scores back into the A.I. model.
  • Most A.I. researchers agree that models trained using R.L.H.F. are better behaved than models without it. But some argue that fine-tuning a language model this way doesn’t actually make the underlying model less weird and inscrutable. In their view, it’s just a flimsy, friendly mask that obscures the mysterious beast underneath.
  • Shoggoths are fictional creatures, introduced by the science fiction author H.P. Lovecraft in his 1936 novella “At the Mountains of Madness.” In Lovecraft’s telling, Shoggoths were massive, blob-like monsters made out of iridescent black goo, covered in tentacles and eyes.
  • @TetraspaceWest said, wasn’t necessarily implying that it was evil or sentient, just that its true nature might be unknowable.
  • And it reinforces the notion that what’s happening in A.I. today feels, to some of its participants, more like an act of summoning than a software development process. They are creating the blobby, alien Shoggoths, making them bigger and more powerful, and hoping that there are enough smiley faces to cover the scary parts.
  • “I was also thinking about how Lovecraft’s most powerful entities are dangerous — not because they don’t like humans, but because they’re indifferent and their priorities are totally alien to us and don’t involve humans, which is what I think will be true about possible future powerful A.I.”
  • when Bing’s chatbot became unhinged and tried to break up my marriage, an A.I. researcher I know congratulated me on “glimpsing the Shoggoth.” A fellow A.I. journalist joked that when it came to fine-tuning Bing, Microsoft had forgotten to put on its smiley-face mask.
  • @TetraspaceWest, the meme’s creator, told me in a Twitter message that the Shoggoth “represents something that thinks in a way that humans don’t understand and that’s totally different from the way that humans think.”
  • In any case, the Shoggoth is a potent metaphor that encapsulates one of the most bizarre facts about the A.I. world, which is that many of the people working on this technology are somewhat mystified by their own creations. They don’t fully understand the inner workings of A.I. language models, how they acquire new capabilities or why they behave unpredictably at times. They aren’t totally sure if A.I. is going to be net-good or net-bad for the world.
  • That some A.I. insiders refer to their creations as Lovecraftian horrors, even as a joke, is unusual by historical standards. (Put it this way: Fifteen years ago, Mark Zuckerberg wasn’t going around comparing Facebook to Cthulhu.)
  • If it’s an A.I. safety researcher talking about the Shoggoth, maybe that person is passionate about preventing A.I. systems from displaying their true, Shoggoth-like nature.
  • A great many people are dismissive of suggestions that any of these systems are “really” thinking, because they’re “just” doing something banal (like making statistical predictions about the next word in a sentence). What they fail to appreciate is that there is every reason to suspect that human cognition is “just” doing those exact same things. It matters not that birds flap their wings but airliners don’t. Both fly. And these machines think. And, just as airliners fly faster and higher and farther than birds while carrying far more weight, these machines are already outthinking the majority of humans at the majority of tasks. Further, that machines aren’t perfect thinkers is about as relevant as the fact that air travel isn’t instantaneous. Now consider: we’re well past the Wright flyer level of thinking machine, past the early biplanes, somewhere about the first commercial airline level. Not quite the DC-10, I think. Can you imagine what the AI equivalent of a 777 will be like? Fasten your seatbelts.
  • @thomas h. You make my point perfectly. You’re observing that the way a plane flies — by using a turbine to generate thrust from combusting kerosene, for example — is nothing like the way that a bird flies, which is by using the energy from eating plant seeds to contract the muscles in its wings to make them flap. You are absolutely correct in that observation, but it’s also almost utterly irrelevant. And it ignores that, to a first approximation, there’s no difference in the physics you would use to describe a hawk riding a thermal and an airliner gliding (essentially) unpowered in its final descent to the runway. Further, you do yourself a grave disservice in being dismissive of the abilities of thinking machines, in exactly the same way that early skeptics have been dismissive of every new technology in all of human history. Writing would make people dumb; automobiles lacked the intelligence of horses; no computer could possibly beat a chess grandmaster because it can’t comprehend strategy; and on and on and on. Humans aren’t nearly as special as we fool ourselves into believing. If you want to have any hope of acting responsibly in the age of intelligent machines, you’ll have to accept that, like it or not, and whether or not it fits with your preconceived notions of what thinking is and how it is or should be done … machines can and do think, many of them better than you in a great many ways. b&
  • @BLA. You are incorrect. Everything has nature. Its nature is manifested in making humans react. Sure, no humans, no nature, but here we are. The writer and various sources are not attributing nature to AI so much as admitting that they don’t know what this nature might be, and there are reasons to be scared of it. More concerning to me is the idea that this field is resorting to geek culture reference points to explain and comprehend itself. It’s not so much the algorithm has no soul, but that the souls of the humans making it possible are stupendously and tragically underdeveloped.
  • When even tech companies are saying AI is moving too fast, and the articles land on page 1 of the NYT (there's an old reference), I think the greedy will not think twice about exploiting this technology, with no ethical considerations, at all.
  • @nome sane? The problem is it isn't data as we understand it. We know what the datasets are -- they were used to train the AI's. But once trained, the AI is thinking for itself, with results that have surprised everybody.
  • The unique feature of a shoggoth is it can become whatever is needed for a particular job. There's no actual shape so it's not a bad metaphor, if an imperfect image. Shoggoths also turned upon and destroyed their creators, so the cautionary metaphor is in there, too. A shame more Asimov wasn't baked into AI. But then the conflict about how to handle AI in relation to people was key to those stories, too.
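The feedback loop the annotations describe — humans score chatbot responses, and the scores are fed back into the model — can be sketched in miniature. This is a deliberately toy illustration of the idea, not any lab's actual R.L.H.F. pipeline; the responses, scoring rule, and update step are all invented for the example.

```python
import random

def human_score(response):
    # Stand-in for a human rater: polite responses score higher.
    return 1.0 if "happy to help" in response else -1.0

# A toy "policy": a preference weight for each canned response.
responses = {
    "happy to help with that": 0.0,
    "figure it out yourself": 0.0,
}

def sample(weights):
    # Emit the response with the highest current preference (ties broken randomly).
    best = max(weights.values())
    return random.choice([r for r, w in weights.items() if w == best])

LEARNING_RATE = 0.5
for step in range(10):
    r = sample(responses)
    responses[r] += LEARNING_RATE * human_score(r)  # feed the human score back

# The polite response ends up dominating -- the "smiley-face mask" in the meme.
assert responses["happy to help with that"] > responses["figure it out yourself"]
```

The point of the sketch mirrors the skeptics' argument quoted above: the update only reshapes which outputs get surfaced; whatever generates the candidates underneath is untouched.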
Javier E

Elon Musk Is Not Playing Four-Dimensional Chess - 0 views

  • Musk is not wrong that Twitter is chock-full of noise and garbage, but the most pernicious stuff comes from real people and a media ecosystem that amplifies and rewards incendiary bullshit
  • This dynamic is far more of a problem for Twitter (but also the news media and the internet in general) than shadowy bot farms are. But it’s also a dilemma without much of a concrete solution
  • Were Musk actually curious or concerned with the health of the online public discourse, he might care about the ways that social media platforms like Twitter incentivize this behavior and create an information economy where our sense of proportion on a topic can be so easily warped. But Musk isn’t interested in this stuff, in part because he is a huge beneficiary of our broken information environment and can use it to his advantage to remain constantly in the spotlight.
  • Musk’s concern with bots isn’t only a bullshit tactic he’s using to snake out of a bad business deal and/or get a better price for Twitter; it’s also a great example of his shallow thinking. The man has at least some ability to oversee complex engineering systems that land rockets, but his narcissism affords him a two-dimensional understanding of the way information travels across social media.
  • He is drawn to the conspiratorial nature of bots and information manipulation, because it is a more exciting and easier-to-understand solution to more complex or uncomfortable problems. Instead of facing the reality that many people dislike him as a result of his personality, behavior, politics, or shitty management style, he blames bots. Rather than try to understand the gnarly mechanics and hard-to-solve problems of democratized speech, he sorts them into overly simplified boxes like censorship and spam and then casts himself as the crusading hero who can fix it all. But he can’t and won’t, because he doesn’t care enough to find the answers.
  • Musk isn’t playing chess or even checkers. He’s just the richest man in the world, bored, mad, and posting like your great-uncle.
Javier E

Why Trump Supporters Aren't Backing Down - The Atlantic - 0 views

  • Almost all of Trump’s supporters want to cast their gaze elsewhere—on some other issue, on some other hearing, on some other controversy. They’ll do anything to keep from having to confront the reality of what happened on January 6. What you’re very unlikely to see, except in the rarest of cases, is genuine self-reflection or soul-searching, regret or remorse, feelings of embarrassment and shame.
  • Trump supporters have spent much of the past half dozen years defending their man; their political and cultural identity has become fused with his. Some of them may have started out as lukewarm allies, but over time their support became less qualified and more enthusiastic. The unusual intensity of the Trump years increased their bond to him.
  • He was the captain of Team Red. In their minds, loyalty demanded they stick with him, acting as his shield one day, his sword the next.
  • But something else, something even more powerful, was going on. Many Trump supporters grew to hate his critics even more than they came to love Trump. For them, Trump’s detractors were not just wrong but wicked, obsessed with getting Trump, and hell-bent on destroying America
  • For Trump supporters to admit that they were wrong about him—and especially to admit that Trump’s critics had been right about him—would blow their circuits. If they ever do turn on Trump, they will admit it only to themselves and maybe a few close intimates
  • asking Trump supporters to focus on his moral turpitude is like asking them to stare into the sun. They can do it for a split second, and then they have to look away. The Trump years have been all about looking away.
Javier E

Why Rotterdam Wouldn't Allow a Bridge to Be Dismantled for Bezos' Yacht - The New York Times - 0 views

  • explaining the anger that Mr. Bezos and Oceanco, the maker of the three-masted, $500 million schooner, inspired after making what may have sounded like a fairly benign request. The company asked the local government to briefly dismantle the elevated middle span of the Hef, which is 230 feet tall at its highest point, allowing the vessel to sail down the King’s Harbor channel and out to sea.
  • The whole process would have taken a day or two and Oceanco would have covered the costs.
  • The bridge, a lattice of moss-green steel in the shape of a hulking “H,” is not actually used by anyone. It served as a railroad bridge for decades until it was replaced by a tunnel and decommissioned in the early 1990s. It’s been idle ever since.
  • In sum, the operation would have been fast, free and disrupted nothing. So why the fuss?
  • “What can you buy if you have unlimited cash? Can you bend every rule? Can you take apart monuments?”
  • “There’s a principle at stake,”
  • The first problem was the astounding wealth of Mr. Bezos.
  • “The Dutch like to say, ‘Acting normal is crazy enough,’
Javier E

Opinion | How Behavioral Economics Took Over America - The New York Times - 0 views

  • Some behavioral interventions do seem to lead to positive changes, such as automatically enrolling children in school free lunch programs or simplifying mortgage information for aspiring homeowners. (Whether one might call such interventions “nudges,” however, is debatable.)
  • it’s not clear we need to appeal to psychology studies to make some common-sense changes, especially since the scientific rigor of these studies is shaky at best.
  • Nudges are related to a larger area of research on “priming,” which tests how behavior changes in response to what we think about or even see without noticing
  • Behavioral economics is at the center of the so-called replication crisis, a euphemism for the uncomfortable fact that the results of a significant percentage of social science experiments can’t be reproduced in subsequent trials
  • this key result was not replicated in similar experiments, undermining confidence in a whole area of study. It’s obvious that we do associate old age and slower walking, and we probably do slow down sometimes when thinking about older people. It’s just not clear that that’s a law of the mind.
  • And these attempts to “correct” human behavior are based on tenuous science. The replication crisis doesn’t have a simple solution
  • Journals have instituted reforms like having scientists preregister their hypotheses to avoid the possibility of results being manipulated during the research. But that doesn’t change how many uncertain results are already out there, with a knock-on effect that ripples through huge segments of quantitative social science
  • The Johns Hopkins science historian Ruth Leys, author of a forthcoming book on priming research, points out that cognitive science is especially prone to building future studies off disputed results. Despite the replication crisis, these fields are a “train on wheels, the track is laid and almost nothing stops them,” Dr. Leys said.
  • These cases result from lax standards around data collection, which will hopefully be corrected. But they also result from strong financial incentives: the possibility of salaries, book deals and speaking and consulting fees that range into the millions. Researchers can get those prizes only if they can show “significant” findings.
  • It is no coincidence that behavioral economics, from Dr. Kahneman to today, tends to be pro-business. Science should be not just reproducible, but also free of obvious ideology.
  • Technology and modern data science have only further entrenched behavioral economics. Its findings have greatly influenced algorithm design.
  • The collection of personal data about our movements, purchases and preferences inform interventions in our behavior from the grocery store to who is arrested by the police.
  • Setting people up for safety and success and providing good default options isn’t bad in itself, but there are more sinister uses as well. After all, not everyone who wants to exploit your cognitive biases has your best interests at heart.
  • Despite all its flaws, behavioral economics continues to drive public policy, market research and the design of digital interfaces.
  • One might think that a kind of moratorium on applying such dubious science would be in order — except that enacting one would be practically impossible. These ideas are so embedded in our institutions and everyday life that a full-scale audit of the behavioral sciences would require bringing much of our society to a standstill.
  • There is no peer review for algorithms that determine entry to a stadium or access to credit. To perform even the most banal, everyday actions, you have to put implicit trust in unverified scientific results.
  • We can’t afford to defer questions about human nature, and the social and political policies that come from them, to commercialized “research” that is scientifically questionable and driven by ideology. Behavioral economics claims that humans aren’t rational.
  • That’s a philosophical claim, not a scientific one, and it should be fought out in a rigorous marketplace of ideas. Instead of unearthing real, valuable knowledge of human nature, behavioral economics gives us “one weird trick” to lose weight or quit smoking.
  • Humans may not be perfectly rational, but we can do better than the predictably irrational consequences that behavioral economics has left us with today.
Javier E

The Sad Trombone Debate: The RNC Throws in the Towel and Gets Ready to Roll Over for Trump. Again. - 0 views

  • Death to the Internet
  • Yesterday Ben Thompson published a remarkable essay in which he more or less makes the case that the internet is a socially deleterious invention, that it will necessarily get more toxic, and that the best we can hope for is that it gets so bad, so fast, that everyone is shocked into turning away from it.
  • Ben writes the best and most insightful newsletter about technology and he has been, in all the years I’ve read him, a techno-optimist.
  • this is like if Russell Moore came out and said that, on the whole, Christianity turns out to be a bad thing. It’s that big of a deal.
  • Thompson’s case centers around constraints and supply, particularly as they apply to content creation.
  • In the pre-internet days, creating and distributing content was relatively expensive, which placed content publishers—be they newspapers, or TV stations, or movie studios—high on the value chain.
  • The internet reduced distribution costs to zero and this shifted value away from publishers and over to aggregators: Suddenly it was more important to aggregate an audience—a la Google and Facebook—than to be a content creator.
  • Audiences were valuable; content was commoditized.
  • What has alarmed Thompson is that AI has now reduced the cost of creating content to zero.
  • what does the world look like when the cost of both creating and distributing content is zero?
  • Hellscape
  • We’re headed to a place where content is artificially created and distributed in such a way as to be tailored to a given user’s preferences. Which will be the equivalent of living in a hall of mirrors.
  • What does that mean for news? Nothing good.
  • It doesn’t really make sense to talk about “news media” because there are fundamental differences between publication models that are driven by scale.
  • So the challenges the New York Times face will be different than the challenges that NPR or your local paper face.
  • Two big takeaways:
  • (1) Ad-supported publications will not survive
  • Zero-cost for content creation combined with zero-cost distribution means an infinite supply of content. The more content you have, the more ad space exists—the lower ad prices go.
  • Actually, some ad-supported publications will survive. They just won’t be news. What will survive will be content mills that exist to serve ads specifically matched to targeted audiences.
  • (2) Size is determinative.
  • The New York Times has a moat by dint of its size. It will see the utility of its soft “news” sections decline in value, because AI is going to be better at creating cooking and style content than breaking hard news. But still, the NYT will be okay because it has pivoted hard into being a subscription-based service over the last decade.
  • At the other end of the spectrum, independent journalists should be okay: a lone reporter running a focused Substack needs only four digits' worth of subscribers to sustain them.
  • But everything in between? That’s a crapshoot.
  • Technology writers sometimes talk about the contrast between “builders” and “conservers” — roughly speaking, between those who are most animated by what we stand to gain from technology and those animated by what we stand to lose.
  • in our moment the builder and conserver types are proving quite mercurial. On issues ranging from Big Tech to medicine, human enhancement to technologies of governance, the politics of technology are in upheaval.
  • Dispositions are supposed to be basically fixed. So who would have thought that deep blue cities that yesterday were hotbeds of vaccine skepticism would today become pioneers of vaccine passports? Or that outlets that yesterday reported on science and tech developments in reverent tones would today make it their mission to unmask “tech bros”?
  • One way to understand this churn is that the builder and the conserver types each speak to real, contrasting features within human nature. Another way is that these types each pick out real, contrasting features of technology. Focusing strictly on one set of features or the other eventually becomes unstable, forcing the other back into view.
Javier E

'Oppenheimer,' 'The Maniac' and Our Terrifying Prometheus Moment - The New York Times - 0 views

  • Prometheus was the Titan who stole fire from the gods of Olympus and gave it to human beings, setting us on a path of glory and disaster and incurring the jealous wrath of Zeus. In the modern world, especially since the beginning of the Industrial Revolution, he has served as a symbol of progress and peril, an avatar of both the liberating power of knowledge and the dangers of technological overreach.
  • More than 200 years after the Shelleys, Prometheus is having another moment, one closer in spirit to Mary’s terrifying ambivalence than to Percy’s fulsome gratitude. As technological optimism curdles in the face of cyber-capitalist villainy, climate disaster and what even some of its proponents warn is the existential threat of A.I., that ancient fire looks less like an ember of divine ingenuity than the start of a conflagration. Prometheus is what we call our capacity for self-destruction.
  • Annie Dorsen’s theater piece “Prometheus Firebringer,” which was performed at Theater for a New Audience in September, updates the Greek myth for the age of artificial intelligence, using A.I. to weave a cautionary tale that my colleague Laura Collins-Hughes called “forcefully beneficial as an examination of our obeisance to technology.”
  • ...13 more annotations...
  • Something similar might be said about “The Maniac,” Benjamín Labatut’s new novel, whose designated Prometheus is the Hungarian-born polymath John von Neumann, a pioneer of A.I. as well as an originator of game theory.
  • both narratives are grounded in fact, using the lives and ideas of real people as fodder for allegory and attempting to write a new mythology of the modern world.
  • Oppenheimer wasn’t a principal author of that theory. Those scientists, among them Niels Bohr, Erwin Schrödinger and Werner Heisenberg, were characters in Labatut’s previous novel, “When We Cease to Understand the World.” That book provides harrowing illumination of a zone where scientific insight becomes indistinguishable from madness or, perhaps, divine inspiration. The basic truths of the new science seem to explode all common sense: A particle is also a wave; one thing can be in many places at once; “scientific method and its object could no longer be prised apart.”
  • More than most intellectual bastions, the institute is a house of theory. The Promethean mad scientists of the 19th century were creatures of the laboratory, tinkering away at their infernal machines and homemade monsters. Their 20th-century counterparts were more likely to be found at the chalkboard, scratching out our future in charts, equations and lines of code.
  • The consequences are real enough, of course. The bombs dropped on Hiroshima and Nagasaki killed at least 100,000 people. Their successor weapons, which Oppenheimer opposed, threatened to kill everybody else.
  • Von Neumann and Oppenheimer were close contemporaries, born a year apart to prosperous, assimilated Jewish families in Budapest and New York. Von Neumann, conversant in theoretical physics, mathematics and analytic philosophy, worked for Oppenheimer at Los Alamos during the Manhattan Project. He spent most of his career at the Institute for Advanced Study, where Oppenheimer served as director after the war.
  • the intellectual drama of “Oppenheimer” — as distinct from the dramas of his personal life and his political fate — is about how abstraction becomes reality. The atomic bomb may be, for the soldiers and politicians, a powerful strategic tool in war and diplomacy. For the scientists, it’s something else: a proof of concept, a concrete manifestation of quantum theory.
  • . Oppenheimer’s designation as Prometheus is precise. He snatched a spark of quantum insight from those divinities and handed it to Harry S. Truman and the U.S. Army Air Forces.
  • Labatut’s account of von Neumann is, if anything, more unsettling than “Oppenheimer.” We had decades to get used to the specter of nuclear annihilation, and since the end of the Cold War it has been overshadowed by other terrors. A.I., on the other hand, seems newly sprung from science fiction, and especially terrifying because we can’t quite grasp what it will become.
  • Von Neumann, who died in 1957, did not teach machines to play Go. But when asked “what it would take for a computer, or some other mechanical entity, to begin to think and behave like a human being,” he replied that “it would have to play, like a child.”
  • MANIAC. The name was an acronym for “Mathematical Analyzer, Numerical Integrator and Computer,” which doesn’t sound like much of a threat. But von Neumann saw no limit to its potential. “If you tell me precisely what it is a machine cannot do,” he declared, “then I can always make a machine which will do just that.” MANIAC didn’t just represent a powerful new kind of machine, but “a new type of life.”
  • If Oppenheimer took hold of the sacred fire of atomic power, von Neumann’s theft was bolder and perhaps more insidious: He stole a piece of the human essence. He’s not only a modern Prometheus; he’s a second Frankenstein, creator of an all but human, potentially more than human monster.
  • “Technological power as such is always an ambivalent achievement,” Labatut’s von Neumann writes toward the end of his life, “and science is neutral all through, providing only means of control applicable to any purpose, and indifferent to all. It is not the particularly perverse destructiveness of one specific invention that creates danger. The danger is intrinsic. For progress there is no cure.”
Javier E

Resilience, Another Thing We Can't Talk About - 0 views

  • I also think that we as a society are failing to inculcate resilience in our young people, and that culture war has left many progressive people in the curious position of arguing against the importance of resilience
  • Sadly, nothing is complicated for progressives today. I think the attitude that all questions are simple and nothing is complicated is the second most prominent element of contemporary progressive social culture, beneath only lol lol lol lmao lol lol.
  • Teaching people how to suffer, how to respond to suffering and survive suffering and grow from suffering, is one of the most essential tasks of any community. Because suffering is inevitable. And I do think that we have lost sight of this essential element of growing up in contemporary society
  • ...9 more annotations...
  • Haidt isn’t helping himself any. The term “culture of victimhood” reminds many people of the “snowflake” insult, the idea that anyone from a marginalized background who complains about injustice is really just self-involved and weak.
  • I find his predictions about how these dynamics will somehow undermine American capitalism to be unconvincing, running towards bizarre. If social media is making our kids depressed and anxious, that is the reason to be concerned, not some tangled logic about national greatness.
  • I think that suffering is the only truly universal endowment of the human species.
  • Because Haidt talked about a culture of victimhood, he was immediately coded as right-wing, which is to say on the wrong side of the culture war.
  • (The piece notes that the age at which children are allowed to play outside alone has moved from 7 or 8 to 10 or 12 in short order.)
  • As for the critics of someone like Haidt, the most coherent criticism they mount is that talk of toughness and resilience can be used opportunistically to dismiss demands for justice. “You just need to toughen up” is not, obviously, a constructive, good-faith response to a demand that the police stop killing unarmed Black people.
  • I don’t think that’s the version Haidt is articulating
  • Yes, we must do all we can to reduce injustice, and we need to be compassionate to everyone. But we also need to understand that no political movement, no matter how effective, can ever end suffering and thus obviate the need for resilience.
  • I’m really not a fan of therapy culture, where the imperatives and vocabulary and purpose of therapy are now assumed to be necessary in every domain of human affairs. But that’s not because I think therapy is bad; I think therapy, as therapy, is very good. It’s because I think everything can’t be therapy, and the effort to make everything therapy will have the perverse effect of making nothing therapy.
Javier E

Two recent surveys show AI will do more harm than good - The Washington Post - 0 views

  • A Monmouth University poll released last week found that only 9 percent of Americans believed that computers with artificial intelligence would do more good than harm to society.
  • When the same question was asked in a 1987 poll, a higher share of respondents – about one in five – said AI would do more good than harm.
  • In other words, people have less unqualified confidence in AI now than they did 35 years ago, when the technology was more science fiction than reality.
  • ...8 more annotations...
  • The Pew Research Center survey asked people different questions but found similar doubts about AI. Just 15 percent of respondents said they were more excited than concerned about the increasing use of AI in daily life.
  • “It’s fantastic that there is public skepticism about AI. There absolutely should be,” said Meredith Broussard, an artificial intelligence researcher and professor at New York University.
  • Broussard said there can be no way to design artificial intelligence software to make inherently human decisions, like grading students’ tests or determining the course of medical treatment.
  • Most Americans essentially agree with Broussard that AI has a place in our lives, but not for everything.
  • Most people said it was a bad idea to use AI for military drones that try to distinguish between enemies and civilians or trucks making local deliveries without human drivers. Most respondents said it was a good idea for machines to perform risky jobs such as coal mining.
  • Roman Yampolskiy, an AI specialist at the University of Louisville engineering school, told me he’s concerned about how quickly technologists are building computers that are designed to “think” like the human brain and apply knowledge not just in one narrow area, like recommending Netflix movies, but for complex tasks that have tended to require human intelligence.
  • “We have an arms race between multiple untested technologies. That is my concern,” Yampolskiy said. (If you want to feel terrified, I recommend Yampolskiy’s research paper on the inability to control advanced AI.)
  • The term “AI” is a catch-all for everything from relatively uncontroversial technology, such as autocomplete in your web search queries, to the contentious software that promises to predict crime before it happens. Our fears about the latter might be overwhelming our beliefs about the benefits from more mundane AI.