
Javier E

An Ancient Guide to the Good Life | The New Yorker - 0 views

  • What’s striking about AITA is the language in which it states its central question: you’re asked not whether I did the right thing but, rather, what sort of person I’m being.
  • We would have a different morality, and an impoverished one, if we judged actions only with those terms of pure evaluation, “right” or “wrong,” and judged people only “good” or “bad.”
  • If Aristotle’s ethics is to be sold as a work of what we call self-help, we have to ask: How helpful is it?
  • ...40 more annotations...
  • Our vocabulary of commendation and condemnation is perpetually changing, but it has always relied on “thick” ethical terms, which combine description and evaluation.
  • “How to flourish” was one such topic, “flourishing” being a workable rendering of Aristotle’s term eudaimonia. We might also translate the term in the usual way, as “happiness,” as long as we suspend some of that word’s modern associations; eudaimonia wasn’t something that waxed and waned with our moods
  • For Aristotle, ethics was centrally concerned with how to live a good life: a flourishing existence was also a virtuous one.
  • “famously terse, often crabbed in their style.” Crabbed, fragmented, gappy: it can be a headache trying to match his pronouns to the nouns they refer to. Some of his arguments are missing crucial premises; others fail to spell out their conclusions.
  • Aristotle is obscure in other ways, too. His highbrow potshots at unnamed contemporaries, his pop-cultural references, must have tickled his aristocratic Athenian audience. But the people and the plays he referred to are now lost or forgotten. Some readers have found his writings “affectless,” stripped of any trace of a human voice, or of a beating human heart.
  • Flourishing is the ultimate goal of human life; a flourishing life is one that is lived in accord with the various “virtues” of the character and intellect (courage, moderation, wisdom, and so forth); a flourishing life also calls for friendships with good people and a certain measure of good fortune in the way of a decent income, health, and looks.
  • much of what it says can sound rather obvious
  • Virtue is not just about acting rightly but about feeling rightly. What’s best, Aristotle says, is “to have such feelings at the right time, at the right objects and people, with the right goal, and in the right manner.” Good luck figuring out what the “right time” or object or manner is.
  • Virtue is a state “consisting in a mean,” Aristotle maintains, and this mean “is defined by reference to reason, that is to say, to the reason by reference to which the prudent person would define it.”
  • The phrase “prudent person” here renders the Greek phronimos, a person possessed of that special quality of mind which Aristotle called “phronesis.” But is Aristotle then saying that virtue consists in being disposed to act as the virtuous person does? That sounds true, but trivially so.
  • it helps to reckon with the role that habits of mind play in Aristotle’s account. Meyer’s translation of “phronesis” is “good judgment,” and the phrase nicely captures the combination of intelligence and experience which goes into acquiring it, along with the difficulty of reducing it to a set of explicit principles that anyone could apply mechanically, like an algorithm.
  • “good judgment” is an improvement on the old-fashioned and now misleading “prudence”; it’s also less clunky than another standby, “practical wisdom.”
  • The enormous role of judgment in Aristotle’s picture of how to live can sound, to modern readers thirsty for ethical guidance, like a cop-out. Especially when they might instead pick up a treatise by John Stuart Mill and find an elegantly simple principle for distinguishing right from wrong, or one by Kant, in which they will find at least three. They might, for that matter, look to Jordan Peterson, who conjures up as many as twelve.
  • the question of how to flourish could receive a gloomy answer from Aristotle: it may be too late to start trying. Why is that? Flourishing involves, among other things, performing actions that manifest virtues, which are qualities of character that enable us to perform what Aristotle calls our “characteristic activity.”
  • But how do we come to acquire these qualities of character, or what Meyer translates as “dispositions”? Aristotle answers, “From our regular practice.”
  • In a passage missing from Meyer’s ruthless abridgment, Aristotle warns, “We need to have been brought up in noble habits if we are to be adequate students of noble and just things. . . . For we begin from the that; if this is apparent enough to us, we can begin without also knowing why. Someone who is well brought up has the beginnings, or can easily acquire them.”
  • Aristotle suggests, more generally, that you should identify the vices you’re susceptible to and then “pull yourself away in the opposite direction, since by pulling hard against one fault, you get to the mean (as when straightening out warped planks).”
  • Sold as a self-help manual in a culture accustomed to gurus promulgating “rules for living,” Aristotle’s ethics may come as a disappointment. But our disappointment may tell us more about ourselves than it does about Aristotle.
  • Michael Oakeshott wrote that “nobody supposes that the knowledge that belongs to the good cook is confined to what is or may be written down in the cookery book.” Proficiency in cooking is, of course, a matter of technique
  • My tutor’s fundamental pedagogical principle was that to teach a text meant being, at least for the duration of the tutorial, its most passionate champion. Every smug undergraduate exposé of a fallacy would be immediately countered with a robust defense of Aristotle’s reasoning.
  • “How to read Aristotle? Slowly.”
  • I was never slow enough. There was always another nuance, another textual knot to unravel
  • Sometimes we acquire our skills by repeatedly applying a rule—following a recipe—but when we succeed what we become are not good followers of recipes but good cooks. Through practice, as Aristotle would have said, we acquire judgment.
  • What we were doing with this historical text wasn’t history but philosophy. We were reading it not for what it might reveal about an exotic culture but for the timelessly important truths it might contain—an attitude at odds with the relativism endemic in the rest of the humanities.
  • There is no shortcut to understanding Aristotle, no recipe. You get good at reading him by reading him, with others, slowly and often. Regular practice: for Aristotle, it’s how you get good generally.
  • “My parents taught me the difference between right and wrong,” he said, “and I can’t think what more there is to say about it.” The appropriate response, and the Aristotelian one, would be to agree with the spirit of the remark. There is such a thing as the difference between right and wrong. But reliably telling them apart takes experience, the company of wise friends, and the good luck of having been well brought up.
  • we are all Aristotelians, most of the time, even when forces in our culture briefly persuade us that we are something else. Ethics remains what it was to the Greeks: a matter of being a person of a certain sort of sensibility, not of acting on “principles,” which one reserves for unusual situations of the kind that life sporadically throws up
  • That remains a truth about ethics even when we’ve adopted different terms for describing what type of person not to be: we don’t speak much these days of being “small-souled” or “intemperate,” but we do say a great deal about “douchebags,” “creeps,” and, yes, “assholes.”
  • In one sense, it tells us nothing that the right thing to do is to act and feel as the person of good judgment does. In another sense, it tells us virtually everything that can be said at this level of generality.
  • If self-help means denying the role that the perceptions of others play in making us who we are, if it means a set of rules for living that remove the need for judgment, then we are better off without it.
  • Aristotle had little hope that a philosopher’s treatise could teach someone without much experience of life how to make the crucial ethical distinctions. We learn to spot an “asshole” from living; how else?
  • when our own perceptions falter, we continue to do today exactly what Aristotle thought we should do. He asserts, in another significant remark that doesn’t make Meyer’s cut, that we should attend to the words of the old and experienced at least as much as we do to philosophical proofs: “these people see correctly because experience has given them their eye.”
  • Is it any surprise that the Internet is full of those who need help seeing rightly? Finding no friendly neighborhood phronimos to provide authoritative advice, you defer instead to the wisdom of an online community.
  • “The self-made man,” Oakeshott wrote, “is never literally self-made, but depends upon a certain kind of society and upon a large unrecognized inheritance.”
  • It points us in the right direction: toward the picture of a person with a certain character, certain habits of thinking and feeling, a certain level of self-knowledge and knowledge of other people.
  • We have long lived in a world desperate for formulas, simple answers to the simple question “What should I do?”
  • the algorithms, the tenets, the certificates are all attempts to solve the problem—which is everybody’s problem—of how not to be an asshole. Life would be a lot easier if there were rules, algorithms, and life hacks solving that problem once and for all. There aren’t.
  • At the heart of the Nicomachean Ethics is a claim that remains both edifying and chastening: phronesis doesn’t come that easy. Aristotle devised a theory that was vague in just the right places, one that left, intentionally, space to be filled in by life. 
  • Twenty-four centuries later, we’re still guided by the approach toward ethical life that Aristotle exemplified, one in which the basic question is not what we do but who we are
  • The Internet has no shortage of moralists and moralizers, but one ethical epicenter is surely the extraordinary, addictive subreddit called “Am I the Asshole?,” popularly abbreviated AITA
Javier E

How the AI apocalypse gripped students at elite schools like Stanford - The Washington ... - 0 views

  • Edwards thought young people would be worried about immediate threats, like AI-powered surveillance, misinformation or autonomous weapons that target and kill without human intervention — problems he calls “ultraserious.” But he soon discovered that some students were more focused on a purely hypothetical risk: That AI could become as smart as humans and destroy mankind.
  • In these scenarios, AI isn’t necessarily sentient. Instead, it becomes fixated on a goal — even a mundane one, like making paper clips — and triggers human extinction to optimize its task.
  • To prevent this theoretical but cataclysmic outcome, mission-driven labs like DeepMind, OpenAI and Anthropic are racing to build a good kind of AI programmed not to lie, deceive or kill us.
  • ...28 more annotations...
  • Meanwhile, donors such as Tesla CEO Elon Musk, disgraced FTX founder Sam Bankman-Fried, Skype founder Jaan Tallinn and ethereum co-founder Vitalik Buterin — as well as institutions like Open Philanthropy, a charitable organization started by billionaire Facebook co-founder Dustin Moskovitz — have worked to push doomsayers from the tech industry’s margins into the mainstream.
  • More recently, wealthy tech philanthropists have begun recruiting an army of elite college students to prioritize the fight against rogue AI over other threats
  • Other skeptics, like venture capitalist Marc Andreessen, are AI boosters who say that hyping such fears will impede the technology’s progress.
  • Critics call the AI safety movement unscientific. They say its claims about existential risk can sound closer to a religion than research
  • And while the sci-fi narrative resonates with public fears about runaway AI, critics say it obsesses over one kind of catastrophe to the exclusion of many others.
  • Open Philanthropy spokesperson Mike Levine said harms like algorithmic racism deserve a robust response. But he said those problems stem from the same root issue: AI systems not behaving as their programmers intended. The theoretical risks “were not garnering sufficient attention from others — in part because these issues were perceived as speculative,” Levine said in a statement. He compared the nonprofit’s AI focus to its work on pandemics, which also was regarded as theoretical until the coronavirus emerged.
  • Among the reputational hazards of the AI safety movement is its association with an array of controversial figures and ideas, like EA, which is also known for recruiting ambitious young people on elite college campuses.
  • The foundation began prioritizing existential risks around AI in 2016,
  • there was little status or money to be gained by focusing on risks. So the nonprofit set out to build a pipeline of young people who would filter into top companies and agitate for change from the inside
  • Colleges have been key to this growth strategy, serving as both a pathway to prestige and a recruiting ground for idealistic talent
  • The clubs train students in machine learning and help them find jobs in AI start-ups or one of the many nonprofit groups dedicated to AI safety.
  • Many of these newly minted student leaders view rogue AI as an urgent and neglected threat, potentially rivaling climate change in its ability to end human life. Many see advanced AI as the Manhattan Project of their generation
  • Despite the school’s ties to Silicon Valley, Mukobi said it lags behind nearby UC Berkeley, where younger faculty members research AI alignment, the term for embedding human ethics into AI systems.
  • Mukobi joined Stanford’s club for effective altruism, known as EA, a philosophical movement that advocates doing maximum good by calculating the expected value of charitable acts, like protecting the future from runaway AI. By 2022, AI capabilities were advancing all around him — wild developments that made those warnings seem prescient.
  • At Stanford, Open Philanthropy awarded Luby and Edwards more than $1.5 million in grants to launch the Stanford Existential Risk Initiative, which supports student research in the growing field known as “AI safety” or “AI alignment.”
  • from the start EA was intertwined with tech subcultures interested in futurism and rationalist thought. Over time, global poverty slid down the cause list, while rogue AI climbed toward the top.
  • In the past year, EA has been beset by scandal, including the fall of Bankman-Fried, one of its largest donors
  • Another key figure, Oxford philosopher Nick Bostrom, whose 2014 bestseller “Superintelligence” is essential reading in EA circles, met public uproar when a decades-old diatribe about IQ surfaced in January.
  • Programming future AI systems to share human values could mean “an amazing world free from diseases, poverty, and suffering,” while failure could unleash “human extinction or our permanent disempowerment,” Mukobi wrote, offering free boba tea to anyone who attended the 30-minute intro.
  • Open Philanthropy’s new university fellowship offers a hefty direct deposit: undergraduate leaders receive as much as $80,000 a year, plus $14,500 for health insurance, and up to $100,000 a year to cover group expenses.
  • Student leaders have access to a glut of resources from donor-sponsored organizations, including an “AI Safety Fundamentals” curriculum developed by an OpenAI employee.
  • Interest in the topic is also growing among Stanford faculty members, Edwards said. He noted that a new postdoctoral fellow will lead a class on alignment next semester in Stanford’s storied computer science department.
  • Edwards discovered that shared online forums function like a form of peer review, with authors changing their original text in response to the comments
  • Mukobi feels energized about the growing consensus that these risks are worth exploring. He heard students talking about AI safety in the halls of Gates, the computer science building, in May after Geoffrey Hinton, another “godfather” of AI, quit Google to warn about AI. By the end of the year, Mukobi thinks the subject could be a dinner-table topic, just like climate change or the war in Ukraine.
  • Luby, Edwards’s teaching partner for the class on human extinction, also seems to find these arguments persuasive. He had already rearranged the order of his AI lesson plans to help students see the imminent risks from AI. No one needs to “drink the EA Kool-Aid” to have genuine concerns, he said.
  • Edwards, on the other hand, still sees things like climate change as a bigger threat than rogue AI. But ChatGPT and the rapid release of AI models have convinced him that there should be room to think about AI safety.
  • Interested students join reading groups where they get free copies of books like “The Precipice,” and may spend hours reading the latest alignment papers, posting career advice on the Effective Altruism forum, or adjusting their P(doom), a subjective estimate of the probability that advanced AI will end badly. The grants, travel, leadership roles for inexperienced graduates and sponsored co-working spaces build a close-knit community.
  • The course will not be taught by students or outside experts. Instead, he said, it “will be a regular Stanford class.”
Javier E

Republican Group Running Anti-Trump Ads Finds Little Is Working - The New York Times - 0 views

  • The political action committee, called Win It Back, has close ties to the influential fiscally conservative group Club for Growth. It has already spent more than $4 million trying to lower Mr. Trump’s support among Republican voters in Iowa and nearly $2 million more trying to damage him in South Carolina
  • But in the memo — dated Thursday and obtained by The New York Times — the head of Win It Back PAC, David McIntosh, acknowledges to donors that after extensive testing of more than 40 anti-Trump television ads, “all attempts to undermine his conservative credentials on specific issues were ineffective.”
  • “Even when you show video to Republican primary voters — with complete context — of President Trump saying something otherwise objectionable to primary voters, they find a way to rationalize and dismiss it,” Mr. McIntosh states in the “key learnings” section of the memo.
  • ...5 more annotations...
  • “Every traditional postproduction ad attacking President Trump either backfired or produced no impact on his ballot support and favorability,” Mr. McIntosh adds. “This includes ads that primarily feature video of him saying liberal or stupid comments from his own mouth.”
  • Examples of “failed” ads cited in the memo included attacks on Mr. Trump’s “handling of the pandemic, promotion of vaccines, praise of Dr. Fauci, insane government spending, failure to build the wall, recent attacks on pro-life legislation, refusal to fight woke issues, openness to gun control, and many others.”
  • “Broadly acceptable messages against President Trump with Republican primary voters that do not produce a meaningful backlash include sharing concerns about his ability to beat President Biden, expressions of Trump fatigue due to the distractions he creates and the polarization of the country, as well as his pattern of attacking conservative leaders for self-interested reasons,”
  • “It is essential to disarm the viewer at the opening of the ad by establishing that the person being interviewed on camera is a Republican who previously supported President Trump,” he adds, “otherwise, the viewer will automatically put their guard up, assuming the messenger is just another Trump-hater whose opinion should be summarily dismissed.”
  • Win It Back did not bother running ads focused on Mr. Trump as an instigator of political violence or as a threat to democracy. The group tested in a focus group and online panel an ad called “Risk,” narrated by former Representative Liz Cheney, that focused on Mr. Trump’s actions on Jan. 6, 2021. But the group found that the Cheney ad helped Mr. Trump with the Republican voters, according to Mr. McIntosh.
Javier E

The Age of Social Media Is Ending - The Atlantic - 0 views

  • Slowly and without fanfare, around the end of the aughts, social media took its place. The change was almost invisible, but it had enormous consequences. Instead of facilitating the modest use of existing connections—largely for offline life (to organize a birthday party, say)—social software turned those connections into a latent broadcast channel. All at once, billions of people saw themselves as celebrities, pundits, and tastemakers.
  • A global broadcast network where anyone can say anything to anyone else as often as possible, and where such people have come to think they deserve such a capacity, or even that withholding it amounts to censorship or suppression—that’s just a terrible idea from the outset. And it’s a terrible idea that is entirely and completely bound up with the concept of social media itself: systems erected and used exclusively to deliver an endless stream of content.
  • “social media,” a name so familiar that it has ceased to bear meaning. But two decades ago, that term didn’t exist
  • ...35 more annotations...
  • A social network is an idle, inactive system—a Rolodex of contacts, a notebook of sales targets, a yearbook of possible soul mates. But social media is active—hyperactive, really—spewing material across those networks instead of leaving them alone until needed.
  • As the original name suggested, social networking involved connecting, not publishing. By connecting your personal network of trusted contacts (or “strong ties,” as sociologists call them) to others’ such networks (via “weak ties”), you could surface a larger network of trusted contacts
  • The whole idea of social networks was networking: building or deepening relationships, mostly with people you knew. How and why that deepening happened was largely left to the users to decide.
  • That changed when social networking became social media around 2009, between the introduction of the smartphone and the launch of Instagram. Instead of connection—forging latent ties to people and organizations we would mostly ignore—social media offered platforms through which people could publish content as widely as possible, well beyond their networks of immediate contacts.
  • Social media turned you, me, and everyone into broadcasters (if aspirational ones). The results have been disastrous but also highly pleasurable, not to mention massively profitable—a catastrophic combination.
  • soon enough, all social networks became social media first and foremost. When groups, pages, and the News Feed launched, Facebook began encouraging users to share content published by others in order to increase engagement on the service, rather than to provide updates to friends. LinkedIn launched a program to publish content across the platform, too. Twitter, already principally a publishing platform, added a dedicated “retweet” feature, making it far easier to spread content virally across user networks.
  • The authors propose social media as a system in which users participate in “information exchange.” The network, which had previously been used to establish and maintain relationships, becomes reinterpreted as a channel through which to broadcast.
  • The toxicity of social media makes it easy to forget how truly magical this innovation felt when it was new. From 2004 to 2009, you could join Facebook and everyone you’d ever known—including people you’d definitely lost track of—was right there, ready to connect or reconnect. The posts and photos I saw characterized my friends’ changing lives, not the conspiracy theories that their unhinged friends had shared with them
  • Twitter, which launched in 2006, was probably the first true social-media site, even if nobody called it that at the time. Instead of focusing on connecting people, the site amounted to a giant, asynchronous chat room for the world. Twitter was for talking to everyone—which is perhaps one of the reasons journalists have flocked to it
  • on Twitter, anything anybody posted could be seen instantly by anyone else. And furthermore, unlike posts on blogs or images on Flickr or videos on YouTube, tweets were short and low-effort, making it easy to post many of them a week or even a day.
  • a “web 2.0” revolution in “user-generated content,” offering easy-to-use, easily adopted tools on websites and then mobile apps. They were built for creating and sharing “content,”
  • When we look back at this moment, social media had already arrived in spirit if not by name. RSS readers offered a feed of blog posts to catch up on, complete with unread counts. MySpace fused music and chatter; YouTube did it with video (“Broadcast Yourself”)
  • This is also why journalists became so dependent on Twitter: It’s a constant stream of sources, events, and reactions—a reporting automat, not to mention an outbound vector for media tastemakers to make tastes.
  • Other services arrived or evolved in this vein, among them Reddit, Snapchat, and WhatsApp, all far more popular than Twitter. Social networks, once latent routes for possible contact, became superhighways of constant content
  • Although you can connect the app to your contacts and follow specific users, on TikTok, you are more likely to simply plug into a continuous flow of video content that has oozed to the surface via algorithm.
  • In the social-networking era, the connections were essential, driving both content creation and consumption. But the social-media era seeks the thinnest, most soluble connections possible, just enough to allow the content to flow.
  • The ensuing disaster was multipart.
  • “influencer” became an aspirational role, especially for young people for whom Instagram fame seemed more achievable than traditional celebrity—or perhaps employment of any kind.
  • social-media operators discovered that the more emotionally charged the content, the better it spread across its users’ networks. Polarizing, offensive, or just plain fraudulent information was optimized for distribution. By the time the platforms realized and the public revolted, it was too late to turn off these feedback loops.
  • When network connections become activated for any reason or no reason, then every connection seems worthy of traversing.
  • Rounding up friends or business contacts into a pen in your online profile for possible future use was never a healthy way to understand social relationships.
  • when social networking evolved into social media, user expectations escalated. Driven by venture capitalists’ expectations and then Wall Street’s demands, the tech companies—Google and Facebook and all the rest—became addicted to massive scale
  • Social media showed that everyone has the potential to reach a massive audience at low cost and high gain—and that potential gave many people the impression that they deserve such an audience.
  • On social media, everyone believes that anyone to whom they have access owes them an audience: a writer who posted a take, a celebrity who announced a project, a pretty girl just trying to live her life, that anon who said something afflictive
  • Facebook and all the rest enjoyed a massive rise in engagement and the associated data-driven advertising profits that the attention-driven content economy created. The same phenomenon also created the influencer economy, in which individual social-media users became valuable as channels for distributing marketing messages or product sponsorships by means of their posts’ real or imagined reach
  • people just aren’t meant to talk to one another this much. They shouldn’t have that much to say, they shouldn’t expect to receive such a large audience for that expression, and they shouldn’t suppose a right to comment or rejoinder for every thought or notion either.
  • From being asked to review every product you buy to believing that every tweet or Instagram image warrants likes or comments or follows, social media produced a positively unhinged, sociopathic rendition of human sociality.
  • That’s no surprise, I guess, given that the model was forged in the fires of Big Tech companies such as Facebook, where sociopathy is a design philosophy.
  • If change is possible, carrying it out will be difficult, because we have adapted our lives to conform to social media’s pleasures and torments. It’s seemingly as hard to give up on social media as it was to give up smoking en masse
  • Quitting that habit took decades of regulatory intervention, public-relations campaigning, social shaming, and aesthetic shifts. At a cultural level, we didn’t stop smoking just because the habit was unpleasant or uncool or even because it might kill us. We did so slowly and over time, by forcing social life to suffocate the practice. That process must now begin in earnest for social media.
  • Something may yet survive the fire that would burn it down: social networks, the services’ overlooked, molten core. It was never a terrible idea, at least, to use computers to connect to one another on occasion, for justified reasons, and in moderation
  • The problem came from doing so all the time, as a lifestyle, an aspiration, an obsession. The offer was always too good to be true, but it’s taken us two decades to realize the Faustian nature of the bargain.
  • when I first wrote about downscale, the ambition seemed necessary but impossible. It still feels unlikely—but perhaps newly plausible.
  • To win the soul of social life, we must learn to muzzle it again, across the globe, among billions of people. To speak less, to fewer people and less often–and for them to do the same to you, and everyone else as well
  • We cannot make social media good, because it is fundamentally bad, deep in its very structure. All we can do is hope that it withers away, and play our small part in helping abandon it.
Javier E

How the Shoggoth Meme Has Come to Symbolize the State of A.I. - The New York Times - 0 views

  • the Shoggoth had become a popular reference among workers in artificial intelligence, as a vivid visual metaphor for how a large language model (the type of A.I. system that powers ChatGPT and other chatbots) actually works.
  • it was only partly a joke, he said, because it also hinted at the anxieties many researchers and engineers have about the tools they’re building.
  • Since then, the Shoggoth has gone viral, or as viral as it’s possible to go in the small world of hyper-online A.I. insiders. It’s a popular meme on A.I. Twitter (including a now-deleted tweet by Elon Musk), a recurring metaphor in essays and message board posts about A.I. risk, and a bit of useful shorthand in conversations with A.I. safety experts. One A.I. start-up, NovelAI, said it recently named a cluster of computers “Shoggy” in homage to the meme. Another A.I. company, Scale AI, designed a line of tote bags featuring the Shoggoth.
  • ...17 more annotations...
  • Shoggoths are fictional creatures, introduced by the science fiction author H.P. Lovecraft in his 1936 novella “At the Mountains of Madness.” In Lovecraft’s telling, Shoggoths were massive, blob-like monsters made out of iridescent black goo, covered in tentacles and eyes.
  • In a nutshell, the joke was that in order to prevent A.I. language models from behaving in scary and dangerous ways, A.I. companies have had to train them to act polite and harmless. One popular way to do this is called “reinforcement learning from human feedback,” or R.L.H.F., a process that involves asking humans to score chatbot responses, and feeding those scores back into the A.I. model. (A toy sketch of this scoring-and-feedback loop appears at the end of this list of annotations.)
  • Most A.I. researchers agree that models trained using R.L.H.F. are better behaved than models without it. But some argue that fine-tuning a language model this way doesn’t actually make the underlying model less weird and inscrutable. In their view, it’s just a flimsy, friendly mask that obscures the mysterious beast underneath.
  • @TetraspaceWest, the meme’s creator, told me in a Twitter message that the Shoggoth “represents something that thinks in a way that humans don’t understand and that’s totally different from the way that humans think.”
  • @TetraspaceWest said the meme wasn’t necessarily implying that it was evil or sentient, just that its true nature might be unknowable.
  • “I was also thinking about how Lovecraft’s most powerful entities are dangerous — not because they don’t like humans, but because they’re indifferent and their priorities are totally alien to us and don’t involve humans, which is what I think will be true about possible future powerful A.I.”
  • when Bing’s chatbot became unhinged and tried to break up my marriage, an A.I. researcher I know congratulated me on “glimpsing the Shoggoth.” A fellow A.I. journalist joked that when it came to fine-tuning Bing, Microsoft had forgotten to put on its smiley-face mask.
  • If it’s an A.I. safety researcher talking about the Shoggoth, maybe that person is passionate about preventing A.I. systems from displaying their true, Shoggoth-like nature.
  • In any case, the Shoggoth is a potent metaphor that encapsulates one of the most bizarre facts about the A.I. world, which is that many of the people working on this technology are somewhat mystified by their own creations. They don’t fully understand the inner workings of A.I. language models, how they acquire new capabilities or why they behave unpredictably at times. They aren’t totally sure if A.I. is going to be net-good or net-bad for the world.
  • That some A.I. insiders refer to their creations as Lovecraftian horrors, even as a joke, is unusual by historical standards. (Put it this way: Fifteen years ago, Mark Zuckerberg wasn’t going around comparing Facebook to Cthulhu.)
  • And it reinforces the notion that what’s happening in A.I. today feels, to some of its participants, more like an act of summoning than a software development process. They are creating the blobby, alien Shoggoths, making them bigger and more powerful, and hoping that there are enough smiley faces to cover the scary parts.
  • A great many people are dismissive of suggestions that any of these systems are “really” thinking, because they’re “just” doing something banal (like making statistical predictions about the next word in a sentence). What they fail to appreciate is that there is every reason to suspect that human cognition is “just” doing those exact same things. It matters not that birds flap their wings but airliners don’t. Both fly. And these machines think. And, just as airliners fly faster and higher and farther than birds while carrying far more weight, these machines are already outthinking the majority of humans at the majority of tasks. Further, that machines aren’t perfect thinkers is about as relevant as the fact that air travel isn’t instantaneous. Now consider: we’re well past the Wright flyer level of thinking machine, past the early biplanes, somewhere about the first commercial airline level. Not quite the DC-10, I think. Can you imagine what the AI equivalent of a 777 will be like? Fasten your seatbelts.
  • @BLA. You are incorrect. Everything has nature. Its nature is manifested in making humans react. Sure, no humans, no nature, but here we are. The writer and various sources are not attributing nature to AI so much as admitting that they don’t know what this nature might be, and there are reasons to be scared of it. More concerning to me is the idea that this field is resorting to geek culture reference points to explain and comprehend itself. It’s not so much the algorithm has no soul, but that the souls of the humans making it possible are stupendously and tragically underdeveloped.
  • @thomas h. You make my point perfectly. You’re observing that the way a plane flies — by using a turbine to generate thrust from combusting kerosene, for example — is nothing like the way that a bird flies, which is by using the energy from eating plant seeds to contract the muscles in its wings to make them flap. You are absolutely correct in that observation, but it’s also almost utterly irrelevant. And it ignores that, to a first approximation, there’s no difference in the physics you would use to describe a hawk riding a thermal and an airliner gliding (essentially) unpowered in its final descent to the runway. Further, you do yourself a grave disservice in being dismissive of the abilities of thinking machines, in exactly the same way that early skeptics have been dismissive of every new technology in all of human history. Writing would make people dumb; automobiles lacked the intelligence of horses; no computer could possibly beat a chess grandmaster because it can’t comprehend strategy; and on and on and on. Humans aren’t nearly as special as we fool ourselves into believing. If you want to have any hope of acting responsibly in the age of intelligent machines, you’ll have to accept that, like it or not, and whether or not it fits with your preconceived notions of what thinking is and how it is or should be done … machines can and do think, many of them better than you in a great many ways. b&
  • When even tech companies are saying AI is moving too fast, and the articles land on page 1 of the NYT (there's an old reference), I think the greedy will not think twice about exploiting this technology, with no ethical considerations, at all.
  • @nome sane? The problem is it isn't data as we understand it. We know what the datasets are -- they were used to train the AI's. But once trained, the AI is thinking for itself, with results that have surprised everybody.
  • The unique feature of a shoggoth is it can become whatever is needed for a particular job. There's no actual shape so it's not a bad metaphor, if an imperfect image. Shoggoths also turned upon and destroyed their creators, so the cautionary metaphor is in there, too. A shame more Asimov wasn't baked into AI. But then the conflict about how to handle AI in relation to people was key to those stories, too.
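The R.L.H.F. process described a few annotations above (humans score responses, and the scores are fed back into the model) can be made concrete with a toy sketch. The sketch below is a minimal illustration, not any lab's actual pipeline: a tiny linear "reward model" is fit to made-up human preference pairs, and best-of-n selection stands in for the full reinforcement-learning step. All feature choices, example texts, and numbers are hypothetical.

```python
# Toy sketch of the RLHF idea: fit a small "reward model" to human
# preference data, then use it to pick the most-preferred candidate reply.
# Everything here is illustrative; real systems use neural reward models
# and reinforcement learning (e.g., PPO) rather than best-of-n selection.
import math

def features(text: str) -> list[float]:
    # Crude hand-built features standing in for a learned representation:
    # [count of polite words, negative count of harsh words, scaled length].
    words = [t.strip(".,!?") for t in text.lower().split()]
    polite = sum(t in {"please", "thanks", "sorry", "happy"} for t in words)
    harsh = sum(t in {"stupid", "hate", "kill"} for t in words)
    return [float(polite), -float(harsh), len(words) / 10.0]

# 1) Human feedback: (preferred response, rejected response) pairs.
human_prefs = [
    ("Happy to help, thanks for asking!", "That is a stupid question."),
    ("Sorry, I can't help with that.", "I hate requests like this."),
]

# 2) Fit a linear reward model to the preferences with a logistic
#    (Bradley-Terry-style) loss and plain gradient ascent.
w = [0.0, 0.0, 0.0]
for _ in range(200):
    for good, bad in human_prefs:
        fg, fb = features(good), features(bad)
        margin = sum(wi * (g - b) for wi, g, b in zip(w, fg, fb))
        p = 1.0 / (1.0 + math.exp(-margin))        # P(preferred response wins)
        for i in range(3):
            w[i] += 0.1 * (1.0 - p) * (fg[i] - fb[i])

def reward(text: str) -> float:
    return sum(wi * fi for wi, fi in zip(w, features(text)))

# 3) "Feed the scores back": here, simply rank candidate replies by reward
#    and surface the best one.
candidates = [
    "That is a stupid question.",
    "Thanks for asking! Happy to explain.",
    "I hate this topic.",
]
print(max(candidates, key=reward))   # expected: the polite reply
```

In the Shoggoth framing above, the reward model plays the role of the smiley-face mask: it reshapes which outputs get surfaced without changing what the underlying model is.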
Javier E

Opinion | Lina Khan: We Must Regulate A.I. Here's How. - The New York Times - 0 views

  • The last time we found ourselves facing such widespread social change wrought by technology was the onset of the Web 2.0 era in the mid-2000s.
  • Those innovative services, however, came at a steep cost. What we initially conceived of as free services were monetized through extensive surveillance of the people and businesses that used them. The result has been an online economy where access to increasingly essential services is conditioned on the widespread hoarding and sale of our personal data.
  • These business models drove companies to develop endlessly invasive ways to track us, and the Federal Trade Commission would later find reason to believe that several of these companies had broken the law
  • ...10 more annotations...
  • What began as a revolutionary set of technologies ended up concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security.
  • The trajectory of the Web 2.0 era was not inevitable — it was instead shaped by a broad range of policy choices. And we now face another moment of choice. As the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself.
  • the Federal Trade Commission is taking a close look at how we can best achieve our dual mandate to promote fair competition and to protect Americans from unfair or deceptive practices.
  • we already can see several risks. The expanding adoption of A.I. risks further locking in the market dominance of large incumbent technology firms. A handful of powerful businesses control the necessary raw materials that start-ups and other companies rely on to develop and deploy A.I. tools. This includes cloud services and computing power, as well as vast stores of data.
  • Enforcers have the dual responsibility of watching out for the dangers posed by new A.I. technologies while promoting the fair competition needed to ensure the market for these technologies develops lawfully.
  • generative A.I. risks turbocharging fraud. It may not be ready to replace professional writers, but it can already do a vastly better job of crafting a seemingly authentic message than your average con artist — equipping scammers to generate content quickly and cheaply.
  • bots are even being instructed to use words or phrases targeted at specific groups and communities. Scammers, for example, can draft highly targeted spear-phishing emails based on individual users’ social media posts. Alongside tools that create deep fake videos and voice clones, these technologies can be used to facilitate fraud and extortion on a massive scale.
  • we will look not just at the fly-by-night scammers deploying these tools but also at the upstream firms that are enabling them.
  • these A.I. tools are being trained on huge troves of data in ways that are largely unchecked. Because they may be fed information riddled with errors and bias, these technologies risk automating discrimination
  • We once again find ourselves at a key decision point. Can we continue to be the home of world-leading technology without accepting race-to-the-bottom business models and monopolistic control that locks out higher quality products or the next big idea? Yes — if we make the right policy choices.
lilyrashkind

Biden says he heard late about baby formula shortage - The Washington Post - 0 views

  • Biden’s comments came after he met with executives of companies that manufacture infant formula, who told the president they knew the shortage would be severe in February after the closure of an Abbott plant in Michigan. Biden suggested he was not informed until April.
  • The disconnect between the industry’s alertness to the looming crisis and the administration’s lack of awareness was hinted at during the panel discussion itself. Biden asked one panelist directly if his company had been surprised by the “profound effect immediately” of the Abbott closure. “No, sir, we were aware of the general impact that this would have,” said Robert Cleveland, a senior vice president at Reckitt. “From the moment that that recall was announced, we reached out immediately to retail partners like Target and Walmart to tell them this is what we think will happen.”
  • “We have been doing this whole-of-government approach since the recall,” she said at the White House daily press briefing after Biden met with the executives. “We have been working on this for months, for months. We have been taking this incredibly seriously.” When pressed why Biden himself said he was unaware of the “whole-of-government approach,” Jean-Pierre said that Biden “has multiple issues, crises at the moment” and that administration officials often respond to problems before the president is aware.
  • ...4 more annotations...
  • In recent weeks, Biden has scrambled to show he is on top of the matter. He invoked the Defense Production Act to ramp up domestic production of baby formula, and his administration has airlifted supplies from foreign countries to try to address the mushrooming crisis. But the declaration by industry leaders that the scope of the problem was immediately clear to them in February raises questions of why the president was slow to learn of the issue.
  • “Seeing the empty shelves is unacceptable,” she said. “Seeing what families are going through is unacceptable. This is why we have been working 24-7 to make sure that we are using every lever at our disposal to deliver for the American people.”
  • Officials said United Airlines had agreed to transport Kendamil formula for free from Heathrow Airport in London to multiple airports across the United States over a three-week period. The formula will be distributed and available for purchase at selected U.S. retailers nationwide as well as online, the White House said. All told, about 3.7 million 8-ounce bottle equivalents of Kendamil infant formula will be delivered, it said.
  • Still, despite the all-hands-on-deck approach to replenishing the American supply, store shelves continue to be emptier: Data firm IRI reported Tuesday that the nationwide in-stock inventory figure was 76 percent for the week ending May 22, down from 79 percent the week before.
Javier E

The new tech worldview | The Economist - 0 views

  • Sam Altman is almost supine
  • the 37-year-old entrepreneur looks about as laid-back as someone with a galloping mind ever could. Yet the CEO of OpenAI, a startup reportedly valued at nearly $20bn whose mission is to make artificial intelligence a force for good, is not one for light conversation
  • Joe Lonsdale, 40, is nothing like Mr Altman. He’s sitting in the heart of Silicon Valley, dressed in linen with his hair slicked back. The tech investor and entrepreneur, who has helped create four unicorns plus Palantir, a data-analytics firm worth around $15bn that works with soldiers and spooks
  • ...25 more annotations...
  • a “builder class”—a brains trust of youngish idealists, which includes Patrick Collison, co-founder of Stripe, a payments firm valued at $74bn, and other (mostly white and male) techies, who are posing questions that go far beyond the usual interests of Silicon Valley’s titans. They include the future of man and machine, the constraints on economic growth, and the nature of government.
  • They share other similarities. Business provided them with their clout, but doesn’t seem to satisfy their ambition
  • The number of techno-billionaires in America (Mr Collison included) has more than doubled in a decade.
  • Some of them, like the Medicis in medieval Florence, are keen to use their money to bankroll the intellectual ferment
  • The other is Paul Graham, co-founder of Y Combinator, a startup accelerator, whose essays on everything from cities to politics are considered required reading on tech campuses.
  • Mr Altman puts it more optimistically: “The iPhone and cloud computing enabled a Cambrian explosion of new technology. Some things went right and some went wrong. But one thing that went weirdly right is a lot of people got rich and said ‘OK, now what?’”
  • A belief that with money and brains they can reboot social progress is the essence of this new mindset, making it resolutely upbeat
  • The question is: are the rest of them further evidence of the tech industry’s hubristic decadence? Or do they reflect the start of a welcome capacity for renewal?
  • Two well-known entrepreneurs from that era provided the intellectual seed capital for some of today’s techno nerds.
  • Mr Thiel, a would-be libertarian philosopher and investor
  • This cohort of eggheads starts from common ground: frustration with what they see as sluggish progress in the world around them.
  • Yet the impact could ultimately be positive. Frustrations with a sluggish society have encouraged them to put their money and brains to work on problems from science funding and the redistribution of wealth to entirely new universities. Their exaltation of science may encourage a greater focus on hard tech
  • the rationalist movement has hit the mainstream. The result is a fascination with big ideas that its advocates believe goes beyond simply rose-tinted tech utopianism
  • A burgeoning example of this is “progress studies”, a movement that Mr Collison and Tyler Cowen, an economist and seer of the tech set, advocated for in an article in the Atlantic in 2019
  • Progress, they think, is a combination of economic, technological and cultural advancement—and deserves its own field of study
  • There are other examples of this expansive worldview. In an essay in 2021 Mr Altman set out a vision that he called “Moore’s Law for Everything”, based on similar logic to the semiconductor revolution. In it, he predicted that smart machines, building ever smarter replacements, would in the coming decades outcompete humans for work. This would create phenomenal wealth for some, obliterate wages for others, and require a vast overhaul of taxation and redistribution
  • His two bets, on OpenAI and nuclear fusion, have become fashionable of late—the former’s chatbot, ChatGPT, is all the rage. He has invested $375m in Helion, a company that aims to build a fusion reactor.
  • Mr Lonsdale, who shares a libertarian streak with Mr Thiel, has focused attention on trying to fix the shortcomings of society and government. In an essay this year called “In Defence of Us”, he argues against “historical nihilism”, or an excessive focus on the failures of the West.
  • With a soft spot for Roman philosophy, he has created the Cicero Institute in Austin that aims to inject free-market principles such as competition and transparency into public policy.
  • He is also bringing the startup culture to academia, backing a new place of learning called the University of Austin, which emphasises free speech.
  • All three have business ties to their mentors. As a teen, Mr Altman was part of the first cohort of founders in Mr Graham’s Y Combinator, which went on to back successes such as Airbnb and Dropbox. In 2014 he replaced him as its president, and for a while counted Mr Thiel as a partner (Mr Altman keeps an original manuscript of Mr Thiel’s book “Zero to One” in his library). Mr Thiel was also an early backer of Stripe, founded by Mr Collison and his brother, John. Mr Graham saw promise in Patrick Collison while the latter was still at school. He was soon invited to join Y Combinator. Mr Graham remains a fan: “If you dropped Patrick on a desert island, he would figure out how to reproduce the Industrial Revolution,”
  • While at university, Mr Lonsdale edited the Stanford Review, a contrarian publication co-founded by Mr Thiel. He went on to work for his mentor and the two men eventually helped found Palantir. He still calls Mr Thiel “a genius”—though he claims these days to be less “cynical” than his guru.
  • “The tech industry has always told these grand stories about itself,” says Adrian Daub of Stanford University and author of the book, “What Tech Calls Thinking”. Mr Daub sees it as a way of convincing recruits and investors to bet on their risky projects. “It’s incredibly convenient for their business models.”
  • In the 2000s Mr Thiel supported the emergence of a small community of online bloggers, self-named the “rationalists”, who were focused on removing cognitive biases from thinking (Mr Thiel has since distanced himself). That intellectual heritage dates even further back, to “cypherpunks”, who noodled about cryptography, as well as “extropians”, who believed in improving the human condition through life extensions
  • Silicon Valley has shown an uncanny ability to reinvent itself in the past.
Javier E

Dave Ramsey Tells Millions What to Do With Their Money. People Under 40 Say He's Wrong.... - 0 views

  • Ramsey, the well-known and intensely followed 63-year-old conservative Christian radio host, has 4.4 million Instagram followers, 1.9 million TikTok followers and legions more who listen to his radio shows and podcasts.
  • His message is brutal and direct: Avoid debt at all costs. Pay for everything in cash. Embrace frugality.
  • Plenty of 20- and 30-year-olds are pushing back, largely on TikTok. The hashtag #daveramseywouldntapprove, for instance, has 66.8 million views. Many say they don’t want to eat rice and beans every night—a popular Ramsey trope—or hold down multiple jobs to pay off loans. They also say Ramsey is out of touch with their reality.
  • ...16 more annotations...
  • Rising inflation has led to surging prices for groceries, cars and many essentials. The cost of a college education has skyrocketed in two decades, with the average student debt for federal loans at $37,000, according to the Education Department. Overall debts for Americans in their 30s jumped 27% from late 2019 to early 2023—steeper than for any other age group.
  • home prices have risen considerably, while wages haven’t kept pace.
  • “What Dave Ramsey really misses is any kind of social context,” says Morgan Sanner.
  • She began paying off $48,000 in student loans (a Ramsey do) and also took out a loan to buy a 2016 Honda (a Ramsey don’t). Her rationale was that it was safer to pay extra for a more reliable car than a junker she could buy with cash.
  • She feels these sorts of real-life decisions don’t factor into his advice.
  • When she saw a comment from Ramsey online about how people receiving pandemic stimulus payments were “pretty much screwed already,” Israel felt it came across as shaming people. The pandemic shutdowns ended a decadelong economic expansion for Black Americans, a disproportionate number of whom lost their jobs and relied on those checks.
  • “Moralizing financial decisions is very damaging to marginalized groups,” says Israel, who is Black.
  • Many young adults scratch their heads over his advice that people should let their credit scores dwindle and die.
  • People need a good credit score, says Mandy Phillips, a 39-year-old residential mortgage loan originator in Redding, Calif. She uses TikTok and other social media to educate millennials and Gen Z about home buying. Scores are vital when applying for mortgages and rentals.
  • She also takes issue with Ramsey’s advice to only obtain a home loan if you can take out a 15-year fixed-rate mortgage with a down payment of at least 10%. Few younger buyers can pay the large monthly bills of shorter-term mortgages.
  • “That may have worked years ago in the ’80s and ’90s, but that’s not something that is achievable for the average American,” Phillips says.
  • Housing is a particularly hot-button topic. He advises people to only buy a house with their lawfully wedded spouse. Yet many young adults are pooling their finances with partners, friends or roommates to buy their first homes. 
  • Ramsey is perhaps best known for advocating a “debt snowball method”: People with multiple loans pay off the smallest balances first, regardless of interest rate. As you knock out each loan, he says, the money you have to put toward larger debt snowballs. Seeing small wins motivates people to keep going, he says. Conventional economic theory would be to pay off the highest-interest loans first, says James Choi, a finance professor at the Yale School of Management, who has studied the advice of popular finance gurus. (A small sketch contrasting the two orderings follows this list.)
  • Ramsey’s save-not-spend message sounds logical, young adults say. It’s his all-or-nothing approach that doesn’t work for them.
  • Kate Hindman, a 31-year-old administrative assistant in Pasadena, Calif., who has taken an anti-Ramsey stance on TikTok, ended up with $30,000 in credit-card debt after she and her husband faced income-reducing job changes. They’ve since turned it into a consolidation loan with an 8% interest rate and pay about $1,200 a month.
  • She wonders if the debt aversion is generational. Perhaps younger people are less willing to make huge sacrifices to be debt-free. Maybe carrying some amount of debt forever is a new normal.
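The disagreement in the annotations above between Ramsey's "debt snowball" and the ordering conventional theory prefers (the "avalanche," highest interest rate first) comes down to a sorting rule. The sketch below uses made-up balances and rates purely to illustrate the two orderings and why the avalanche minimizes interest paid.

```python
# Toy comparison of the two payoff orderings discussed above.
# Snowball (Ramsey): attack the smallest balance first, regardless of rate.
# Avalanche (conventional theory): attack the highest interest rate first.
# The balances and rates below are hypothetical.

debts = [
    {"name": "credit card",  "balance": 6_000,  "rate": 0.24},
    {"name": "car loan",     "balance": 12_000, "rate": 0.07},
    {"name": "student loan", "balance": 3_000,  "rate": 0.05},
]

snowball = sorted(debts, key=lambda d: d["balance"])               # smallest balance first
avalanche = sorted(debts, key=lambda d: d["rate"], reverse=True)   # highest rate first

print("snowball order: ", [d["name"] for d in snowball])
#  -> ['student loan', 'credit card', 'car loan']
print("avalanche order:", [d["name"] for d in avalanche])
#  -> ['credit card', 'car loan', 'student loan']

# Rough annual interest on each untouched balance shows why economists
# prefer the avalanche: the 24% card costs far more per year than the 5% loan.
for d in debts:
    print(f'{d["name"]}: ~${d["balance"] * d["rate"]:,.0f} interest per year')
```

The trade-off is the one Choi points to: the snowball usually costs more in total interest, while Ramsey's case for it is behavioral, since clearing a small balance early keeps people motivated.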