TOK Friends: Group items tagged "philosopher"

Javier E

Living the life authentic: Bernard Williams on Paul Gauguin | Aeon Essays - 0 views

  • Williams invites us to see Gauguin’s meaning in life as deeply intertwined with his artistic ambition. His art is, to use Williams’s term for such meaning-giving enterprises, his ground project
  • This is what a ground project does, according to Williams: it gives a reason, not just given that you are alive, but a reason to be alive in the first place.
  • The desires and goals at the heart of what Williams calls a ground project form a fundamental part of one’s identity, and in that sense being true to one’s deepest desires is being true to who one is most deeply.
  • We see here the enormously influential cultural ideal mentioned at the outset: the purpose of life is to be authentic, where that means finding out who you are and living accordingly. Gauguin, in other words, was a cultural prototype for a conception of life’s meaning that today has widespread appeal around the world.
  • Williams, however, thinks that Gauguin’s eventual success as a painter constitutes a form of moral luck, in that his artistic achievement justifies what he did. It provides a justification that not everyone will accept, but one that can make sense to Gauguin himself, and perhaps to others
  • In his essay ‘Moral Luck’ (1976), Williams discusses Paul Gauguin’s decision to leave Paris in order to move to Tahiti where he hoped he could become a great painter. Gauguin left behind – basically abandoned – his wife and children
  • ‘If there’s one theme in all my work it’s about authenticity and self-expression,’ said the philosopher Bernard Williams in an interview with The Guardian in 2002
Javier E

Opinion | The Ugly Secrets Behind the Costco Chicken - The New York Times - 0 views

  • we must guard our moral compasses. And some day, I think, future generations will look back at our mistreatment of livestock and poultry with pain and bafflement. They will wonder how we in the early 21st century could have been so oblivious to the cruelties that delivered $4.99 chickens to a Costco rotisserie.
  • Torture a single chicken in your backyard, and you risk arrest. Abuse tens of millions of them? Why, that’s agribusiness.
  • Those commendable savings have been achieved in part by developing chickens that effectively are bred to suffer. Scientists have created what are sometimes called “exploding chickens” that put on weight at a monstrous clip, about six times as fast as chickens in 1925. The journal Poultry Science once calculated that if humans grew at the same rate as these chickens, a 2-month-old baby would weigh 660 pounds.
  • When Herbert Hoover talked about putting “a chicken in every pot,” chicken was a luxury: In 1930, whole dressed chicken retailed in the United States for $7 a pound in today’s dollars. In contrast, that Costco bird now sells for less than $2 a pound.
  • It’s not that Costco chickens suffer more than Walmart or Safeway birds. All are part of an industrial agricultural system that, at the expense of animal well-being, has become extremely efficient at producing cheap protein.
  • “They’re living on their own feces, with no fresh air and no natural light,” said Leah Garcés, the president of Mercy for Animals. “I don’t think it’s what a Costco customer expects.”
  • Garcés wants Costco to sign up for the “Better Chicken Commitment,” an industry promise to work toward slightly better standards for industrial agriculture. For example, each adult chicken would get at least one square foot of space, there would be some natural light and the company would avoid breeds that put on weight that the legs can’t support.
  • Burger King, Popeyes, Chipotle, Denny’s and some 200 other food companies have embraced the Better Chicken Commitment, but grocery chains generally have not, with the exception of Whole Foods.
  • Yet what struck me was that Costco completely accepts that animal welfare should be an important consideration. We may disagree about whether existing standards are adequate, but the march of moral progress on animal rights is unmistakable.
  • When I began writing about these issues, I never guessed that McDonald’s would commit to cage-free eggs, that California would legislate protections for mother pigs, that there would be court fights about whether an elephant has legal “personhood,” and that Pope Francis would suggest that animals go to heaven and that the Virgin Mary “grieves for the sufferings” of mistreated livestock.
  • I don’t pretend that there are neat solutions. We raised a flock of chickens on our family farm when I was a kid, and we managed to be neither efficient nor humane. Many birds died, and being eaten by a coyote wasn’t such a pleasant way to go, either. There’s no need for a misplaced nostalgia for traditional farming practices, just a pragmatic acknowledgment of animal suffering and trade-offs to reduce it.
  • We treat poultry particularly poorly because humans identify less with birds than with fellow mammals. We may empathize with a calf with big eyes, but less so with species that we dismiss as “bird brains.”
  • Still, the issue remains as the English philosopher Jeremy Bentham posed it in 1789: “The question is not, Can they reason? nor, Can they talk? but, Can they suffer?”
Javier E

The Economic Case for Regulating Social Media - The New York Times - 0 views

  • Social media platforms like Facebook, YouTube and Twitter generate revenue by using detailed behavioral information to direct ads to individual users.
  • this bland description of their business model fails to convey even a hint of its profound threat to the nation’s political and social stability.
  • legislators in Congress to propose the breakup of some tech firms, along with other traditional antitrust measures. But the main hazard posed by these platforms is not aggressive pricing, abusive service or other ills often associated with monopoly.
  • Instead, it is their contribution to the spread of misinformation, hate speech and conspiracy theories.
  • digital platforms, since the marginal cost of serving additional consumers is essentially zero. Because the initial costs of producing a platform’s content are substantial, and because any company’s first goal is to remain solvent, it cannot just give stuff away. Even so, when price exceeds marginal cost, competition relentlessly pressures rival publishers to cut prices — eventually all the way to zero. This, in a nutshell, is the publisher’s dilemma in the digital age. (A minimal sketch of this price-undercutting dynamic appears after this list.)
  • These firms make money not by charging for access to content but by displaying it with finely targeted ads based on the specific types of things people have already chosen to view. If the conscious intent were to undermine social and political stability, this business model could hardly be a more effective weapon.
  • The algorithms that choose individual-specific content are crafted to maximize the time people spend on a platform
  • As the developers concede, Facebook’s algorithms are addictive by design and exploit negative emotional triggers. Platform addiction drives earnings, and hate speech, lies and conspiracy theories reliably boost addiction.
  • the subscription model isn’t fully efficient: Any positive fee would inevitably exclude at least some who would value access but not enough to pay the fee
  • a conservative think tank, says, for example, that government has no business second-guessing people’s judgments about what to post or read on social media.
  • That position would be easier to defend in a world where individual choices had no adverse impact on others. But negative spillover effects are in fact quite common
  • individual and collective incentives about what to post or read on social media often diverge sharply.
  • There is simply no presumption that what spreads on these platforms best serves even the individual’s own narrow interests, much less those of society as a whole.
  • a simpler step may hold greater promise: Platforms could be required to abandon that model in favor of one relying on subscriptions, whereby members gain access to content in return for a modest recurring fee.
  • Major newspapers have done well under this model, which is also making inroads in book publishing. The subscription model greatly weakens the incentive to offer algorithmically driven addictive content provided by individuals, editorial boards or other sources.
  • Careful studies have shown that Facebook’s algorithms have increased political polarization significantly
  • More worrisome, those excluded would come disproportionately from low-income groups. Such objections might be addressed specifically — perhaps with a modest tax credit to offset subscription fees — or in a more general way, by making the social safety net more generous.
  • Adam Smith, the 18th-century Scottish philosopher widely considered the father of economics, is celebrated for his “invisible hand” theory, which describes conditions under which market incentives promote socially benign outcomes. Many of his most ardent admirers may view steps to constrain the behavior of social media platforms as regulatory overreach.
  • But Smith’s remarkable insight was actually more nuanced: Market forces often promote society’s welfare, but not always. Indeed, as he saw clearly, individual interests are often squarely at odds with collective aspirations, and in many such instances it is in society’s interest to intervene. The current information crisis is a case in point.
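The annotation above on zero marginal cost describes a price-undercutting race. As a purely illustrative aid (not from the Times column), here is a minimal Python sketch of that dynamic under assumed numbers: two rival publishers keep shaving the going price until it reaches marginal cost, which for digital content is taken to be zero. The function name, the starting price and the undercut step are all hypothetical.

  # Minimal, illustrative sketch of the undercutting dynamic described above.
  # All numbers are assumptions, not figures from the article.
  def undercutting_rounds(initial_price, marginal_cost=0.0, undercut_step=0.50, max_rounds=100):
      """Two publishers alternately undercut each other until price hits marginal cost."""
      price = initial_price
      history = [price]
      for _ in range(max_rounds):
          next_price = max(marginal_cost, price - undercut_step)
          if next_price == price:      # no profitable undercut left
              break
          price = next_price
          history.append(price)
      return history

  if __name__ == "__main__":
      path = undercutting_rounds(initial_price=5.00)
      print("price path:", ["%.2f" % p for p in path])
      # With zero marginal cost the path ends at 0.00: the "publisher's dilemma."

With a positive marginal cost the same loop stops at that cost instead of zero, which is why the argument treats the zero-marginal-cost property of digital platforms as the crux of the dilemma.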
Javier E

He Wants to Save Classics From Whiteness. Can the Field Survive? - The New York Times - 0 views

  • Padilla laid out an indictment of his field. “If one were intentionally to design a discipline whose institutional organs and gatekeeping protocols were explicitly aimed at disavowing the legitimate status of scholars of color,” he said, “one could not do better than what classics has done.”
  • Padilla believes that classics is so entangled with white supremacy as to be inseparable from it. “Far from being extrinsic to the study of Greco-Roman antiquity,” he has written, “the production of whiteness turns on closer examination to reside in the very marrows of classics.”
  • Rather than kowtowing to criticism, Williams said, “maybe we should start defending our discipline.” She protested that it was imperative to stand up for the classics as the political, literary and philosophical foundation of European and American culture: “It’s Western civilization. It matters because it’s the West.” Hadn’t classics given us the concepts of liberty, equality and democracy?
  • Williams ceded the microphone, and Padilla was able to speak. “Here’s what I have to say about the vision of classics that you outlined,” he said. “I want nothing to do with it. I hope the field dies that you’ve outlined, and that it dies as swiftly as possible.”
  • “I believe in merit. I don’t look at the color of the author.” She pointed a finger in Padilla’s direction. “You may have got your job because you’re Black,” Williams said, “but I would prefer to think you got your job because of merit.”
  • What he did find was a slim blue-and-white textbook titled “How People Lived in Ancient Greece and Rome.” “Western civilization was formed from the union of early Greek wisdom and the highly organized legal minds of early Rome,” the book began. “The Greek belief in a person’s ability to use his powers of reason, coupled with Roman faith in military strength, produced a result that has come to us as a legacy, or gift from the past.” Thirty years later, Padilla can still recite those opening lines.
  • In 2017, he published a paper in the journal Classical Antiquity that compared evidence from antiquity and the Black Atlantic to draw a more coherent picture of the religious life of the Roman enslaved. “It will not do merely to adopt a pose of ‘righteous indignation’ at the distortions and gaps in the archive,” he wrote. “There are tools available for the effective recovery of the religious experiences of the enslaved, provided we work with these tools carefully and honestly.”
  • Padilla sensed that his pursuit of classics had displaced other parts of his identity, just as classics and “Western civilization” had displaced other cultures and forms of knowledge. Recovering them would be essential to dismantling the white-supremacist framework in which both he and classics had become trapped. “I had to actively engage in the decolonization of my mind,” he told me.
  • He also gravitated toward contemporary scholars like José Esteban Muñoz, Lorgia García Peña and Saidiya Hartman, who speak of race not as a physical fact but as a ghostly system of ...
  • In response to rising anti-immigrant sentiment in Europe and the United States, Mary Beard, perhaps the most famous classicist alive, wrote in The Wall Street Journal that the Romans “would have been puzzled by our modern problems with migration and asylum,” because the empire was founded on the “principles of incorporation and of the free movement of people.”
  • In November 2015, he wrote an essay for Eidolon, an online classics journal, clarifying that in Rome, as in the United States, paeans to multiculturalism coexisted with hatred of foreigners. Defending a client in court, Cicero argued that “denying foreigners access to our city is patently inhumane,” but ancient authors also recount the expulsions of whole “suspect” populations, including a roundup of Jews in 139 B.C., who were not considered “suitable enough to live alongside Romans.”
  • The job of classicists is not to “point out the howlers,” he said on a 2017 panel. “To simply take the position of the teacher, the qualified classicist who knows things and can point to these mistakes, is not sufficient.”
  • Dismantling structures of power that have been shored up by the classical tradition will require more than fact-checking; it will require writing an entirely new story about antiquity, and about who we are today
  • To find that story, Padilla is advocating reforms that would “explode the canon” and “overhaul the discipline from nuts to bolts,” including doing away with the label “classics” altogether.
  • “What I want to be thinking about in the next few weeks,” he told them, “is how we can be telling the story of the early Roman Empire not just through a variety of sources but through a variety of persons.” He asked the students to consider the lives behind the identities he had assigned them, and the way those lives had been shaped by the machinery of empire, which, through military conquest, enslavement and trade, creates the conditions for the large-scale movement of human beings.
  • ultimately, he decided that leaving enslaved characters out of the role play was an act of care. “I’m not yet ready to turn to a student and say, ‘You are going to be a slave.’”
  • Privately, even some sympathetic classicists worry that Padilla’s approach will only hasten the field’s decline. “I’ve spoken to undergrad majors who say that they feel ashamed to tell their friends they’re studying classics,”
  • “I very much admire Dan-el’s work, and like him, I deplore the lack of diversity in the classical profession,” Mary Beard told me via email. But “to ‘condemn’ classical culture would be as simplistic as to offer it unconditional admiration.”
  • In a 2019 talk, Beard argued that “although classics may become politicized, it doesn’t actually have a politics,” meaning that, like the Bible, the classical tradition is a language of authority — a vocabulary that can be used for good or ill by would-be emancipators and oppressors alike.
  • Over the centuries, classical civilization has acted as a model for people of many backgrounds, who turned it into a matrix through which they formed and debated ideas about beauty, ethics, power, nature, selfhood, citizenship and, of course, race
  • Anthony Grafton, the great Renaissance scholar, put it this way in his preface to “The Classical Tradition”: “An exhaustive exposition of the ways in which the world has defined itself with regard to Greco-Roman antiquity would be nothing less than a comprehensive history of the world.”
  • Classics as we know it today is a creation of the 18th and 19th centuries. During that period, as European universities emancipated themselves from the control of the church, the study of Greece and Rome gave the Continent its new, secular origin story. Greek and Latin writings emerged as a competitor to the Bible’s moral authority, which lent them a liberatory power
  • Historians stress that such ideas cannot be separated from the discourses of nationalism, colorism and progress that were taking shape during the modern colonial period, as Europeans came into contact with other peoples and their traditions. “The whiter the body is, the more beautiful it is,” Winckelmann wrote.
  • While Renaissance scholars were fascinated by the multiplicity of cultures in the ancient world, Enlightenment thinkers created a hierarchy with Greece and Rome, coded as white, on top, and everything else below.
  • Jefferson, along with most wealthy young men of his time, studied classics at college, where students often spent half their time reading and translating Greek and Roman texts. “Next to Christianity,” writes Caroline Winterer, a historian at Stanford, “the central intellectual project in America before the late 19th century was classicism.”
  • Of the 2.5 million people living in America in 1776, perhaps only 3,000 had gone to college, but that number included many of the founders
  • They saw classical civilization as uniquely educative — a “lamp of experience,” in the words of Patrick Henry, that could light the path to a more perfect union. However true it was, subsequent generations would come to believe, as Hannah Arendt wrote in “On Revolution,” that “without the classical example … none of the men of the Revolution on either side of the Atlantic would have possessed the courage for what then turned out to be unprecedented action.”
  • Comparisons between the United States and the Roman Empire became popular as the country emerged as a global power. Even after Latin and Greek were struck from college-entrance exams, the proliferation of courses on “great books” and Western civilization, in which classical texts were read in translation, helped create a coherent national story after the shocks of industrialization and global warfare.
  • even as the classics were pulled apart, laughed at and transformed, they continued to form the raw material with which many artists shaped their visions of modernity.
  • Over the centuries, thinkers as disparate as John Adams and Simone Weil have likened classical antiquity to a mirror. Generations of intellectuals, among them feminist, queer and Black scholars, have seen something of themselves in classical texts, flashes of recognition that held a kind of liberatory promise
  • The language that is used to describe the presence of classical antiquity in the world today — the classical tradition, legacy or heritage — contains within it the idea of a special, quasi-genetic relationship. In his lecture “There Is No Such Thing as Western Civilization,” Kwame Anthony Appiah (this magazine’s Ethicist columnist) mockingly describes the belief in such a kinship as the belief in a “golden nugget” of insight — a precious birthright and shimmering sign of greatness — that white Americans and Europeans imagine has been passed down to them from the ancients.
  • To see classics the way Padilla sees it means breaking the mirror; it means condemning the classical legacy as one of the most harmful stories we’ve told ourselves
  • Padilla is wary of colleagues who cite the radical uses of classics as a way to forestall change; he believes that such examples have been outmatched by the field’s long alliance with the forces of dominance and oppression.
  • Classics and whiteness are the bones and sinew of the same body; they grew strong together, and they may have to die together. Classics deserves to survive only if it can become “a site of contestation” for the communities who have been denigrated by it in the past.
  • if classics fails his test, Padilla and others are ready to give it up. “I would get rid of classics altogether,” Walter Scheidel, another of Padilla’s former advisers at Stanford, told me. “I don’t think it should exist as an academic field.”
  • One way to get rid of classics would be to dissolve its faculties and reassign their members to history, archaeology and language departments.
  • many classicists are advocating softer approaches to reforming the discipline, placing the emphasis on expanding its borders. Schools including Howard and Emory have integrated classics with Ancient Mediterranean studies, turning to look across the sea at Egypt, Anatolia, the Levant and North Africa. The change is a declaration of purpose: to leave behind the hierarchies of the Enlightenment and to move back toward the Renaissance model of the ancient world as a place of diversity and mixture.
  • Ian Morris put it more bluntly. “Classics is a Euro-American foundation myth,” Morris said to me. “Do we really want that sort of thing?”
  • “There’s a more interesting story to be told about the history of what we call the West, the history of humanity, without valorizing particular cultures in it,” said Josephine Quinn, a professor of ancient history at Oxford. “It seems to me the really crucial mover in history is always the relationship between people, between cultures.”
  • “In some moods, I feel that this is just a moment of despair, and people are trying to find significance even if it only comes from self-accusation,” he told me. “I’m not sure that there is a discipline that is exempt from the fact that it is part of the history of this country. How distinctly wicked is classics? I don’t know that it is.”
  • “One of the dubious successes of my generation is that it did break the canon,” Richlin told me. “I don’t think we could believe at the time that we would be putting ourselves out of business, but we did.” She added: “If they blew up the classics departments, that would really be the end.”
  • Padilla, like Douglass, now sees the moment of absorption into the classical, literary tradition as simultaneous with his apprehension of racial difference; he can no longer find pride or comfort in having used it to bring himself out of poverty.
  • “Claiming dignity within this system of structural oppression,” Padilla has said, “requires full buy-in into its logic of valuation.” He refuses to “praise the architects of that trauma as having done right by you at the end.”
  • Last June, as racial-justice protests unfolded across the nation, Padilla turned his attention to arenas beyond classics. He and his co-authors — the astrophysicist Jenny Greene, the literary theorist Andrew Cole and the poet Tracy K. Smith — began writing their open letter to Princeton with 48 proposals for reform. “Anti-Blackness is foundational to America,” the letter began. “Indifference to the effects of racism on this campus has allowed legitimate demands for institutional support and redress in the face of microaggression and outright racist incidents to go long unmet.”
  • Padilla believes that the uproar over free speech is misguided. “I don’t see things like free speech or the exchange of ideas as ends in themselves,” he told me. “I have to be honest about that. I see them as a means to the end of human flourishing.”
  • “There is a certain kind of classicist who will look on what transpired and say, ‘Oh, that’s not us,’” Padilla said when we spoke recently. “What is of interest to me is why is it so imperative for classicists of a certain stripe to make this discursive move? ‘This is not us.’”
  • Joel Christensen, the Brandeis professor, now feels that it is his “moral and ethical and intellectual responsibility” to teach classics in a way that exposes its racist history. “Otherwise we’re just participating in propaganda,”
  • Christensen, who is 42, was in graduate school before he had his “crisis of faith,” and he understands the fear that many classicists may experience at being asked to rewrite the narrative of their life’s work. But, he warned, “that future is coming, with or without Dan-el.”
  • On Jan. 6, Padilla turned on the television minutes after the windows of the Capitol were broken. In the crowd, he saw a man in a Greek helmet with TRUMP 2020 painted in white. He saw a man in a T-shirt bearing a golden eagle on a fasces — symbols of Roman law and governance — below the logo 6MWE, which stands for “Six Million Wasn’t Enough.”
Javier E

The Cancel Culture Checklist - Persuasion - 0 views

  • a third of Americans say that they are personally worried about losing their jobs or missing out on career opportunities if they express their real political opinions.
  • Cancel culture now poses a real threat to intellectual freedom in the United States.
  • Americans in all walks of life have been publicly shamed, pressured into ritualistic apologies or summarily fired
  • But critics of the critics of cancel culture make a powerful retort. Accusing others of canceling can, they claim, be a way to stigmatize legitimate criticism. As Hannah Giorgis writes in the Atlantic, “critical tweets are not censorship.”
  • So what, exactly, does a cancellation consist of? And how does it differ from the exercise of free speech and robust critical debate?
  • At a conceptual level, the difference is clear. Criticism marshals evidence and arguments in a rational effort to persuade.
  • Canceling, by contrast, seeks to organize and manipulate the social or media environment in order to isolate, deplatform or intimidate ideological opponents
  • its intent—or at least its predictable outcome—is to coerce conformity and reduce the scope for forms of criticism that are not sanctioned by the prevailing consensus of some local majority.
  • In practice, however, telling canceling apart from criticism can be difficult because both take the form of criticizing others.
  • The more signs you see, the more certain you can be that you are looking at a cancel campaign.
  • A better approach might therefore be diagnostic. Like the symptoms of cancer, the hallmarks of a cancellation are many. Though not all instances involve every single characteristic, they all involve some of its key attributes.
  • Six warning signs make up my personal checklist for cancel culture.
  • Punitiveness
  • A critical culture seeks to correct rather than punish. In science, the penalty for being wrong is not that you lose your job or your friends. Normally, the only penalty is that you lose the argument
  • Canceling, by contrast, seeks to punish rather than correct—and often for a single misstep rather than a long track record of failure
  • Deplatforming
  • A critical culture tolerates dissent rather than silencing it. It understands that dissent can seem obnoxious, harmful, hateful and, yes, unsafe.
  • Canceling, by contrast, seeks to shut up and shout down its targets. Cancelers often define the mere act of disagreeing with them as a threat to their safety or even an act of violence
  • Organization
  • Critical culture relies on persuasion. The way to win an argument is to convince others that you are right.
  • By contrast, it’s common to see cancelers organize hundreds of petition-signers or thousands of social media users to dig up and prosecute an indictment.
  • Secondary Boycotts
  • With its commitments to exploring a wide range of ideas and correcting rather than coercing the errant, a critical culture sees no value in instilling a climate of fear
  • But instilling fear is what canceling is all about. By choosing targets unpredictably (almost anything can trigger a campaign), providing no safe harbors (even conformists can get hit), and implicitly threatening anyone who sides with those who are targeted, canceling sends the message: “you could be next.”
  • Moral Grandstanding
  • Precisely because speech can be hurtful, critical culture discourages extreme rhetoric. It encourages people to listen to each other, to use evidence and argumentation, to behave reasonably and to avoid personal attacks.
  • Cancel culture is much more invested in what philosophers Justin Tosi and Brandon Warmke call “moral grandstanding”: the display of moral outrage to impress one’s peer group, dominate others, or both
  • Truthiness
  • Concern for accuracy is the north star of a critical culture. Not everyone gets every fact right, nor do people always agree on what is true; and yet people in a critical culture try to present their own and others’ viewpoints honestly and accurately.
  • canceling is not about seeking truth or persuading others; it is a form of information warfare, in which truthiness suffices if it serves the cause.
  • Those are my six warning signs. If you spot one or two, you should fear that a canceling may be happening; if you see five or six, you can be sure.
  • Though our critics like to claim that those of us who worry about cancel culture just don’t like being criticized on the internet, cancel culture is all too real. And though it may at times bear a superficial resemblance to critical culture, the two are diametrically opposed—and not so very difficult to tell apart.
Javier E

On the Shortness of Life 2.0 - by Peter Juul - The Liberal Patriot - 0 views

  • In Four Thousand Weeks: Time Management for Mortals, writer and regular Guardian columnist Oliver Burkeman faithfully carries the spirit of Seneca’s classic essay forward
  • It’s a deft and eclectic synthesis of ancient and modern thinking about how humanity can come to terms with our limited time on Earth – the title derives from the length of the average human lifespan – ranging intellectually from ancient Greek and Roman philosophers like Seneca to modern-day Buddhist and existentialist thinkers.
  • he only touches on politics briefly and sporadically throughout the book’s 245 pages. But those of us in politics and policy – whatever capacity we find ourselves in – can learn quite a bit
  • defined by Burkeman as “a machine for misusing your life.” Social media platforms like Twitter and Facebook don’t just distract us from more important matters, he argues, “they change how we’re defining ‘important matters’ in the first place.”
  • Social media also amounts to “a machine for getting you to care about too many things, even if they’re each indisputably worthwhile.” Hence the urge to depict every policy problem as an urgent if not existential crisis
  • social media has turned all of us into “angrier, less empathetic, more anxious or more numbed out” versions of ourselves.
  • our political and policy debates tend towards what Burkeman calls “paralyzing grandiosity” – the false notion that in the face of problems like climate change, economic inequality, and ongoing threats to democracy “only the most revolutionary, world-transforming causes are worth fighting for.” It’s a sentiment that derives from and reinforces catastrophism and absolutism
  • Four Thousand Weeks is filled to the brim with practical advice that we can easily adapt
  • Embrace “radical incrementalism.”
  • we lack the patience to tolerate the fact that most of the things we want to happen won’t occur in one fell swoop.
  • We’ve got to resist the need for speed and desire for rapid resolution of problems, letting them instead take the time they take. In part, that means accepting even limited progress rather than giving up and growing cynical
  • Take a break
  • Burkeman’s advice to rest for rest’s sake, “to spend some of our time, that is, on activities in which the only thing we’re trying to get from them is the doing itself.”
  • Burkeman suggests we find some hobby we enjoy for its own sake, not because there’s some benefit we think we can derive from it.
  • When we somewhat sheepishly admit to a hobby, he writes, “that’s a sign you’re doing it for its own sake, rather than some socially sanctioned outcome.”
  • The joy we find in our hobbies can bleed into other parts of our lives as well, and if they’re more social in nature that can help build relationships unrelated to politics and policy that are necessary to make democracy work.
  • “Consolidate your caring” and think small. “To make a difference,” Burkeman argues, “you must focus your finite capacity for care.”
  • What matters is that we make things slightly better with our contributions and actions, not that we solve all the world’s problems at once.
Javier E

Why Is It So Hard to Be Rational? | The New Yorker - 0 views

  • an unusually large number of books about rationality were being published this year, among them Steven Pinker’s “Rationality: What It Is, Why It Seems Scarce, Why It Matters” (Viking) and Julia Galef’s “The Scout Mindset: Why Some People See Things Clearly and Others Don’t” (Portfolio).
  • When the world changes quickly, we need strategies for understanding it. We hope, reasonably, that rational people will be more careful, honest, truthful, fair-minded, curious, and right than irrational ones.
  • And yet rationality has sharp edges that make it hard to put at the center of one’s life
  • You might be well-intentioned, rational, and mistaken, simply because so much in our thinking can go wrong. (“RATIONAL, adj.: Devoid of all delusions save those of observation, experience and reflection,”
  • You might be rational and self-deceptive, because telling yourself that you are rational can itself become a source of bias. It’s possible that you are trying to appear rational only because you want to impress people; or that you are more rational about some things (your job) than others (your kids); or that your rationality gives way to rancor as soon as your ideas are challenged. Perhaps you irrationally insist on answering difficult questions yourself when you’d be better off trusting the expert consensus.
  • Not just individuals but societies can fall prey to false or compromised rationality. In a 2014 book, “The Revolt of the Public and the Crisis of Authority in the New Millennium,” Martin Gurri, a C.I.A. analyst turned libertarian social thinker, argued that the unmasking of allegedly pseudo-rational institutions had become the central drama of our age: people around the world, having concluded that the bigwigs in our colleges, newsrooms, and legislatures were better at appearing rational than at being so, had embraced a nihilist populism that sees all forms of public rationality as suspect.
  • modern life would be impossible without those rational systems; we must improve them, not reject them. We have no choice but to wrestle with rationality—an ideal that, the sociologist Max Weber wrote, “contains within itself a world of contradictions.”
  • Where others might be completely convinced that G.M.O.s are bad, or that Jack is trustworthy, or that the enemy is Eurasia, a Bayesian assigns probabilities to these propositions. She doesn’t build an immovable world view; instead, by continually updating her probabilities, she inches closer to a more useful account of reality. The cooking is never done.
  • Rationality is one of humanity’s superpowers. How do we keep from misusing it?
  • Start with the big picture, fixing it firmly in your mind. Be cautious as you integrate new information, and don’t jump to conclusions. Notice when new data points do and do not alter your baseline assumptions (most of the time, they won’t alter them), but keep track of how often those assumptions seem contradicted by what’s new. Beware the power of alarming news, and proceed by putting it in a broader, real-world context.
  • Bayesian reasoning implies a few “best practices.”
  • Keep the cooked information over here and the raw information over there; remember that raw ingredients often reduce over heat
  • We want to live in a more rational society, but not in a falsely rationalized one. We want to be more rational as individuals, but not to overdo it. We need to know when to think and when to stop thinking, when to doubt and when to trust.
  • But the real power of the Bayesian approach isn’t procedural; it’s that it replaces the facts in our minds with probabilities.
  • Applied to specific problems—Should you invest in Tesla? How bad is the Delta variant?—the techniques promoted by rationality writers are clarifying and powerful.
  • the rationality movement is also a social movement; rationalists today form what is sometimes called the “rationality community,” and, as evangelists, they hope to increase its size.
  • In “Rationality,” “The Scout Mindset,” and other similar books, irrationality is often presented as a form of misbehavior, which might be rectified through education or socialization.
  • Greg tells me that, in his business, it’s not enough to have rational thoughts. Someone who’s used to pondering questions at leisure might struggle to learn and reason when the clock is ticking; someone who is good at reaching rational conclusions might not be willing to sign on the dotted line when the time comes. Greg’s hedge-fund colleagues describe as “commercial”—a compliment—someone who is not only rational but timely and decisive.
  • You can know what’s right but still struggle to do it.
  • Following through on your own conclusions is one challenge. But a rationalist must also be “metarational,” willing to hand over the thinking keys when someone else is better informed or better trained. This, too, is harder than it sounds.
  • For all this to happen, rationality is necessary, but not sufficient. Thinking straight is just part of the work. 
  • I found it possible to be metarational with my dad not just because I respected his mind but because I knew that he was a good and cautious person who had my and my mother’s best interests at heart.
  • between the two of us, we had the right ingredients—mutual trust, mutual concern, and a shared commitment to reason and to act.
  • Intellectually, we understand that our complex society requires the division of both practical and cognitive labor. We accept that our knowledge maps are limited not just by our smarts but by our time and interests. Still, like Gurri’s populists, rationalists may stage their own contrarian revolts, repeatedly finding that no one’s opinions but their own are defensible. In letting go, as in following through, one’s whole personality gets involved.
  • in truth, it maps out a series of escalating challenges. In search of facts, we must make do with probabilities. Unable to know it all for ourselves, we must rely on others who care enough to know. We must act while we are still uncertain, and we must act in time—sometimes individually, but often together.
  • The realities of rationality are humbling. Know things; want things; use what you know to get what you want. It sounds like a simple formula.
  • The real challenge isn’t being right but knowing how wrong you might be. (Joshua Rothman, August 16, 2021)
  • Writing about rationality in the early twentieth century, Weber saw himself as coming to grips with a titanic force—an ascendant outlook that was rewriting our values. He talked about rationality in many different ways. We can practice the instrumental rationality of means and ends (how do I get what I want?) and the value rationality of purposes and goals (do I have good reasons for wanting what I want?). We can pursue the rationality of affect (am I cool, calm, and collected?) or develop the rationality of habit (do I live an ordered, or “rationalized,” life?).
  • Weber worried that it was turning each individual into a “cog in the machine,” and life into an “iron cage.” Today, rationality and the words around it are still shadowed with Weberian pessimism and cursed with double meanings. You’re rationalizing the org chart: are you bringing order to chaos, or justifying the illogical?
  • For Aristotle, rationality was what separated human beings from animals. For the authors of “The Rationality Quotient,” it’s a mental faculty, parallel to but distinct from intelligence, which involves a person’s ability to juggle many scenarios in her head at once, without letting any one monopolize her attention or bias her against the rest.
  • In “The Rationality Quotient: Toward a Test of Rational Thinking” (M.I.T.), from 2016, the psychologists Keith E. Stanovich, Richard F. West, and Maggie E. Toplak call rationality “a torturous and tortured term,” in part because philosophers, sociologists, psychologists, and economists have all defined it differently
  • Galef, who hosts a podcast called “Rationally Speaking” and co-founded the nonprofit Center for Applied Rationality, in Berkeley, barely uses the word “rationality” in her book on the subject. Instead, she describes a “scout mindset,” which can help you “to recognize when you are wrong, to seek out your blind spots, to test your assumptions and change course.” (The “soldier mindset,” by contrast, encourages you to defend your positions at any cost.)
  • Galef tends to see rationality as a method for acquiring more accurate views.
  • Pinker, a cognitive and evolutionary psychologist, sees it instrumentally, as “the ability to use knowledge to attain goals.” By this definition, to be a rational person you have to know things, you have to want things, and you have to use what you know to get what you want.
  • Introspection is key to rationality. A rational person must practice what the neuroscientist Stephen Fleming, in “Know Thyself: The Science of Self-Awareness” (Basic Books), calls “metacognition,” or “the ability to think about our own thinking”—“a fragile, beautiful, and frankly bizarre feature of the human mind.”
  • A successful student uses metacognition to know when he needs to study more and when he’s studied enough: essentially, parts of his brain are monitoring other parts.
  • In everyday life, the biggest obstacle to metacognition is what psychologists call the “illusion of fluency.” As we perform increasingly familiar tasks, we monitor our performance less rigorously; this happens when we drive, or fold laundry, and also when we think thoughts we’ve thought many times before
  • The trick is to break the illusion of fluency, and to encourage an “awareness of ignorance.”
  • metacognition is a skill. Some people are better at it than others. Galef believes that, by “calibrating” our metacognitive minds, we can improve our performance and so become more rational
  • There are many calibration methods
  • Knowing about what you know is Rationality 101. The advanced coursework has to do with changes in your knowledge.
  • Most of us stay informed straightforwardly—by taking in new information. Rationalists do the same, but self-consciously, with an eye to deliberately redrawing their mental maps.
  • The challenge is that news about distant territories drifts in from many sources; fresh facts and opinions aren’t uniformly significant. In recent decades, rationalists confronting this problem have rallied behind the work of Thomas Bayes
  • So-called Bayesian reasoning—a particular thinking technique, with its own distinctive jargon—has become de rigueur.
  • the basic idea is simple. When new information comes in, you don’t want it to replace old information wholesale. Instead, you want it to modify what you already know to an appropriate degree. The degree of modification depends both on your confidence in your preëxisting knowledge and on the value of the new data. Bayesian reasoners begin with what they call the “prior” probability of something being true, and then find out if they need to adjust it. (A minimal worked example of this updating step appears after this list.)
  • Bayesian reasoning is an approach to statistics, but you can use it to interpret all sorts of new information.
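The annotations above describe the core Bayesian move: hold a prior probability, then adjust it when evidence arrives rather than replacing it wholesale. As a purely illustrative aid (not from the New Yorker piece), here is a minimal Python sketch of a single updating step; the prior of 0.30 and the two likelihoods are assumptions chosen only to show the mechanics.

  # Minimal, illustrative sketch of one Bayesian update.
  # The prior and likelihood values are assumptions, not figures from the article.
  def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
      """Return P(claim is true | evidence) via Bayes' rule."""
      numerator = p_evidence_if_true * prior
      denominator = numerator + p_evidence_if_false * (1.0 - prior)
      return numerator / denominator

  if __name__ == "__main__":
      belief = 0.30   # prior: how plausible the claim seemed before the news arrived
      # Evidence that would show up 80% of the time if the claim were true,
      # but still 40% of the time if it were false, nudges the belief upward
      # without overturning it.
      belief = bayes_update(belief, p_evidence_if_true=0.8, p_evidence_if_false=0.4)
      print(f"updated belief: {belief:.2f}")   # about 0.46, not 1.0

Running the same update repeatedly on fresh evidence is what the piece means by continually revising probabilities: each new data point moves the estimate by an amount that depends on both the prior and the strength of the evidence.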
Javier E

On the Shortness of Life 2.0 - by Peter Juul - The Liberal Patriot - 0 views

  • It’s a deft and eclectic synthesis of ancient and modern thinking about how humanity can come to terms with our limited time on Earth – the title derives from the length of the average human lifespan – ranging intellectually from ancient Greek and Roman philosophers like Seneca to modern-day Buddhist and existentialist thinkers. Stuffed with valuable and practical insights on life and how we use – or misuse – it, Four Thousand Weeks is an impressive and compact volume well worth the time and attention of even the most casual readers.
  • As Burkeman notes, our preoccupation with productivity allows us to evade “the anxiety that might arise if we were to ask ourselves whether we’re on the right path.” The end result is a lot of dedicated and talented people in politics and policy burning themselves out for no discernable or meaningful purpose.
  • Then there’s social media, defined by Burkeman as “a machine for misusing your life.” Social media platforms like Twitter and Facebook don’t just distract us from more important matters, he argues, “they change how we’re defining ‘important matters’ in the first place.”
  • Social media also amounts to “a machine for getting you to care about too many things, even if they’re each indisputably worthwhile.” Hence the urge to depict every policy problem as an urgent if not existential crisis
  • social media has turned all of us into “angrier, less empathetic, more anxious or more numbed out” versions of ourselves.
  • Finally, our political and policy debates tend towards what Burkeman calls “paralyzing grandiosity” – the false notion that in the face of problems like climate change, economic inequality, and ongoing threats to democracy “only the most revolutionary, world-transforming causes are worth fighting for.” It’s a sentiment that derives from and reinforces catastrophism and absolutism as ways of thinking about politics and policy
  • That sentiment also often results in impotent impatience, which in turn leads to frustration, anger, and cynicism when things don’t turn out exactly as we’ve hoped. But it also allows us to avoid hard choices required in order to pull together the political coalitions necessary to effect actual change.
  • Four Thousand Weeks is filled to the brim with practical advice
  • Embrace “radical incrementalism.”
  • Burkeman suggests we find some hobby we enjoy for its own sake, not because there’s some benefit we think we can derive from it
  • Take a break
  • rest for rest’s sake, “to spend some of our time, that is, on activities in which the only thing we’re trying to get from them is the doing itself.”
  • we should cultivate the patience to see our goals through step-by-step over the long term. We’ve got to resist the need for speed and desire for rapid resolution of problems, letting them instead take the time they take.
  • “To make a difference,” Burkeman argues, “you must focus your finite capacity for care.”
  • “Consolidate your caring” and think small.
  • it’s perfectly fine to dedicate your time to a limited subset of issues that you care deeply about. We’re only mortal, and as Burkeman points out it’s important to “consciously pick your battles in charity, activism, and politics.”
  • our lives are just as meaningful and worthwhile if we spend our time “on, say caring for an elderly relative with dementia or volunteering at the local community garden” as they are if we’re up to our eyeballs in the minutiae of politics and policy. What matters is that we make things slightly better with our contributions and actions
  • once we give up on the illusion of perfection, Burkeman observes, we “get to roll up [our] sleeves and start work on what’s gloriously possible instead.”
Javier E

Technopoly-Chs. 9,10--Scientism, the great symbol drain - 0 views

  • By Scientism, I mean three interrelated ideas that, taken together, stand as one of the pillars of Technopoly.
  • The first and indispensable idea is, as noted, that the methods of the natural sciences can be applied to the study of human behavior. This idea is the backbone of much of psychology and sociology as practiced at least in America, and largely accounts for the fact that social science, to quote F. A. Hayek, "has contributed scarcely anything to our understanding of social phenomena." 2
  • The second idea is, as also noted, that social science generates specific principles which can be used to organize society on a rational and humane basis. This implies that technical means-mostly "invisible technologies" supervised by experts-can be designed to control human behavior and set it on the proper course.
  • The third idea is that faith in science can serve as a comprehensive belief system that gives meaning to life, as well as a sense of well-being, morality, and even immortality.
  • the spirit behind this scientific ideal inspired several men to believe that the reliable and predictable knowledge that could be obtained about stars and atoms could also be obtained about human behavior.
  • Among the best known of these early "social scientists" were Claude-Henri de Saint-Simon, Prosper Enfantin, and, of course, Auguste Comte.
  • They held in common two beliefs to which Technopoly is deeply indebted: that the natural sciences provide a method to unlock the secrets of both the human heart and the direction of social life; that society can be rationally and humanely reorganized according to principles that social science will uncover. It is with these men that the idea of "social engineering" begins and the seeds of Scientism are planted.
  • Information produced by counting may sometimes be valuable in helping a person get an idea, or, even more so, in providing support for an idea. But the mere activity of counting does not make science.
  • Nor does observing things, though it is sometimes said that if one is empirical, one is scientific. To be empirical means to look at things before drawing conclusions. Everyone, therefore, is an empiricist, with the possible exception of paranoid schizophrenics.
  • What we may call science, then, is the quest to find the immutable and universal laws that govern processes, presuming that there are cause-and-effect relations among these processes. It follows that the quest to understand human behavior and feeling can in no sense except the most trivial be called science.
  • Scientists do strive to be empirical and where possible precise, but it is also basic to their enterprise that they maintain a high degree of objectivity, which means that they study things independently of what people think or do about them.
  • I do not say, incidentally, that the Oedipus complex and God do not exist. Nor do I say that to believe in them is harmful-far from it. I say only that, there being no tests that could, in principle, show them to be false, they fall outside the purview of science, as do almost all theories that make up the content of "social science."
  • in the nineteenth century, novelists provided us with most of the powerful metaphors and images of our culture.
  • This fact relieves the scientist of inquiring into their values and motivations and for this reason alone separates science from what is called social science, consigning the methodology of the latter (to quote Gunnar Myrdal) to the status of the "metaphysical and pseudo-objective." 3
  • The status of social-science methods is further reduced by the fact that there are almost no experiments that will reveal a social-science theory to be false.
  • Let us further suppose that Milgram had found that 100 percent of his subjects did what they were told, with or without Hannah Arendt. And now let us suppose that I tell you a story of a group of people who in some real situation refused to comply with the orders of a legitimate authority-let us say, the Danes who in the face of Nazi occupation helped nine thousand Jews escape to Sweden. Would you say to me that this cannot be so because Milgram's study proves otherwise? Or would you say that this overturns Milgram's work? Perhaps you would say that the Danish response is not relevant, since the Danes did not regard the Nazi occupation as constituting legitimate authority. But then, how would we explain the cooperative response to Nazi authority of the French, the Poles, and the Lithuanians? I think you would say none of these things, because Milgram's experiment does not confirm or falsify any theory that might be said to postulate a law of human nature. His study-which, incidentally, I find both fascinating and terrifying-is not science. It is something else entirely.
  • Freud could not imagine how the book could be judged exemplary: it was science or it was nothing. Well, of course, Freud was wrong. His work is exemplary-indeed, monumental-but scarcely anyone believes today that Freud was doing science, any more than educated people believe that Marx was doing science, or Max Weber or Lewis Mumford or Bruno Bettelheim or Carl Jung or Margaret Mead or Arnold Toynbee. What these people were doing-and Stanley Milgram was doing-is documenting the behavior and feelings of people as they confront problems posed by their culture.
  • the stories of social researchers are much closer in structure and purpose to what is called imaginative literature; that is to say, both a social researcher and a novelist give unique interpretations to a set of human events and support their interpretations with examples in various forms. Their interpretations cannot be proved or disproved but will draw their appeal from the power of their language, the depth of their explanations, the relevance of their examples, and the credibility of their themes.
  • And all of this has, in both cases, an identifiable moral purpose.
  • The words "true" and "false" do not apply here in the sense that they are used in mathematics or science. For there is nothing universally and irrevocably true or false about these interpretations. There are no critical tests to confirm or falsify them. There are no natural laws from which they are derived. They are bound by time, by situation, and above all by the cultural prejudices of the researcher or writer.
  • Both the novelist and the social researcher construct their stories by the use of archetypes and metaphors.
  • Cervantes, for example, gave us the enduring archetype of the incurable dreamer and idealist in Don Quixote. The social historian Marx gave us the archetype of the ruthless and conspiring, though nameless, capitalist. Flaubert gave us the repressed bourgeois romantic in Emma Bovary. And Margaret Mead gave us the carefree, guiltless Samoan adolescent. Kafka gave us the alienated urbanite driven to self-loathing. And Max Weber gave us hardworking men driven by a mythology he called the Protestant Ethic. Dostoevsky gave us the egomaniac redeemed by love and religious fervor. And B. F. Skinner gave us the automaton redeemed by a benign technology.
  • Why do such social researchers tell their stories? Essentially for didactic and moralistic purposes. These men and women tell their stories for the same reason the Buddha, Confucius, Hillel, and Jesus told their stories (and for the same reason D. H. Lawrence told his).
  • Moreover, in their quest for objectivity, scientists proceed on the assumption that the objects they study are indifferent to the fact that they are being studied.
  • If, indeed, the price of civilization is repressed sexuality, it was not Sigmund Freud who discovered it. If the consciousness of people is formed by their material circumstances, it was not Marx who discovered it. If the medium is the message, it was not McLuhan who discovered it. They have merely retold ancient stories in a modern style.
  • Unlike science, social research never discovers anything. It only rediscovers what people once were told and need to be told again.
  • Only in knowing something of the reasons why they advocated education can we make sense of the means they suggest. But to understand their reasons we must also understand the narratives that governed their view of the world. By narrative, I mean a story of human history that gives meaning to the past, explains the present, and provides guidance for the future.
  • In Technopoly, it is not enough to say, it is immoral and degrading to allow people to be homeless. You cannot get anywhere by asking a judge, a politician, or a bureaucrat to read Les Miserables or Nana or, indeed, the New Testament. You must show that statistics have produced data revealing the homeless to be unhappy and to be a drain on the economy. Neither Dostoevsky nor Freud, Dickens nor Weber, Twain nor Marx, is now a dispenser of legitimate knowledge. They are interesting; they are "worth reading"; they are artifacts of our past. But as for "truth," we must turn to "science."
  • In Technopoly, it is not enough for social research to rediscover ancient truths or to comment on and criticize the moral behavior of people. In T echnopoly, it is an insult to call someone a "moralizer." Nor is it sufficient for social research to put forward metaphors, images, and ideas that can help people live with some measure of understanding and dignity.
  • Such a program lacks the aura of certain knowledge that only science can provide. It becomes necessary, then, to transform psychology, sociology, and anthropology into "sciences," in which humanity itself becomes an object, much like plants, planets, or ice cubes.
  • That is why the commonplaces that people fear death and that children who come from stable families valuing scholarship will do well in school must be announced as "discoveries" of scientific enterprise. In this way, social researchers can see themselves, and can be seen, as scientists, researchers without bias or values, unburdened by mere opinion. In this way, social policies can be claimed to rest on objectively determined facts.
  • given the psychological, social, and material benefits that attach to the label "scientist," it is not hard to see why social researchers should find it hard to give it up.
  • Our social "scientists" have from the beginning been less tender of conscience, or less rigorous in their views of science, or perhaps just more confused about the questions their procedures can answer and those they cannot. In any case, they have not been squeamish about imputing to their "discoveries" and the rigor of their procedures the power to direct us in how we ought rightly to behave.
  • It is less easy to see why the rest of us have so willingly, even eagerly, cooperated in perpetuating the same illusion.
  • When the new technologies and techniques and spirit of men like Galileo, Newton, and Bacon laid the foundations of natural science, they also discredited the authority of earlier accounts of the physical world, as found, for example, in the great tale of Genesis. By calling into question the truth of such accounts in one realm, science undermined the whole edifice of belief in sacred stories and ultimately swept away with it the source to which most humans had looked for moral authority. It is not too much to say, I think, that the desacralized world has been searching for an alternative source of moral authority ever since.
  • We welcome them gladly, and the claim explicitly made or implied, because we need so desperately to find some source outside the frail and shaky judgments of mortals like ourselves to authorize our moral decisions and behavior. And outside of the authority of brute force, which can scarcely be called moral, we seem to have little left but the authority of procedures.
  • It is not merely the misapplication of techniques such as quantification to questions where numbers have nothing to say; not merely the confusion of the material and social realms of human experience; not merely the claim of social researchers to be applying the aims and procedures of natural science to the human world.
  • This, then, is what I mean by Scientism.
  • It is the desperate hope, and wish, and ultimately the illusory belief that some standardized set of procedures called "science" can provide us with an unimpeachable source of moral authority, a suprahuman basis for answers to questions like "What is life, and when, and why?" "Why is death, and suffering?" "What is right and wrong to do?" "What are good and evil ends?" "How ought we to think and feel and behave?"
  • Science can tell us when a heart begins to beat, or movement begins, or what are the statistics on the survival of neonates of different gestational ages outside the womb. But science has no more authority than you do or I do to establish such criteria as the "true" definition of "life" or of human state or of personhood.
  • Social research can tell us how some people behave in the presence of what they believe to be legitimate authority. But it cannot tell us when authority is "legitimate" and when not, or how we must decide, or when it may be right or wrong to obey.
  • To ask of science, or expect of science, or accept unchallenged from science the answers to such questions is Scientism. And it is Technopoly's grand illusion.
  • In the institutional form it has taken in the United States, advertising is a symptom of a world-view that sees tradition as an obstacle to its claims. There can, of course, be no functioning sense of tradition without a measure of respect for symbols. Tradition is, in fact, nothing but the acknowledgment of the authority of symbols and the relevance of the narratives that gave birth to them. With the erosion of symbols there follows a loss of narrative, which is one of the most debilitating consequences of Technopoly's power.
  • What the advertiser needs to know is not what is right about the product but what is wrong about the buyer. And so the balance of business expenditures shifts from product research to market research, which means orienting business away from making products of value and toward making consumers feel valuable. The business of business becomes pseudo-therapy; the consumer, a patient reassured by psychodramas.
  • At the moment, it is considered necessary to introduce computers to the classroom, as it once was thought necessary to bring closed-circuit television and film to the classroom. To the question "Why should we do this?" the answer is: "To make learning more efficient and more interesting." Such an answer is considered entirely adequate, since in Technopoly efficiency and interest need no justification. It is, therefore, usually not noticed that this answer does not address the question "What is learning for?"
  • What this means is that somewhere near the core of Technopoly is a vast industry with license to use all available symbols to further the interests of commerce, by devouring the psyches of consumers.
  • In the twentieth century, such metaphors and images have come largely from the pens of social historians and researchers. Think of John Dewey, William James, Erik Erikson, Alfred Kinsey, Thorstein Veblen, Margaret Mead, Lewis Mumford, B. F. Skinner, Carl Rogers, Marshall McLuhan, Barbara Tuchman, Noam Chomsky, Robert Coles, even Stanley Milgram, and you must acknowledge that our ideas of what we are like and what kind of country we live in come from their stories to a far greater extent than from the stories of our most renowned novelists.
  • social idea that must be advanced through education.
  • Confucius advocated teaching "the Way" because in tradition he saw the best hope for social order. As our first systematic fascist, Plato wished education to produce philosopher kings. Cicero argued that education must free the student from the tyranny of the present. Jefferson thought the purpose of education is to teach the young how to protect their liberties. Rousseau wished education to free the young from the unnatural constraints of a wicked and arbitrary social order. And among John Dewey's aims was to help the student function without certainty in a world of constant change and puzzling ambiguities.
  • The point is that cultures must have narratives and will find them where they will, even if they lead to catastrophe. The alternative is to live without meaning, the ultimate negation of life itself.
  • It is also to the point to say that each narrative is given its form and its emotional texture through a cluster of symbols that call for respect and allegiance, even devotion.
  • by definition, there can be no education philosophy that does not address what learning is for. Confucius, Plato, Quintilian, Cicero, Comenius, Erasmus, Locke, Rousseau, Jefferson, Russell, Montessori, Whitehead, and Dewey--each believed that there was some transcendent political, spiritual, or
  • The importance of the American Constitution is largely in its function as a symbol of the story of our origins. It is our political equivalent of Genesis. To mock it, to ignore it, to circumvent it is to declare the irrelevance of the story of the United States as a moral light unto the world. In like fashion, the Statue of Liberty is the key symbol of the story of America as the natural home of the teeming masses, from anywhere, yearning to be free.
  • There are those who believe--as did the great historian Arnold Toynbee--that without a comprehensive religious narrative at its center a culture must decline. Perhaps. There are, after all, other sources--mythology, politics, philosophy, and science, for example--but it is certain that no culture can flourish without narratives of transcendent origin and power.
  • This does not mean that the mere existence of such a narrative ensures a culture's stability and strength. There are destructive narratives. A narrative provides meaning, not necessarily survival--as, for example, the story provided by Adolf Hitler to the German nation in the 1930s.
  • What story does American education wish to tell now? In a growing Technopoly, what do we believe education is for?
  • The answers are discouraging, and one of them can be inferred from any television commercial urging the young to stay in school. The commercial will either imply or state explicitly that education will help the persevering student to get a good job. And that's it. Well, not quite. There is also the idea that we educate ourselves to compete with the Japanese or the Germans in an economic struggle to be number one.
  • Young men, for example, will learn how to make lay-up shots when they play basketball. To be able to make them is part of the definition of what good players are. But they do not play basketball for that purpose. There is usually a broader, deeper, and more meaningful reason for wanting to play--to assert their manhood, to please their fathers, to be acceptable to their peers, even for the sheer aesthetic pleasure of the game itself. What you have to do to be a success must be addressed only after you have found a reason to be successful.
  • Bloom's solution is that we go back to the basics of Western thought.
  • He wants us to teach our students what Plato, Aristotle, Cicero, Saint Augustine, and other luminaries have had to say on the great ethical and epistemological questions. He believes that by acquainting themselves with great books our students will acquire a moral and intellectual foundation that will give meaning and texture to their lives.
  • Hirsch's encyclopedic list is not a solution but a description of the problem of information glut. It is therefore essentially incoherent. But it also confuses a consequence of education with a purpose. Hirsch attempted to answer the question "What is an educated person?" He left unanswered the question "What is an education for?"
  • Those who reject Bloom's idea have offered several arguments against it. The first is that such a purpose for education is elitist: the mass of students would not find the great story of Western civilization inspiring, are too deeply alienated from the past to find it so, and would therefore have difficulty connecting the "best that has been thought and said" to their own struggles to find meaning in their lives.
  • A second argument, coming from what is called a "leftist" perspective, is even more discouraging. In a sense, it offers a definition of what is meant by elitism. It asserts that the "story of Western civilization" is a partial, biased, and even oppressive one. It is not the story of blacks, American Indians, Hispanics, women, homosexuals--of any people who are not white heterosexual males of Judeo-Christian heritage. This claim denies that there is or can be a national culture, a narrative of organizing power and inspiring symbols which all citizens can identify with and draw sustenance from. If this is true, it means nothing less than that our national symbols have been drained of their power to unite, and that education must become a tribal affair; that is, each subculture must find its own story and symbols, and use them as the moral basis of education.
  • Into this void comes the Technopoly story, with its emphasis on progress without limits, rights without responsibilities, and technology without cost. The Technopoly story is without a moral center. It puts in its place efficiency, interest, and economic advance. It promises heaven on earth through the conveniences of technological progress. It casts aside all traditional narratives and symbols that suggest stability and orderliness, and tells, instead, of a life of skills, technical expertise, and the ecstasy of consumption. Its purpose is to produce functionaries for an ongoing Technopoly.
  • It answers Bloom by saying that the story of Western civilization is irrelevant; it answers the political left by saying there is indeed a common culture whose name is Technopoly and whose key symbol is now the computer, toward which there must be neither irreverence nor blasphemy. It even answers Hirsch by saying that there are items on his list that, if thought about too deeply and taken too seriously, will interfere with the progress of technology.
Javier E

Opinion | Your Kid's Existential Dread Is Normal - The New York Times - 0 views

  • my daughter said: “When the pandemic started, I was only 7, and I wasn’t scared. Now I’m 9 and I really understand.”
  • I called Sally Beville Hunter, a clinical associate professor of child and family studies at the University of Tennessee, to see if this kind of philosophical musing was typical for a young tween. “There’s a huge cognitive transition happening” around this age, Hunter told me.
  • It’s the stage when children develop the capacity for abstract thought, she said. The pioneering developmental psychologist Jean Piaget called this transition the “formal operational stage,” and in his research he found it began around age 11, but Hunter said subsequent research has found that it may begin earlier. “It’s the first time children can consider multiple possibilities and test them against each other,” she said. Which helps explain why my daughter has begun thinking about whether Covid will linger into her college years, a decade from now.
  • Another aspect of development that may be happening for her is a stage that the psychologist Erik Erikson called “identity versus role diffusion” (also referred to as “role confusion”), which is shorthand for children figuring out their position in the world. “This is the first time when kids have questions about their own existence, questions about self-identity, the meaning of life and the changing role of authority,” Hunter said.
Javier E

J. Robert Oppenheimer's Defense of Humanity - WSJ - 0 views

  • Von Neumann, too, was deeply concerned about the inability of humanity to keep up with its own inventions. “What we are creating now,” he said to his wife Klári in 1945, “is a monster whose influence is going to change history, provided there is any history left.” Moving to the subject of future computing machines he became even more agitated, foreseeing disaster if “people” could not “keep pace with what they create.”
  • Oppenheimer, Einstein, von Neumann and other Institute faculty channeled much of their effort toward what AI researchers today call the “alignment” problem: how to make sure our discoveries serve us instead of destroying us. Their approaches to this increasingly pressing problem remain instructive.
  • Von Neumann focused on applying the powers of mathematical logic, taking insights from games of strategy and applying them to economics and war planning. Today, descendants of his “game theory” running on von Neumann computing architecture are applied not only to our nuclear strategy, but also many parts of our political, economic and social lives. This is one approach to alignment: humanity survives technology through more technology, and it is the researcher’s role to maximize progress.
  • he also thought that this approach was not enough. “What are we to make of a civilization,” he asked in 1959, a few years after von Neumann’s death, “which has always regarded ethics as an essential part of human life, and…which has not been able to talk about the prospect of killing almost everybody, except in prudential and game-theoretical terms?”
  • to design a “fairness algorithm” we need to know what fairness is. Fairness is not a mathematical constant or even a variable. It is a human value, meaning that there are many often competing and even contradictory visions of it on offer in our societies.
  • Hence Oppenheimer set out to make the Institute for Advanced Study a place for thinking about humanistic subjects like Russian culture, medieval history, or ancient philosophy, as well as about mathematics and the theory of the atom. He hired scholars like George Kennan, the diplomat who designed the Cold War policy of Soviet “containment”; Harold Cherniss, whose work on the philosophies of Plato and Aristotle influenced many Institute colleagues; and the mathematical physicist Freeman Dyson, who had been one of the youngest collaborators in the Manhattan Project. Traces of their conversations and collaborations are preserved not only in their letters and biographies, but also in their research, their policy recommendations, and in their ceaseless efforts to help the public understand the dangers and opportunities technology offers the world.
  • In their biography “American Prometheus,” which inspired Nolan’s film, Martin Sherwin and Kai Bird document Oppenheimer’s conviction that “the safety” of a nation or the world “cannot lie wholly or even primarily in its scientific or technical prowess.” If humanity wants to survive technology, he believed, it needs to pay attention not only to technology but also to ethics, religions, values, forms of political and social organization, and even feelings and emotions.
  • Preserving any human value worthy of the name will therefore require not only a computer scientist, but also a sociologist, psychologist, political scientist, philosopher, historian, theologian. Oppenheimer even brought the poet T.S. Eliot to the Institute, because he believed that the challenges of the future could only be met by bringing the technological and the human together. The technological challenges are growing, but the cultural abyss separating STEM from the arts, humanities, and social sciences has only grown wider. More than ever, we need institutions capable of helping them think together.
Javier E

Reality is your brain's best guess - Big Think - 0 views

  • Andy Clark admits it’s strange that he took up “predictive processing,” an ambitious leading theory of how the brain works. A philosopher of mind at the University of Sussex, he has devoted his career to how thinking doesn’t occur just between the ears—that it flows through our bodies, tools, and environments. “The external world is functioning as part of our cognitive machinery
  • But 15 years ago, he realized that it had to come back to the center of the system: the brain. And he found that predictive processing provided the essential links among the brain, body, and world.
  • There’s a traditional view that goes back at least to Descartes that perception was about the imprinting of the outside world onto the sense organs. In 20th-century artificial intelligence and neuroscience, vision was a feed-forward process in which you took in pixel-level information, refined it into a two and a half–dimensional sketch, and then refined that into a full world model.
  • a new book, The Experience Machine: How Our Minds Predict and Shape Reality, which is remarkable for how it connects the high-level concepts to everyday examples of how our brains make predictions, how that process can lead us astray, and what we can do about it.
  • being driven to stay within your own viability envelope is crucial to the kind of intelligence that we know about—the kind of intelligence that we are
  • If you ask what is a predictive brain for, the answer has to be: staying alive. Predictive brains are a way of staying within your viability envelope as an embodied biological organism: getting food when you need it, getting water when you need it.
  • in predictive processing, perception is structured around prediction. Perception is about the brain having a guess at what’s most likely to be out there and then using sensory information to refine the guess.
  • artificial curiosity. Predictive-processing systems automatically have that. They’re set up so that they predict the conditions of their own survival, and they’re always trying to get rid of prediction errors. But if they’ve solved all their practical problems and they’ve got nothing else to do, then they’ll just explore. Getting rid of any error is going to be a good thing for them. If you’re a creature like that, you’re going to be a really good learning system. You’re going to love to inhabit the environments that you can learn most from, where the problems are not too simple, not too hard, but just right.
  • It’s an effect that you also see in Marieke Jepma et al.’s work on pain. They showed that if you predict intense pain, the signal that you get will be interpreted as more painful than it would otherwise be, and vice versa. Then they asked why you don’t correct your misimpression. If it’s my expectation that is making it feel more painful, why don’t I get prediction errors that correct it?
  • The reason is that there are no errors. You're expecting a certain level of pain, and your prediction helps bring that level about; there is nothing for you to correct. In fact, you've got confirmation of your own prediction. So it can be a vicious circle (see the sketch after this list).
  • Do you think this self-fulfilling loop in psychosis and pain perception helps to account for misinformation in our society and people's susceptibility to certain narratives? Absolutely. We all have these vulnerabilities and self-fulfilling cycles. We look at the places that tend to support the models that we already have, because that's often how we judge whether the information is good or not
  • Given that we know we're vulnerable to self-fulfilling information loops, how can we make sure we don't get locked into a belief? Unfortunately, it's really difficult. The most potent intervention is to remind ourselves that we sample the world in ways that are guided by the models that we've currently got. The structures of science are there to push back against our natural tendency to cherry-pick.
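The precision-weighted loop Clark describes above, drawing on Jepma's pain work, can be illustrated with a toy calculation. The Python sketch below is not from the book or the study; the function, the numbers, and the single-step update are illustrative assumptions, meant only to show how a confident prior can absorb the incoming signal and leave almost no prediction error to correct.

    # Toy model: perceived pain as a precision-weighted blend of prediction and signal.
    # All values are invented for illustration.
    def perceive(prior_pain, signal_pain, prior_precision, signal_precision):
        # The gain on the prediction error shrinks as the prior becomes more confident.
        gain = signal_precision / (signal_precision + prior_precision)
        prediction_error = signal_pain - prior_pain
        return prior_pain + gain * prediction_error

    # Same incoming signal (4 on a 0-10 scale), two different expectations.
    mild = perceive(prior_pain=2.0, signal_pain=4.0,
                    prior_precision=1.0, signal_precision=1.0)
    strong = perceive(prior_pain=8.0, signal_pain=4.0,
                      prior_precision=9.0, signal_precision=1.0)

    print(round(mild, 1))    # 3.0 -- weak prior: experience tracks the evidence
    print(round(strong, 1))  # 7.6 -- strong prior: experience confirms the prediction

In the second case the experienced pain lands near the prediction rather than the signal, so the prediction looks confirmed and little error remains to drive a correction; that is the vicious circle described above.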
peterconnelly

Where Will We Be in 20 Years? - The New York Times - 0 views

  • “Demographics are destiny.” It is a phrase, often attributed to the French philosopher Auguste Comte, that suggests much of the future is preordained by the very simple trend lines of populations. Want to understand how the power dynamic between the United States and China will change over the next 20 years? An economist would tell you to look at the demographics of both countries. (China's economy is likely to overtake the U.S. economy by 2028, but remain smaller on a per capita basis.)
  • Predicting the future may be a fool’s errand. But using demographic data to assess the opportunities and challenges of the next two decades is something that business and political leaders don’t do enough. We’re all too swept up in the here and now, the next quarter and the next year.
  • More people around the world had more disposable income and increasingly chose to live closer to cities with greater access to airports. That, married with the human condition that people like to be around other people, makes forecasting certain elements of the future almost mathematical.
  • One aspect of the future that demographics can't help predict is technological innovations.
  • About 70 percent of the world population is expected to live in urban areas by 2050, according to data from the United Nations.
  • The U.S. Energy Information Administration projects that the world will need about 28 percent more energy in 2040 than it did in 2015 based on the number of people in the country and consumption patterns; on our current trajectory, about 42 percent of electricity in the United States will come from renewable sources.
  • Technology has led us to expect that goods and services will be delivered at the push of a button, often within minutes.
  • Entrepreneurs, industry leaders and policymakers are already at work solving some of the problems that demographic data suggest are ahead of us, whether it’s figuring out how to incentivize farmers to sequester carbon, use insurance as a tool for reducing coal production, reinvent the motors that power heavy industry so they use less energy, or write laws that help govern code.
  • What about the metaverse? Or crypto technology? Or robots taking our jobs? Or A.I. taking over everything? Demographics can’t answer those questions. All of those things may happen, but life in 2041 may also look a lot like it does today — maybe with the exception of those flying cars.
criscimagnael

Living better with algorithms | MIT News | Massachusetts Institute of Technology - 0 views

  • At a talk on ethical artificial intelligence, the speaker brought up a variation on the famous trolley problem, which outlines a philosophical choice between two undesirable outcomes.
  • Say a self-driving car is traveling down a narrow alley with an elderly woman walking on one side and a small child on the other, and no way to thread between both without a fatality. Who should the car hit?
  • To get a sense of what this means, suppose that regulators require that any public health content — for example, on vaccines — not be vastly different for politically left- and right-leaning users. How should auditors check that a social media platform complies with this regulation? Can a platform be made to comply with the regulation without damaging its bottom line? And how does compliance affect the actual content that users do see?
  • a self-driving car could have avoided choosing between two bad outcomes by making a decision earlier on — the speaker pointed out that, when entering the alley, the car could have determined that the space was narrow and slowed to a speed that would keep everyone safe.
  • Auditors have to inspect the algorithm without accessing sensitive user data.
  • Other considerations come into play as well, such as balancing the removal of misinformation with the protection of free speech.
  • To meet these challenges, Cen and Shah developed an auditing procedure that does not need more than black-box access to the social media algorithm (which respects trade secrets), does not remove content (which avoids issues of censorship), and does not require access to users (which preserves users’ privacy).
  • which is known to help reduce the spread of misinformation
  • In labor markets, for example, workers learn their preferences about what kinds of jobs they want, and employers learn their preferences about the qualifications they seek from workers.
  • But learning can be disrupted by competition
  • it is indeed possible to get to a stable outcome (workers aren’t incentivized to leave the matching market), with low regret (workers are happy with their long-term outcomes), fairness (happiness is evenly distributed), and high social welfare.
  • For instance, when Covid-19 cases surged in the pandemic, many cities had to decide what restrictions to adopt, such as mask mandates, business closures, or stay-home orders. They had to act fast and balance public health with community and business needs, public spending, and a host of other considerations.
  • But of course, no county exists in a vacuum.
  • These complex interactions matter,
  • “Accountability, legitimacy, trust — these principles play crucial roles in society and, ultimately, will determine which systems endure with time.” 
Javier E

Reality Is Broken. We Have AI Photos to Blame. - WSJ - 0 views

  • AI headshots aren’t yet perfect, but they’re so close I expect we’ll start seeing them on LinkedIn, Tinder and other social profiles. Heck, we may already see them. How would we know?
  • Welcome to our new reality, where nothing is real. We now have photos initially captured with cameras that AI changes into something that never was
  • Or, like the headshot above, there are convincingly photographic images AI generates out of thin air.
  • Adobe, maker of Photoshop, released a new tool in Firefly, its generative-AI image suite, that lets you change and add in parts of a photo with AI imagery. Earlier this month, Google showed off a new Magic Editor, initially for Pixel phones, that allows you to easily manipulate a scene. And people are all over TikTok posting the results of AI headshot services like Try It On.
  • After testing a mix of AI editing and generating tools, I just have one question for all of you armchair philosophers: What even is a photo anymore?
  • I have always wondered what I’d look like as a naval officer. Now I don’t have to. I snapped a selfie and uploaded it to Adobe Firefly’s generative-fill tool. One click of the Background button and my cluttered office was wiped out. I typed “American flag” and in it went. Then I selected the Add tool, erased my torso and typed in “naval uniform.” Boom! Adobe even found me worthy of numerous awards and decorations.
  • Astronaut, fighter pilot, pediatrician. I turned myself into all of them in under a minute each. The AI-generated images did have noticeable issues: The uniforms were strange and had odd lettering, the stethoscope seemed to be cut in half and the backgrounds were warped and blurry. Yet the final images are fun, and the quality will only get better. 
  • In FaceApp, for iOS and Android, I was able to change my frown to a smile—with the right amount of teeth! I was also able to add glasses and change my hair color. Some said it looked completely real, others who know me well figured something was up. “Your teeth look too perfect.”
  • The real reality-bending happens in Midjourney, which can turn text prompts into hyper-realistic images and blend existing images in new ways. The image quality of generated images exceeds OpenAI’s Dall-E and Adobe’s Firefly.
  • it’s more complicated to use, since it runs through the chat app Discord. Sign up for service, access the Midjourney bot through your Discord account (via web or app), then start typing in prompts. My video producer Kenny Wassus started working with a more advanced Midjourney plugin called Insight Face Swap-Bot, which allows you to sub in a face to a scene you’ve already made. He’s become a master—making me a Game of Thrones warrior and a Star Wars rebel, among other things.
  • We’re headed for a time when we won’t be able to tell how manipulated a photo is, what parts are real or fake.
  • when influential messages are conveyed through images—be they news or misinformation—people have reason to know a photo’s origin and what’s been done to it.
  • Firefly adds a “content credential,” digital information baked into the file, that says the image was manipulated with AI. Adobe is pushing to get news, tech and social-media platforms to use this open-source standard so we can all understand where the images we see came from.
  • So, yeah, our ability to spot true photos might depend on the cooperation of the entire internet. And by “true photo,” I mean one that captures a real moment—where you’re wearing your own boring clothes and your hair is just so-so, but you have the exact right number of teeth in your head.
Javier E

Netanyahu's Dark Worldview - The Atlantic - 0 views

  • as Netanyahu soon made clear, when it comes to AI, he believes that bad outcomes are the likely outcomes. The Israeli leader interrogated OpenAI’s Brockman about the impact of his company’s creations on the job market. By replacing more and more workers, Netanyahu argued, AI threatens to “cannibalize a lot more jobs than you create,” leaving many people adrift and unable to contribute to the economy. When Brockman suggested that AI could usher in a world where people would not have to work, Netanyahu countered that the benefits of the technology were unlikely to accrue to most people, because the data, computational power, and engineering talent required for AI are concentrated in a few countries.
  • “You have these trillion-dollar [AI] companies that are produced overnight, and they concentrate enormous wealth and power with a smaller and smaller number of people,” the Israeli leader said, noting that even a free-market evangelist like himself was unsettled by such monopolization. “That will create a bigger and bigger distance between the haves and the have-nots, and that’s another thing that causes tremendous instability in our world. And I don’t know if you have an idea of how you overcome that?”
  • The other panelists did not. Brockman briefly pivoted to talk about OpenAI’s Israeli employees before saying, “The world we should shoot for is one where all the boats are rising.” But other than mentioning the possibility of a universal basic income for people living in an AI-saturated society, Brockman agreed that “creative solutions” to this problem were needed—without providing any.
  • The AI boosters emphasized the incredible potential of their innovation, and Netanyahu raised practical objections to their enthusiasm. They cited futurists such as Ray Kurzweil to paint a bright picture of a post-AI world; Netanyahu cited the Bible and the medieval Jewish philosopher Maimonides to caution against upending human institutions and subordinating our existence to machines.
  • Musk matter-of-factly explained that the “very positive scenario of AI” is “actually in a lot of ways a description of heaven,” where “you can have whatever you want, you don’t need to work, you have no obligations, any illness you have can be cured,” and death is “a choice.” Netanyahu incredulously retorted, “You want this world?”
  • By the time the panel began to wind down, the Israeli leader had seemingly made up his mind. “This is like having nuclear technology in the Stone Age,” he said. “The pace of development [is] outpacing what solutions we need to put in place to maximize the benefits and limit the risks.”
  • Netanyahu was a naysayer about the Arab Spring, unwilling to join the rapturous ranks of hopeful politicians, activists, and democracy advocates. But he was also right.
  • This was less because he is a prophet and more because he is a pessimist. When it comes to grandiose predictions about a better tomorrow—whether through peace with the Palestinians, a nuclear deal with Iran, or the advent of artificial intelligence—Netanyahu always bets against. Informed by a dark reading of Jewish history, he is a cynic about human nature and a skeptic of human progress.
  • After all, no matter how far civilization has advanced, it has always found ways to persecute the powerless, most notably, in his mind, the Jews. For Netanyahu, the arc of history is long, and it bends toward whoever is bending it.
  • This is why the Israeli leader puts little stock in utopian promises, whether they are made by progressive internationalists or Silicon Valley futurists, and places his trust in hard power instead
  • “The weak crumble, are slaughtered and are erased from history while the strong, for good or for ill, survive. The strong are respected, and alliances are made with the strong, and in the end peace is made with the strong.”
  • To his many critics, myself included, Netanyahu's refusal to envision a different future makes him a "creature of the bunker," perpetually governed by fear. Although his pessimism may sometimes be vindicated, it also holds his country hostage.
  • In other words, the same cynicism that drives Netanyahu’s reactionary politics is the thing that makes him an astute interrogator of AI and its promoters. Just as he doesn’t trust others not to use their power to endanger Jews, he doesn’t trust AI companies or AI itself to police its rapidly growing capabilities.
Javier E

Book review - The Dawn of Everything: A New History of Humanity | The Inquisitive Biolo... - 0 views

  • Every few years, it seems, there is a new bestselling Big History book. And not infrequently, they have rather grandiose titles.
  • I hope to convince you why I think this book will stand the test of time better.
  • First, rather than one author’s pet theory, The Dawn of Everything is the brainchild of two outspoken writers: anthropologist David Graeber (a figurehead in the Occupy Wall Street movement and author of e.g. Bullshit Jobs) and archaeologist David Wengrow (author of e.g. What Makes Civilization?). I expect a large part of their decade-long collaboration consisted of shooting holes in each other’s arguments
  • Colonisation exposed us to new ideas that shocked and confused us. Graeber & Wengrow focus on the French coming into contact with Native Americans in Canada, and in particular on Wendat Confederacy philosopher–statesman Kandiaronk as an example of European traders, missionaries, and intellectuals debating with, and being criticized by indigenous people. Historians have downplayed how much these encounters shaped Enlightenment ideas.
  • this thought-provoking book is armed to the teeth with fascinating ideas and interpretations that go against mainstream thinking
  • Rather than yet another history book telling you how humanity got here, they take their respective disciplines to task for dealing in myths.
  • Its legacy, shaped via several iterations, is the modern textbook narrative: hunter-gathering was replaced by pastoralism and then farming; the agricultural revolution resulted in larger populations producing material surpluses; these allowed for specialist occupations but also needed bureaucracies to share and administer them to everyone; and this top-down control led to today’s nation states. Ta-daa!
  • this simplistic tale of progress ignores and downplays that there was nothing linear or inevitable about where we have ended up.
  • Take agriculture. Rather than humans enthusiastically entering into what Harari in Sapiens called a Faustian bargain with crops, there were many pathways and responses
  • Experiments show that plant domestication could have been achieved in as little as 20–30 years, so the fact that cereal domestication here took some 3,000 years questions the notion of an agricultural “revolution”. Lastly, this book includes many examples of areas where agriculture was purposefully rejected. Designating such times and places as “pre-agricultural” is misleading, write the authors, they were anti-agricultural.
  • The idea that agriculture led to large states similarly needs revision
  • correlation is not causation, and some 15–20 additional centres of domestication have since been identified that followed different paths. Some cities have previously remained hidden in the sediments of ancient river deltas until revealed by modern remote-sensing technology.
  • “extensive agriculture may thus have been an outcome, not a cause, of urbanization”
  • And cities did not automatically imply social stratification. The Dawn of Everything fascinates with its numerous examples of large settlements without ruling classes, such as Ukrainian mega-sites, the Harappan civilization, or Mexican city-states.
  • These instead relied on collective decision-making through assemblies or councils, which questions some of the assumptions of evolutionary psychology about scale: that larger human groups require complex (i.e. hierarchical) systems to organize them.
  • see what is staring them in the face
  • humans have always been very capable of consciously experimenting with different social arrangements. And—this is rarely acknowledged—they did so on a seasonal basis, spending e.g. part of the year settled in large communal groups under a leader, and another part as small, independently roving bands.
  • Throughout, Graeber & Wengrow convincingly argue that the only thing we can say about our ancestors is that "there is no single pattern. The only consistent phenomenon is the very fact of alteration […] If human beings, through most of our history, have moved back and forth fluidly between different social arrangements […] maybe the real question should be 'how did we get stuck?'"
  • Next to criticism, the authors put out some interesting ideas of their own, of which I want to quickly highlight two.
  • The first is that some of the observed variations in social arrangements resulted from schismogenesis. Anthropologist Gregory Bateson coined this term in the 1930s to describe how people define themselves against or in opposition to others, adopting behaviours and attitudes that are different.
  • The second idea is that states can be described in terms of three elementary forms of domination: control of violence, control of information, and individual charisma, which express themselves as sovereignty, administration, and competitive politics.
  • Our current states combine these three, and thus we have state-endorsed violence in the form of law enforcement and armies, bureaucracy, and the popularity contests we call elections in some countries, and monarchs, oligarchs, or tyrants in other countries. But looking at history, there is no reason why this should be and the authors provide examples of societies that showed only one or two such forms of control
  • Asking which past society most resembles today’s is the wrong question to ask. It risks slipping into an exercise in retrofitting, “which makes us scour the ancient world for embryonic versions of our modern nation states”
  • I have left unmentioned several other topics: the overlooked role of women, the legacy of Rousseau’s and Hobbes’s ideas, the origins of inequality and the flawed assumptions hiding behind that question
  • There are so many historical details and delights hiding between these covers that I was thoroughly enthralled
  • If you have any interest in big history, archaeology, or anthropology, this book is indispensable. I am confident that the questions and critiques raised here will remain relevant for a long time to come.
  • I was particularly impressed by the in-depth critique by worbsintowords on his YouTube channel What is Politics? of (so far) five videos
Javier E

Among the Disrupted - The New York Times - 0 views

  • even as technologism, which is not the same as technology, asserts itself over more and more precincts of human life, so too does scientism, which is not the same as science.
  • The notion that the nonmaterial dimensions of life must be explained in terms of the material dimensions, and that nonscientific understandings must be translated into scientific understandings if they are to qualify as knowledge, is increasingly popular inside and outside the university,
  • So, too, does the view that the strongest defense of the humanities lies not in the appeal to their utility — that literature majors may find good jobs, that theaters may economically revitalize neighborhoods
  • The contrary insistence that the glories of art and thought are not evolutionary adaptations, or that the mind is not the brain, or that love is not just biology’s bait for sex, now amounts to a kind of heresy.
  • Greif’s book is a prehistory of our predicament, of our own “crisis of man.” (The “man” is archaic, the “crisis” is not.) It recognizes that the intellectual history of modernity may be written in part as the epic tale of a series of rebellions against humanism
  • We are not becoming transhumanists, obviously. We are too singular for the Singularity. But are we becoming posthumanists?
  • In American culture right now, as I say, the worldview that is ascendant may be described as posthumanism.
  • The posthumanism of the 1970s and 1980s was more insular, an academic affair of “theory,” an insurgency of professors; our posthumanism is a way of life, a social fate.
  • In “The Age of the Crisis of Man: Thought and Fiction in America, 1933-1973,” the gifted essayist Mark Greif, who reveals himself to be also a skillful historian of ideas, charts the history of the 20th-century reckonings with the definition of “man.”
  • Here is his conclusion: “Anytime your inquiries lead you to say, ‘At this moment we must ask and decide who we fundamentally are, our solution and salvation must lie in a new picture of ourselves and humanity, this is our profound responsibility and a new opportunity’ — just stop.” Greif seems not to realize that his own book is a lasting monument to precisely such inquiry, and to its grandeur
  • “Answer, rather, the practical matters,” he counsels, in accordance with the current pragmatist orthodoxy. “Find the immediate actions necessary to achieve an aim.” But before an aim is achieved, should it not be justified? And the activity of justification may require a “picture of ourselves.” Don’t just stop. Think harder. Get it right.
  • — but rather in the appeal to their defiantly nonutilitarian character, so that individuals can know more than how things work, and develop their powers of discernment and judgment, their competence in matters of truth and goodness and beauty, to equip themselves adequately for the choices and the crucibles of private and public life.
  • Who has not felt superior to humanism? It is the cheapest target of all: Humanism is sentimental, flabby, bourgeois, hypocritical, complacent, middlebrow, liberal, sanctimonious, constricting and often an alibi for power
  • what is humanism? For a start, humanism is not the antithesis of religion, as Pope Francis is exquisitely demonstrating
  • The worldview takes many forms: a philosophical claim about the centrality of humankind to the universe, and about the irreducibility of the human difference to any aspect of our animality
  • Here is a humanist proposition for the age of Google: The processing of information is not the highest aim to which the human spirit can aspire, and neither is competitiveness in a global economy. The character of our society cannot be determined by engineers.
  • And posthumanism? It elects to understand the world in terms of impersonal forces and structures, and to deny the importance, and even the legitimacy, of human agency.
  • There have been humane posthumanists and there have been inhumane humanists. But the inhumanity of humanists may be refuted on the basis of their own worldview
  • the condemnation of cruelty toward “man the machine,” to borrow the old but enduring notion of an 18th-century French materialist, requires the importation of another framework of judgment. The same is true about universalism, which every critic of humanism has arraigned for its failure to live up to the promise of a perfect inclusiveness
  • there has never been a universalism that did not exclude. Yet the same is plainly the case about every particularism, which is nothing but a doctrine of exclusion; and the correction of particularism, the extension of its concept and its care, cannot be accomplished in its own name. It requires an idea from outside, an idea external to itself, a universalistic idea, a humanistic idea.
  • Asking universalism to keep faith with its own principles is a perennial activity of moral life. Asking particularism to keep faith with its own principles is asking for trouble.
  • there is no more urgent task for American intellectuals and writers than to think critically about the salience, even the tyranny, of technology in individual and collective life
  • a methodological claim about the most illuminating way to explain history and human affairs, and about the essential inability of the natural sciences to offer a satisfactory explanation; a moral claim about the priority, and the universal nature, of certain values, not least tolerance and compassion
  • “Our very mastery seems to escape our mastery,” Michel Serres has anxiously remarked. “How can we dominate our domination; how can we master our own mastery?”
  • universal accessibility is not the end of the story, it is the beginning. The humanistic methods that were practiced before digitalization will be even more urgent after digitalization, because we will need help in navigating the unprecedented welter
  • Searches for keywords will not provide contexts for keywords. Patterns that are revealed by searches will not identify their own causes and reasons
  • The new order will not relieve us of the old burdens, and the old pleasures, of erudition and interpretation.
  • Is all this — is humanism — sentimental? But sentimentality is not always a counterfeit emotion. Sometimes sentiment is warranted by reality.
  • The persistence of humanism through the centuries, in the face of formidable intellectual and social obstacles, has been owed to the truth of its representations of our complexly beating hearts, and to the guidance that it has offered, in its variegated and conflicting versions, for a soulful and sensitive existence
  • a complacent humanist is a humanist who has not read his books closely, since they teach disquiet and difficulty. In a society rife with theories and practices that flatten and shrink and chill the human subject, the humanist is the dissenter.
Javier E

How to Argue Fairly and Without Rancor (Hello, Thanksgiving!) - The New York Times - 1 views

  • this may be a good time to explore what psychologists and philosophers say are the most effective ways to argue
  • And by “argue” they do not mean “quarrel,” but communicate without rancor or faulty reasoning with someone who has an opposing viewpoint, with the hope of broadening one’s understanding of people and ideas.
  • Listen carefully
  • The aim of an argument should not be proving who is right, but conveying that you care about the issues,
  • Show the person with whom you are speaking that you care about what he or she says.
  • The goal should be to state your views and to hear theirs. It should not be: “I am not leaving until you admit that you are wrong, or here is what I believe, and I am not budging from this,”
  • And when you listen, go all in
  • “Don’t half-listen while figuring out what you’re going to say next,”
  • Don’t ‘drop the anchor’
  • Some people start an argument by staking their position and refusing to budge, an impulse that Dr. Cuddy called “dropping the anchor.” Instead, try to understand the other person's point of view; it does not mean you have to agree with him or her, or that you are abandoning deeply felt objections to, for example, racism or sexism, she said.
  • Mind your body language. Your body language can send messages that are more compelling than the words coming out of your mouth. Try to avoid gestures that are patronizing or defensive, like crossing your arms or clenching your jaw.
  • Maintain eye contact in a way that is not a stare-down.Lean forward slightly to show you are interested.And no eye-rolling,
  • Don't argue to win. Dr. Gutting says it helps to use neutral or charitable language when acknowledging opposing viewpoints, especially during arguments over politics. It lays the groundwork for a more effective argument on points of genuine weakness.
  • Don’t think of an argument as an opportunity to convince the other person of your view; think of it as a way to test and improve your opinions, and to gain a better understanding of the other side.
  • “People do give up views because of rational arguments against them,” Dr. Gutting said in the interview. “But this is almost always a long process, not the outcome of a single decisive encounter.”
  • Know the facts. A good argument is supported by evidence, but that is just a starting point. Sometimes, especially with political back-and-forths, one side will look only at evidence supporting its own position, conveniently leaving out the full picture,
  • Speak and listen fearlessly
  • “So for me, the condition for a conversation has to be that you are unafraid to speak courageously, and you are unafraid to tell your partner exactly what it is that you think about the world.”But a two-way argument also requires fearless listening, “even if it is me talking to a white supremacist who is trying to tell me that I am inferior,” he added. “One of the conditions for the possibility of a fruitful argument is to allow for some kind of opening up in myself to hear.”
  • “What you need to be able to do is to speak the same language,” he said. “They believe in God, and you would say: ‘You and I believe the same thing. How is it that this God who loves you can’t possibly love me?’ Is it possible that we can agree to disagree on some issues?”
Javier E

Opinion | How Behavioral Economics Took Over America - The New York Times - 0 views

  • Some behavioral interventions do seem to lead to positive changes, such as automatically enrolling children in school free lunch programs or simplifying mortgage information for aspiring homeowners. (Whether one might call such interventions “nudges,” however, is debatable.)
  • it’s not clear we need to appeal to psychology studies to make some common-sense changes, especially since the scientific rigor of these studies is shaky at best.
  • Nudges are related to a larger area of research on “priming,” which tests how behavior changes in response to what we think about or even see without noticing
  • Behavioral economics is at the center of the so-called replication crisis, a euphemism for the uncomfortable fact that the results of a significant percentage of social science experiments can’t be reproduced in subsequent trials
  • this key result was not replicated in similar experiments, undermining confidence in a whole area of study. It’s obvious that we do associate old age and slower walking, and we probably do slow down sometimes when thinking about older people. It’s just not clear that that’s a law of the mind.
  • And these attempts to “correct” human behavior are based on tenuous science. The replication crisis doesn’t have a simple solution
  • Journals have instituted reforms like having scientists preregister their hypotheses to avoid the possibility of results being manipulated during the research. But that doesn't change how many uncertain results are already out there, with a knock-on effect that ripples through huge segments of quantitative social science.
  • The Johns Hopkins science historian Ruth Leys, author of a forthcoming book on priming research, points out that cognitive science is especially prone to building future studies off disputed results. Despite the replication crisis, these fields are a “train on wheels, the track is laid and almost nothing stops them,” Dr. Leys said.
  • These cases result from lax standards around data collection, which will hopefully be corrected. But they also result from strong financial incentives: the possibility of salaries, book deals and speaking and consulting fees that range into the millions. Researchers can get those prizes only if they can show “significant” findings.
  • It is no coincidence that behavioral economics, from Dr. Kahneman to today, tends to be pro-business. Science should be not just reproducible, but also free of obvious ideology.
  • Technology and modern data science have only further entrenched behavioral economics. Its findings have greatly influenced algorithm design.
  • The collection of personal data about our movements, purchases and preferences inform interventions in our behavior from the grocery store to who is arrested by the police.
  • Setting people up for safety and success and providing good default options isn’t bad in itself, but there are more sinister uses as well. After all, not everyone who wants to exploit your cognitive biases has your best interests at heart.
  • Despite all its flaws, behavioral economics continues to drive public policy, market research and the design of digital interfaces.
  • One might think that a kind of moratorium on applying such dubious science would be in order — except that enacting one would be practically impossible. These ideas are so embedded in our institutions and everyday life that a full-scale audit of the behavioral sciences would require bringing much of our society to a standstill.
  • There is no peer review for algorithms that determine entry to a stadium or access to credit. To perform even the most banal, everyday actions, you have to put implicit trust in unverified scientific results.
  • We can’t afford to defer questions about human nature, and the social and political policies that come from them, to commercialized “research” that is scientifically questionable and driven by ideology. Behavioral economics claims that humans aren’t rational.
  • That’s a philosophical claim, not a scientific one, and it should be fought out in a rigorous marketplace of ideas. Instead of unearthing real, valuable knowledge of human nature, behavioral economics gives us “one weird trick” to lose weight or quit smoking.
  • Humans may not be perfectly rational, but we can do better than the predictably irrational consequences that behavioral economics has left us with today.