
TOK Friends: Group items tagged "disagreement"


Javier E

George Packer: Is Amazon Bad for Books? : The New Yorker

  • Amazon is a global superstore, like Walmart. It’s also a hardware manufacturer, like Apple, and a utility, like Con Edison, and a video distributor, like Netflix, and a book publisher, like Random House, and a production studio, like Paramount, and a literary magazine, like The Paris Review, and a grocery deliverer, like FreshDirect, and someday it might be a package service, like U.P.S. Its founder and chief executive, Jeff Bezos, also owns a major newspaper, the Washington Post. All these streams and tributaries make Amazon something radically new in the history of American business.
  • Amazon is not just the “Everything Store,” to quote the title of Brad Stone’s rich chronicle of Bezos and his company; it’s more like the Everything. What remains constant is ambition, and the search for new things to be ambitious about.
  • It wasn’t a love of books that led him to start an online bookstore. “It was totally based on the property of books as a product,” Shel Kaphan, Bezos’s former deputy, says. Books are easy to ship and hard to break, and there was a major distribution warehouse in Oregon. Crucially, there are far too many books, in and out of print, to sell even a fraction of them at a physical store. The vast selection made possible by the Internet gave Amazon its initial advantage, and a wedge into selling everything else.
  • it’s impossible to know for sure, but, according to one publisher’s estimate, book sales in the U.S. now make up no more than seven per cent of the company’s roughly seventy-five billion dollars in annual revenue.
  • A monopoly is dangerous because it concentrates so much economic power, but in the book business the prospect of a single owner of both the means of production and the modes of distribution is especially worrisome: it would give Amazon more control over the exchange of ideas than any company in U.S. history.
  • “The key to understanding Amazon is the hiring process,” one former employee said. “You’re not hired to do a particular job—you’re hired to be an Amazonian. Lots of managers had to take the Myers-Briggs personality tests. Eighty per cent of them came in two or three similar categories, and Bezos is the same: introverted, detail-oriented, engineer-type personality. Not musicians, designers, salesmen. The vast majority fall within the same personality type—people who graduate at the top of their class at M.I.T. and have no idea what to say to a woman in a bar.”
  • According to Marcus, Amazon executives considered publishing people “antediluvian losers with rotary phones and inventory systems designed in 1968 and warehouses full of crap.” Publishers kept no data on customers, making their bets on books a matter of instinct rather than metrics. They were full of inefficiencies, starting with overpriced Manhattan offices.
  • For a smaller house, Amazon’s total discount can go as high as sixty per cent, which cuts deeply into already slim profit margins. Because Amazon manages its inventory so well, it often buys books from small publishers with the understanding that it can’t return them, for an even deeper discount.
  • According to one insider, around 2008—when the company was selling far more than books, and was making twenty billion dollars a year in revenue, more than the combined sales of all other American bookstores—Amazon began thinking of content as central to its business. Authors started to be considered among the company’s most important customers. By then, Amazon had lost much of the market in selling music and videos to Apple and Netflix, and its relations with publishers were deteriorating.
  • In its drive for profitability, Amazon did not raise retail prices; it simply squeezed its suppliers harder, much as Walmart had done with manufacturers. Amazon demanded ever-larger co-op fees and better shipping terms; publishers knew that they would stop being favored by the site’s recommendation algorithms if they didn’t comply. Eventually, they all did.
  • Brad Stone describes one campaign to pressure the most vulnerable publishers for better terms: internally, it was known as the Gazelle Project, after Bezos suggested “that Amazon should approach these small publishers the way a cheetah would pursue a sickly gazelle.”
  • Without dropping co-op fees entirely, Amazon simplified its system: publishers were asked to hand over a percentage of their previous year’s sales on the site, as “marketing development funds.”
  • The figure keeps rising, though less for the giant pachyderms than for the sickly gazelles. According to the marketing executive, the larger houses, which used to pay two or three per cent of their net sales through Amazon, now relinquish five to seven per cent of gross sales, pushing Amazon’s percentage discount on books into the mid-fifties. Random House currently gives Amazon an effective discount of around fifty-three per cent. [A worked arithmetic sketch follows these annotations.]
  • In December, 1999, at the height of the dot-com mania, Time named Bezos its Person of the Year. “Amazon isn’t about technology or even commerce,” the breathless cover article announced. “Amazon is, like every other site on the Web, a content play.” Yet this was the moment, Marcus said, when “content” people were “on the way out.”
  • In 2004, he set up a lab in Silicon Valley that would build Amazon’s first piece of consumer hardware: a device for reading digital books. According to Stone’s book, Bezos told the executive running the project, “Proceed as if your goal is to put everyone selling physical books out of a job.”
  • By 2010, Amazon controlled ninety per cent of the market in digital books—a dominance that almost no company, in any industry, could claim. Its prohibitively low prices warded off competition.
  • Lately, digital titles have levelled off at about thirty per cent of book sales.
  • The literary agent Andrew Wylie (whose firm represents me) says, “What Bezos wants is to drag the retail price down as low as he can get it—a dollar-ninety-nine, even ninety-nine cents. That’s the Apple play—‘What we want is traffic through our device, and we’ll do anything to get there.’ ” If customers grew used to paying just a few dollars for an e-book, how long before publishers would have to slash the cover price of all their titles?
  • As Apple and the publishers see it, the ruling ignored the context of the case: when the key events occurred, Amazon effectively had a monopoly in digital books and was selling them so cheaply that it resembled predatory pricing—a barrier to entry for potential competitors. Since then, Amazon’s share of the e-book market has dropped, levelling off at about sixty-five per cent, with the rest going largely to Apple and to Barnes & Noble, which sells the Nook e-reader. In other words, before the feds stepped in, the agency model introduced competition to the market
  • But the court’s decision reflected a trend in legal thinking among liberals and conservatives alike, going back to the seventies, that looks at antitrust cases from the perspective of consumers, not producers: what matters is lowering prices, even if that goal comes at the expense of competition. Barry Lynn, a market-policy expert at the New America Foundation, said, “It’s one of the main factors that’s led to massive consolidation.”
  • The combination of ceaseless innovation and low-wage drudgery makes Amazon the epitome of a successful New Economy company. It’s hiring as fast as it can—nearly thirty thousand employees last year.
  • brick-and-mortar retailers employ forty-seven people for every ten million dollars in revenue earned; Amazon employs fourteen.
  • Since the arrival of the Kindle, the tension between Amazon and the publishers has become an open battle. The conflict reflects not only business antagonism amid technological change but a division between the two coasts, with different cultural styles and a philosophical disagreement about what techies call “disruption.”
  • Bezos told Charlie Rose, “Amazon is not happening to bookselling. The future is happening to bookselling.”
  • In Grandinetti’s view, the Kindle “has helped the book business make a more orderly transition to a mixed print and digital world than perhaps any other medium.” Compared with people who work in music, movies, and newspapers, he said, authors are well positioned to thrive. The old print world of scarcity—with a limited number of publishers and editors selecting which manuscripts to publish, and a limited number of bookstores selecting which titles to carry—is yielding to a world of digital abundance. Grandinetti told me that, in these new circumstances, a publisher’s job “is to build a megaphone.”
  • it offers an extremely popular self-publishing platform. Authors become Amazon partners, earning up to seventy per cent in royalties, as opposed to the fifteen per cent that authors typically make on hardcovers. Bezos touts the biggest successes, such as Theresa Ragan, whose self-published thrillers and romances have been downloaded hundreds of thousands of times. But one survey found that half of all self-published authors make less than five hundred dollars a year. [See the arithmetic sketch after these annotations.]
  • The business term for all this clear-cutting is “disintermediation”: the elimination of the “gatekeepers,” as Bezos calls the professionals who get in the customer’s way. There’s a populist inflection to Amazon’s propaganda, an argument against élitist institutions and for “the democratization of the means of production”—a common line of thought in the West Coast tech world.
  • “Book publishing is a very human business, and Amazon is driven by algorithms and scale,” Sargent told me. When a house gets behind a new book, “well over two hundred people are pushing your book all over the place, handing it to people, talking about it. A mass of humans, all in one place, generating tremendous energy—that’s the magic potion of publishing. . . . That’s pretty hard to replicate in Amazon’s publishing world, where they have hundreds of thousands of titles.”
  • By producing its own original work, Amazon can sell more devices and sign up more Prime members—a major source of revenue.
  • Like the publishing venture, Amazon Studios set out to make the old “gatekeepers”—in this case, Hollywood agents and executives—obsolete. “We let the data drive what to put in front of customers,” Carr told the Wall Street Journal. “We don’t have tastemakers deciding what our customers should read, listen to, and watch.”
  • book publishers have been consolidating for several decades, under the ownership of media conglomerates like News Corporation, which squeeze them for profits, or holding companies such as Rivergroup, which strip them to service debt. The effect of all this corporatization, as with the replacement of independent booksellers by superstores, has been to privilege the blockbuster.
  • Publishers sometimes pass on this cost to authors, by redefining royalties as a percentage of the publisher’s receipts, not of the book’s list price. Recently, publishers say, Amazon began demanding an additional payment, amounting to approximately one per cent of net sales.
  • the long-term outlook is discouraging. This is partly because Americans don’t read as many books as they used to—they are too busy doing other things with their devices—but also because of the relentless downward pressure on prices that Amazon enforces.
  • The digital market is awash with millions of barely edited titles, most of it dreck.
  • Amazon believes that its approach encourages ever more people to tell their stories to ever more people, and turns writers into entrepreneurs; the price per unit might be cheap, but the higher number of units sold, and the accompanying royalties, will make authors wealthier.
  • In Friedman’s view, selling digital books at low prices will democratize reading: “What do you want as an author—to sell books to as few people as possible for as much as possible, or for as little as possible to as many readers as possible?”
  • The real talent, the people who are writers because they happen to be really good at writing—they aren’t going to be able to afford to do it.”
  • Seven-figure bidding wars still break out over potential blockbusters, even though these battles often turn out to be follies. The quest for publishing profits in an economy of scarcity drives the money toward a few big books. So does the gradual disappearance of book reviewers and knowledgeable booksellers, whose enthusiasm might have rescued a book from drowning in obscurity. When consumers are overwhelmed with choices, some experts argue, they all tend to buy the same well-known thing.
  • These trends point toward what the literary agent called “the rich getting richer, the poor getting poorer.” A few brand names at the top, a mass of unwashed titles down below, the middle hollowed out: the book business in the age of Amazon mirrors the widening inequality of the broader economy.
  • “If they did, in my opinion they would save the industry. They’d lose thirty per cent of their sales, but they would have an additional thirty per cent for every copy they sold, because they’d be selling directly to consumers. The industry thinks of itself as Procter & Gamble. What gave publishers the idea that this was some big goddam business? It’s not—it’s a tiny little business, selling to a bunch of odd people who read.”
  • Bezos is right: gatekeepers are inherently élitist, and some of them have been weakened, in no small part, because of their complacency and short-term thinking. But gatekeepers are also barriers against the complete commercialization of ideas, allowing new talent the time to develop and learn to tell difficult truths. When the last gatekeeper but one is gone, will Amazon care whether a book is any good? ♦
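Two figures quoted above reward a little arithmetic: the co-op fees that push Amazon’s effective discount “into the mid-fifties,” and the seventy-versus-fifteen-per-cent royalty comparison. Here is a minimal sketch; the base wholesale discount and the list prices are assumptions chosen for illustration, not figures from the article.

```python
# Hedged, illustrative arithmetic only; assumed inputs are marked.

# 1. How "marketing development funds" can push the effective discount
#    into the mid-fifties.
base_discount = 0.50      # assumed standard wholesale discount
coop_fee = 0.05           # "five to seven per cent of gross sales"
print(f"Effective discount: {base_discount + coop_fee:.0%}")  # 55%

# 2. Per-copy author earnings under the two royalty rates quoted.
ebook_price = 2.99        # assumed self-published e-book list price
hardcover_price = 27.95   # assumed hardcover list price
print(f"Self-published e-book (70%): ${ebook_price * 0.70:.2f}")     # ~$2.09
print(f"Traditional hardcover (15%): ${hardcover_price * 0.15:.2f}")  # ~$4.19
```

On these assumed prices the hardcover still pays more per copy, which is why sales volume, not the royalty rate alone, decides whether self-publishing pays.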
Javier E

Our Dangerous Inability to Agree on What is TRUE | Risk: Reason and Reality | Big Think

  • Given that human cognition is never the product of pure dispassionate reason, but a subjective interpretation of the facts based on our feelings and biases and instincts, when can we ever say that we know who is right and who is wrong, about anything? When can we declare a fact so established that it’s fair to say, without being called arrogant, that those who deny this truth don’t just disagree…that they’re just plain wrong?
  • This isn’t about matters of faith, or questions of ultimately unknowable things which by definition cannot be established by fact. This is a question about what is knowable, and provable by careful objective scientific inquiry, a process which includes challenging skepticism rigorously applied precisely to establish what, beyond any reasonable doubt, is in fact true. The way evolution has been established.
  • With enough careful investigation and scrupulously challenged evidence, we can establish knowable truths that are not just the product of our subjective motivated reasoning. We can apply our powers of reason and our ability to objectively analyze the facts and get beyond the point where what we 'know' is just an interpretation of the evidence through the subconscious filters of who we trust and our biases and instincts. We can get to the point where if someone wants to continue to believe that the sun revolves around the earth, or that vaccines cause autism, or that evolution is a deceit, it is no longer arrogant - though it may still be provocative - to call those people wrong.
  • Here is a truth with which I hope we can all agree. Our subjective system of cognition can be dangerous. It can produce perceptions that conflict with the evidence, what I call The Perception Gap, which can in turn produce profound harm.
  • The Perception Gap can lead to disagreements that create destructive and violent social conflict, to dangerous personal choices that feel safe but aren’t, and to policies more consistent with how we feel than what is in fact in our best interest. The Perception Gap may in fact be potentially more dangerous than any individual risk we face.
  • We need to recognize the greater threat that our subjective system of cognition can pose, and in the name of our own safety and the welfare of the society on which we depend, do our very best to rise above it or, when we can’t, account for this very real danger in the policies we adopt.
  • we have an obligation to confront our own ideological priors. We have an obligation to challenge ourselves, to push ourselves, to be suspicious of conclusions that are too convenient, to be sure that we're getting it right.
  • subjective cognition is built-in, subconscious, beyond free will, and unavoidably leads to different interpretations of the same facts.
  • Views that have more to do with competing tribal biases than objective interpretations of the evidence create destructive and violent conflict.
Javier E

Wikipedia China Becomes Front Line for Views on Language and Culture - NYTimes.com

  • Wikipedia editors, all volunteers, present opposing views on politics, history and traditional Chinese culture — in essence, different versions of China. Compounding the issue are language differences: Mandarin is the official language in mainland China and Taiwan, while the majority in Hong Kong speak Cantonese. But mainland China uses simplified characters, while Taiwan and Hong Kong use traditional script.
  • users across the region have experienced “some form of cultural shock,” which triggers arguments. “Users from different areas have received different education, and have been influenced by different political ideologies,” Mr. Wong said. “We discovered that the things we learned as a kid were totally different from each other.”
  • Today, the site has five settings: simplified Chinese for mainland China; orthodox Chinese for Taiwan; traditional Chinese for Hong Kong; traditional Chinese for Macau; and simplified Chinese for Singapore and Malaysia.
  • “This software feature could also be seen as an embodiment of Wikipedia’s neutrality principle, in that it brings together editors from different political systems and enables productive discussion and collaboration between them,”
  • Most Internet forums have a parochial focus, but Chinese Wikipedia offers rare opportunities for all Chinese-speaking people to engage in discussion. “There is some form of integration across the region,” he said. “But that does not mean that mainland China assimilates Taiwan or Hong Kong. Every area stands on the same ground.” He pointed out that the flexible language options put all countries on an equal footing. For example, this year Macau was given its own setting, despite having a population of only 500,000.
  • “There was more bickering in the early days, but the discussion matured at a quick pace after 2009,”
  • “When I first joined the Chinese Wikipedia, I was an ‘angry youth,’” said Wilson Ye, a 17-year-old Wikipedia editor from Shanghai who started writing entries four years ago. “I was furious when I came across terms like Taiwan and the Republic of China. But after more interactions, I understand how people in Taiwan think, and I become much more tolerant.”
  • Isaac Mao, a fellow at the Berkman Center for Internet and Society at Harvard University, attributed the maturation to the fact that more users are learning what Wikipedia is all about. “It all came back to the ‘five principles’ of Wikipedia, including authenticity, accuracy, neutral point of view and the use of references,” Mr. Mao said. “If there are disagreements over management and editing, people can engage in discussion based on these principles. Such atmosphere has been built up in the Chinese Wikipedia community gradually.”
Javier E

How the Internet Gets Inside Us : The New Yorker

  • It isn’t just that we’ve lived one technological revolution among many; it’s that our technological revolution is the big social revolution that we live with.
  • The idea, for instance, that the printing press rapidly gave birth to a new order of information, democratic and bottom-up, is a cruel cartoon of the truth. If the printing press did propel the Reformation, one of the biggest ideas it propelled was Luther’s newly invented absolutist anti-Semitism. And what followed the Reformation wasn’t the Enlightenment, a new era of openness and freely disseminated knowledge. What followed the Reformation was, actually, the Counter-Reformation, which used the same means—i.e., printed books—to spread ideas about what jerks the reformers were, and unleashed a hundred years of religious warfare.
  • Robert K. Logan’s “The Sixth Language,” begins with the claim that cognition is not a little processing program that takes place inside your head, Robby the Robot style. It is a constant flow of information, memory, plans, and physical movements, in which as much thinking goes on out there as in here. If television produced the global village, the Internet produces the global psyche: everyone keyed in like a neuron, so that to the eyes of a watching Martian we are really part of a single planetary brain. Contraptions don’t change consciousness; contraptions are part of consciousness.
  • In a practical, immediate way, one sees the limits of the so-called “extended mind” clearly in the mob-made Wikipedia, the perfect product of that new vast, supersized cognition: when there’s easy agreement, it’s fine, and when there’s widespread disagreement on values or facts, as with, say, the origins of capitalism, it’s fine, too; you get both sides. The trouble comes when one side is right and the other side is wrong and doesn’t know it. The Shakespeare authorship page and the Shroud of Turin page are scenes of constant conflict and are packed with unreliable information. Creationists crowd cyberspace every bit as effectively as evolutionists, and extend their minds just as fully. Our trouble is not the over-all absence of smartness but the intractable power of pure stupidity, and no machine, or mind, seems extended enough to cure that.
  • “The medium does matter,” Carr has written. “As a technology, a book focuses our attention, isolates us from the myriad distractions that fill our everyday lives. A networked computer does precisely the opposite. It is designed to scatter our attention. . . . Knowing that the depth of our thought is tied directly to the intensity of our attentiveness, it’s hard not to conclude that as we adapt to the intellectual environment of the Net our thinking becomes shallower.”
  • when people struggle to describe the state that the Internet puts them in they arrive at a remarkably familiar picture of disassociation and fragmentation. Life was once whole, continuous, stable; now it is fragmented, multi-part, shimmering around us, unstable and impossible to fix.
  • The odd thing is that this complaint, though deeply felt by our contemporary Better-Nevers, is identical to Baudelaire’s perception about modern Paris in 1855, or Walter Benjamin’s about Berlin in 1930, or Marshall McLuhan’s in the face of three-channel television (and Canadian television, at that) in 1965.
  • If all you have is a hammer, the saying goes, everything looks like a nail; and, if you think the world is broken, every machine looks like the hammer that broke it.
  • What we live in is not the age of the extended mind but the age of the inverted self. The things that have usually lived in the darker recesses or mad corners of our mind—sexual obsessions and conspiracy theories, paranoid fixations and fetishes—are now out there: you click once and you can read about the Kennedy autopsy or the Nazi salute or hog-tied Swedish flight attendants. But things that were once external and subject to the social rules of caution and embarrassment—above all, our interactions with other people—are now easily internalized, made to feel like mere workings of the id left on its own.
  • Anyway, the crucial revolution was not of print but of paper: “During the later Middle Ages a staggering growth in the production of manuscripts, facilitated by the use of paper, accompanied a great expansion of readers outside the monastic and scholastic contexts.” For that matter, our minds were altered less by books than by index slips. Activities that seem quite twenty-first century, she shows, began when people cut and pasted from one manuscript to another; made aggregated news in compendiums; passed around précis. “Early modern finding devices” were forced into existence: lists of authorities, lists of headings.
  • The book index was the search engine of its era, and needed to be explained at length to puzzled researchers—as, for that matter, did the Hermione-like idea of “looking things up.” That uniquely evil and necessary thing, the comprehensive review of many different books on a related subject, with the necessary oversimplification of their ideas that it demanded, was already around in 1500, and already being accused of missing all the points.
  • at any given moment, our most complicated machine will be taken as a model of human intelligence, and whatever media kids favor will be identified as the cause of our stupidity. When there were automatic looms, the mind was like an automatic loom; and, since young people in the loom period liked novels, it was the cheap novel that was degrading our minds. When there were telephone exchanges, the mind was like a telephone exchange, and, in the same period, since the nickelodeon reigned, moving pictures were making us dumb. When mainframe computers arrived and television was what kids liked, the mind was like a mainframe and television was the engine of our idiocy. Some machine is always showing us Mind; some entertainment derived from the machine is always showing us Non-Mind.
  • Blair argues that the sense of “information overload” was not the consequence of Gutenberg but already in place before printing began.
  • A social network is crucially different from a social circle, since the function of a social circle is to curb our appetites and of a network to extend them.
  • And so the peacefulness, the serenity that we feel away from the Internet, and which all the Better-Nevers rightly testify to, has less to do with being no longer harried by others than with being less oppressed by the force of your own inner life. Shut off your computer, and your self stops raging quite as much or quite as loud.
  • Now television is the harmless little fireplace over in the corner, where the family gathers to watch “Entourage.” TV isn’t just docile; it’s positively benevolent. This makes you think that what made television so evil back when it was evil was not its essence but its omnipresence. Once it is not everything, it can be merely something. The real demon in the machine is the tirelessness of the user.
  • the Internet screen has always been like the palantír in Tolkien’s “Lord of the Rings”—the “seeing stone” that lets the wizards see the entire world. Its gift is great; the wizard can see it all. Its risk is real: evil things will register more vividly than the great mass of dull good. The peril isn’t that users lose their knowledge of the world. It’s that they can lose all sense of proportion. You can come to think that the armies of Mordor are not just vast and scary, which they are, but limitless and undefeatable, which they aren’t.
Javier E

The New Atlantis » Science and the Left

  • A casual observer of American politics in recent years could be forgiven for imagining that the legitimacy of scientific inquiry and empirical knowledge are under assault by the right, and that the left has mounted a heroic defense. Science is constantly on the lips of Democratic politicians and liberal activists, and is generally treated by them as a vulnerable and precious inheritance being pillaged by Neanderthals.
  • But beneath these grave accusations, it turns out, are some remarkably flimsy grievances, most of which seem to amount to political disputes about policy questions in which science plays a role.
  • But if this notion of a “war on science” tells us little about the right, it does tell us something important about the American left and its self-understanding. That liberals take attacks against their own political preferences to be attacks against science helps us see the degree to which they identify themselves—their ideals, their means, their ends, their cause, and their culture—with the modern scientific enterprise.
  • There is indeed a deep and well-established kinship between science and the left, one that reaches to the earliest days of modern science and politics and has grown stronger with time. Even though they go astray in caricaturing conservatives as anti-science Luddites, American liberals and progressives are not mistaken to think of themselves as the party of science. They do, however, tend to focus on only a few elements and consequences of that connection, and to look past some deep and complicated problems in the much-valued relationship. The profound ties that bind science and the left can teach us a great deal about both.
  • It is not unfair to suggest that the right emerged in response to the left, as the anti-traditional theory and practice of the French Revolution provoked a powerful reaction in defense of a political order built to suit human nature and tested and tried through generations of practice and reform.
  • The left, however, did not emerge in response to the right. It emerged in response to a new set of ideas and intellectual possibilities that burst onto the European scene in the seventeenth and eighteenth centuries—ideas and possibilities that we now think of as modern scientific thought.
  • Both as action and as knowledge, then, science has been a source of inspiration for progressives and for liberals, and its advancement has been one of their great causes. That does not mean that science captures all there is to know about the left. Far from it. The left has always had a deeply romantic and even anti-rationalist side too, reaching back almost as far as its scientism. But in its basic view of knowledge, power, nature, and man, the left owes much to science. And in the causes it chooses to advance in our time, it often looks to scientific thought and practice for guidance. In its most essential disagreements with the right—in particular, about tradition—the vision defended by the left is also a vision of scientific progress.
  • Not all environmentalism indulges in such anti-humanism, to be sure. But in all of its forms, the environmentalist ethic calls for a science of beholding nature, not of mastering it. Far from viewing nature as the oppressor, this new vision sees nature as a precious, vulnerable, and almost benevolent passive environment, held in careful balance, and under siege by human action and human power. This view of nature calls for human restraint and humility—and for diminished expectations of human power and potential. The environmental movement is, in this sense, not a natural fit for the progressive and forward-looking mentality of the left. Indeed, in many important respects environmentalism is deeply conservative. It takes no great feat of logic to show that conservation is conservative, of course, but the conservatism of the environmental movement runs far deeper than that. The movement seeks to preserve a given balance which we did not create, are not capable of fully understanding, and should not delude ourselves into imagining we can much improve—in other words, its attitude toward nature is much like the attitude of conservatism toward society.
  • Moreover, contemporary environmentalism is deeply moralistic. It speaks of duties and responsibilities, of curbing arrogance and vice.
  • But whatever the reason, environmentalism, and with it a worldview deeply at odds with that behind the scientific enterprise, has come to play a pivotal role in the thinking of the left.
  • The American left seeks to be both the party of science and the party of equality. But in the coming years, as the biotechnology revolution progresses, it will increasingly be forced to confront the powerful tension between these two aspirations.
  • To choose well, the American left will need first to understand that a choice is even needed at all—that this tension exists between the ideals of progressives, and the ideology of science.
  • The answer, as ever, is moderation. The American left, like the American right, must understand science as a human endeavor with ethical purposes and practical limits, one which must be kept within certain boundaries by a self-governing people. In failing to observe and to enforce those boundaries, the left threatens its own greatest assets, and exacerbates tensions at the foundations of American political life. To make the most of the benefits scientific advancement can bring us, we must be alert to the risks it may pose. That awareness is endangered by the closing of the gap between science and the left—and the danger is greatest for the left itself.
Javier E

The Central Question: Is It 1938? - The Atlantic

  • differences on Iran policy correspond to answers to this one question: Whether the world of 2015 is fundamentally similar to, or different from, the world of 1938.
  • the idea of recurring historic episodes has a powerful effect on decision-making in the here and now. Disagreements over policy often come down to the search for the right historic pattern to apply.
  • the idea that Europe on the eve of the Holocaust is the most useful guide to the world in 2015 runs through arguments about Iran policy. And if that is the correct model to apply, the right "picture in our heads" as Walter Lippmann put it in Public Opinion, then these conclusions naturally follow:
  • • The threatening power of the time—Nazi Germany then, the Islamists' Iran now—is a force of unalloyed evil whose very existence threatens decent life everywhere.
  • • That emerging power cannot be reasoned or bargained with but must ultimately be stopped and broken
  • • "Compromisers" are in fact appeasers who are deluding themselves about these realities
  • • The appeasers' blindness endangers people all around the world but poses an especially intolerable threat to Jews
  • • As a result of all these factors, no deal with such an implacable enemy is preferable to an inevitably flawed and Munich-like false-hope deal.
  • Also, and crucially, it means that the most obvious criticism of the speech—what's Netanyahu's plan for getting Iran to agree?—is irrelevant. What was the Allies' "plan" for getting Hitler to agree? The plan was to destroy his regime.
  • If, on the other hand, you think that the contrasts with 1938 are more striking than the similarities, you see things differently. As a brief reminder of the contrasts: the Germany of 1938 was much richer and more powerful than the Iran of today. Germany was rapidly expansionist; Iran, despite its terrorist work through proxies, has not been. The Nazi leaders had engulfed the world in war less than a decade after taking power. Iran's leaders, oppressive and destructive, have not shown similar suicidal recklessness. European Jews of 1938 were stateless, unarmed, and vulnerable. Modern Israel is a powerful, nuclear-armed force. Moreover, the world after the first wartime use of nuclear weapons, of course by the United States, is different from the world before that point.
  • Here's what I understand the more clearly after these past few weeks' drama over Prime Minister Netanyahu's speech. These differences in historic model are deep and powerful, and people with one model in mind are not going to convince people with the other mental picture.
Javier E

How Poor Are the Poor? - NYTimes.com

  • “Anyone who studies the issue seriously understands that material poverty has continued to fall in the U.S. in recent decades, primarily due to the success of anti-poverty programs” and the declining cost of “food, air-conditioning, communications, transportation, and entertainment,”
  • Despite the rising optimism, there are disagreements over how many poor people there are and the conditions they live under. There are also questions about the problem of relative poverty, what we are now calling inequality.
  • Jencks argues that the actual poverty rate has dropped over the past five decades – far below the official government level — if poverty estimates are adjusted for food and housing benefits, refundable tax credits and a better method of determining inflation rates. In Jencks’s view, the war on poverty worked.
  • Democratic supporters of safety net programs can use Jencks’s finding that poverty has dropped below 5 percent as evidence that the war on poverty has been successful.
  • At the same time liberals are wary of positive news because, as Jencks notes: It is easier to rally support for such an agenda by saying that the problem in question is getting worse.
  • The plus side for conservatives of Jencks’s low estimate of the poverty rate is the implication that severe poverty has largely abated, which then provides justification for allowing enemies of government entitlement programs to further cut social spending.
  • At the same time, however, Jencks’s data undermines Republican claims that the war on poverty has been a failure – a claim exemplified by Ronald Reagan’s famous 1987 quip: “In the sixties we waged a war on poverty, and poverty won.”
  • Jencks’s conclusion: “The absolute poverty rate has declined dramatically since President Johnson launched his war on poverty in 1964.” At 4.8 percent, Jencks’s calculation is the lowest poverty estimate by a credible expert in the field.
  • his conclusion — that instead of the official count of 45.3 million people living in poverty, the number of poor people in America is just under 15 million — understates the scope of hardship in this country.
  • There are strong theoretical justifications for the use of a relative poverty measure. The Organization for Economic Cooperation and Development puts it this way: In order to participate fully in the social life of a community, individuals may need a level of resources that is not too inferior to the norms of a community. For example, the clothing budget that allows a child not to feel ashamed of his school attire is much more related to national living standards than to strict requirements for physical survival.
  • Using a relative measure shows that the United States lags well behind other developed countries: If you use the O.E.C.D. standard of 50 percent of median income as a poverty line, the United States looks pretty bad in cross-national relief. We have a relative poverty rate exceeded only by Chile, Turkey, Mexico and Israel (which has seen a big increase in inequality in recent years). And that rate in 2010 was essentially where it was in 1995. [A toy calculation follows these annotations.]
  • While the United States “has achieved real progress in reducing absolute poverty over the past 50 years,” according to Burtless, “the country may have made no progress at all in reducing the relative economic deprivation of folks at the bottom.”
  • the heart of the dispute: How severe is the problem of poverty?
  • Kathryn Edin, a professor of sociology at Johns Hopkins, and Luke Schaefer, a professor of social work at the University of Michigan, contend that the poverty debate overlooks crucial changes that have taken place within the population of the poor.
  • welfare reform, signed into law by President Clinton in 1996 (the Personal Responsibility and Work Opportunity Act), which limited eligibility for welfare benefits to five years. The limitation has forced many of the poor off welfare: over the past 19 years, the percentage of families falling under the official poverty line who receive welfare benefits has fallen to 26 percent from 68 percent. Currently, three-quarters of those in poverty, under the official definition, receive no welfare payments.
  • The enactment of expanded benefits for the working poor through the earned-income tax credit and the child tax credit. According to Edin and Schaefer, the consequence of these changes, taken together, has been to divide the poor who no longer receive welfare into two groups. The first group is made up of those who have gone to work and have qualified for tax credits. Expanded tax credits lifted about 3.2 million children out of poverty in 2013.
  • The second group, though, has really suffered. These are the very poor who are without work, part of a population that is struggling desperately. Edin and Schaefer write that among the losers are an estimated 3.4 million “children who over the course of a year live for at least three months under a $2 per person per day threshold.”
  • Focusing on these findings, Mishel argues, diverts attention from the more serious problem of “the failure of the labor market to adequately reward low-wage workers.” To support his case, Mishel points out that hourly pay for those in the bottom fifth grew only 7.7 percent from 1979 to 2007, while productivity grew by 64 percent, and education levels among workers in this quintile substantially improved.
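To make the absolute-versus-relative distinction in these annotations concrete, here is a minimal sketch; the incomes and the fixed threshold are made-up illustrative numbers, not survey data.

```python
# Toy comparison of the two poverty measures; all figures are invented.
incomes = [8_000, 12_000, 16_000, 22_000, 30_000, 40_000, 55_000, 70_000, 90_000]

absolute_line = 11_000                      # assumed fixed threshold
median = sorted(incomes)[len(incomes) // 2]
relative_line = 0.5 * median                # the O.E.C.D. convention quoted above

abs_rate = sum(i < absolute_line for i in incomes) / len(incomes)
rel_rate = sum(i < relative_line for i in incomes) / len(incomes)
print(f"Absolute poverty rate: {abs_rate:.0%}")   # 11%: a fixed bar
print(f"Relative poverty rate: {rel_rate:.0%}")   # 22%: moves with the median
```

The same population can show progress on an absolute measure while standing still on a relative one, which is exactly the disagreement between Jencks and Burtless above.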
kushnerha

The Data Against Kant - The New York Times

  • THE history of moral philosophy is a history of disagreement, but on one point there has been virtual unanimity: It would be absurd to suggest that we should do what we couldn’t possibly do.
  • This principle — that “ought” implies “can,” that our moral obligations can’t exceed our abilities — played a central role in the work of Immanuel Kant and has been widely accepted since.
  • His thought experiments go something like this: Suppose that you and a friend are both up for the same job in another city. She interviewed last weekend, and your flight for the interview is this evening. Your car is in the shop, though, so your friend promises to drive you to the airport. But on the way, her car breaks down — the gas tank is leaking — so you miss your flight and don’t get the job. Would it make any sense to tell your friend, stranded at the side of the road, that she ought to drive you to the airport? The answer seems to be an obvious no (after all, she can’t drive you), and most philosophers treat this as all the confirmation they need for the principle. Suppose, however, that the situation is slightly different. What if your friend intentionally punctures her own gas tank to make sure that you miss the flight and she gets the job? In this case, it makes perfect sense to insist that your friend still has an obligation to drive you to the airport. In other words, we might indeed say that someone ought to do what she can’t — if we’re blaming her.
  • In our study, we presented hundreds of participants with stories like the one above and asked them questions about obligation, ability and blame. Did they think someone should keep a promise she made but couldn’t keep? Was she even capable of keeping her promise? And how much was she to blame for what happened?
  • We found a consistent pattern, but not what most philosophers would expect. “Ought” judgments depended largely on concerns about blame, not ability. With stories like the one above, in which a friend intentionally sabotages you, 60 percent of our participants said that the obligation still held — your friend still ought to drive you to the airport. But with stories in which the inability to help was accidental, the obligation all but disappeared. Now, only 31 percent of our participants said your friend still ought to drive you. [A rough significance check follows these annotations.]
  • Professor Sinnott-Armstrong’s unorthodox intuition turns out to be shared by hundreds of nonphilosophers. So who is right? The vast majority of philosophers, or our participants? One possibility is that our participants were wrong, perhaps because their urge to blame impaired the accuracy of their moral judgments. To test this possibility, we stacked the deck in favor of philosophical orthodoxy: We had the participants look at cases in which the urge to assign blame would be lowest — that is, only the cases in which the car accidentally broke down. Even still, we found no relationship between “ought” and “can.” The only significant relationship was between “ought” and “blame.”
  • This finding has an important implication: Even when we say that someone has no obligation to keep a promise (as with your friend whose car accidentally breaks down), it seems we’re saying it not because she’s unable to do it, but because we don’t want to unfairly blame her for not keeping it. Again, concerns about blame, not about ability, dictate how we understand obligation.
  • While this one study alone doesn’t refute Kant, our research joins a recent salvo of experimental work targeting the principle that “ought” implies “can.” At the very least, philosophers can no longer treat this principle as obviously true.
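For a sense of how decisive the 60-versus-31-percent split reported above is, here is a hedged sketch of a standard two-proportion z-test. The group sizes are assumptions; the article says only that hundreds of participants took part.

```python
from math import sqrt

# Assumed equal groups of 150 (the study reports only "hundreds" overall).
n1 = n2 = 150
p1, p2 = 0.60, 0.31     # "ought" judgments: sabotage vs. accidental breakdown

p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
print(f"z = {z:.1f}")   # ~5.0, far beyond the conventional 1.96 cutoff
```

Under these assumed group sizes the gap is far too large to be sampling noise, which fits the authors' description of the pattern as consistent.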
Javier E

The Real Victims of Victimhood - The New York Times

  • BACK in 1993, the misanthropic art critic Robert Hughes published a grumpy, entertaining book called “Culture of Complaint,” in which he predicted that America was doomed to become increasingly an “infantilized culture” of victimhood. It was a rant against what he saw as a grievance industry appearing all across the political spectrum.
  • Members of one group were prompted to write a short essay about a time when they felt bored; the other to write about “a time when your life seemed unfair. Perhaps you felt wronged or slighted by someone.” After writing the essay, the participants were interviewed and asked if they wanted to help the scholars in a simple, easy task. The results were stark. Those who wrote the essays about being wronged were 26 percent less likely to help the researchers, and were rated by the researchers as feeling 13 percent more entitled.
  • “Victimhood culture” has now been identified as a widening phenomenon by mainstream sociologists. And it is impossible to miss the obvious examples all around us.
  • On campuses, activists interpret ordinary interactions as “microaggressions” and set up “safe spaces” to protect students from certain forms of speech. And presidential candidates on both the left and the right routinely motivate supporters by declaring that they are under attack by immigrants or wealthy people.
  • victimhood makes it more and more difficult for us to resolve political and social conflicts. The culture feeds a mentality that crowds out a necessary give and take — the very concept of good-faith disagreement — turning every policy difference into a pitched battle between good (us) and evil (them).
  • Consider a 2014 study in the Proceedings of the National Academy of Sciences, which examined why opposing groups, including Democrats and Republicans, found compromise so difficult. The researchers concluded that there was a widespread political “motive attribution asymmetry,” in which both sides attributed their own group’s aggressive behavior to love, but the opposite side’s to hatred. Today, millions of Americans believe that their side is basically benevolent while the other side is evil and out to get them.
  • the intervening two decades have made Mr. Hughes look prophetic.
  • In a separate experiment, the researchers found that members of the unfairness group were 11 percent more likely to express selfish attitudes. In a comical and telling aside, the researchers noted that the victims were more likely than the nonvictims to leave trash behind on the desks and to steal the experimenters’ pens.
  • Does this mean that we should reject all claims that people are victims? Of course not. Some people are indeed victims in America — of crime, discrimination or deprivation. They deserve our empathy and require justice.
  • The problem is that the line is fuzzy between fighting for victimized people and promoting a victimhood culture.
  • look at the role of free speech in the debate. Victims and their advocates always rely on free speech and open dialogue to articulate unpopular truths. They rely on free speech to assert their right to speak. Victimhood culture, by contrast, generally seeks to restrict expression in order to protect the sensibilities of its advocates.
  • look at a movement’s leadership. The fight for victims is led by aspirational leaders who challenge us to cultivate higher values. They insist that everyone is capable of — and has a right to — earned success. They articulate visions of human dignity. But the organizations and people who ascend in a victimhood culture are very different. Some set themselves up as saviors; others focus on a common enemy. In all cases, they treat people less as individuals and more as aggrieved masses.
Javier E

The National Book Awards Haul Translators Out of Obscurity - The Atlantic

  • In 2018, American literature no longer means literature written by Americans, for Americans, about America. It means literature that, wherever it comes from, whatever nation it describes, American readers recognize as relevant to them, as familiar. Foreign is no longer foreign
  • the question of how “foreign” a translation should “feel” provokes fierce disagreement. When you open a translated novel from overseas, do you want to sense its author’s French, German, Swedish, Spanish or Italian sensibility, even if that breaks the spell of your reading experience? Or do you want to feel as if the book had magically converted itself into flawless, easeful English, attuned to your own idiom? (This is called the “foreignization vs. domestication” debate.)
  • And should a translation hew closely to the language and structure of the original, or should it recraft the language to appeal to the target audience? (This is the “faithfulness” question.) Hardly anyone agrees—not editors, not scholars, not translators, and not readers.
Javier E

Can truth survive this president? An honest investigation. - The Washington Post

  • in the summer of 2002, long before “fake news” or “post-truth” infected the vernacular, one of President George W. Bush’s top advisers mocked a journalist for being part of the “reality-based community.” Seeking answers in reality was for suckers, the unnamed adviser explained. “We’re an empire now, and when we act, we create our own reality.”
  • This was the hubris and idealism of a post-Cold War, pre-Iraq War superpower: If you exert enough pressure, events will bend to your will.
  • the deceit emanating from the White House today is lazier, more cynical. It is not born of grand strategy or ideology; it is impulsive and self-serving. It is not arrogant, but shameless.
  • Bush wanted to remake the world. President Trump, by contrast, just wants to make it up as he goes along
  • Through all their debates over who is to blame for imperiling truth (whether Trump, postmodernism, social media or Fox News), as well as the consequences (invariably dire) and the solutions (usually vague), a few conclusions materialize, should you choose to believe them.
  • There is a pattern and logic behind the dishonesty of Trump and his surrogates; however, it’s less multidimensional chess than the simple subordination of reality to political and personal ambition
  • Trump’s untruth sells best precisely when feelings and instincts overpower facts, when America becomes a safe space for fabrication.
  • Rand Corp. scholars Jennifer Kavanagh and Michael D. Rich point to the Gilded Age, the Roaring Twenties and the rise of television in the mid-20th century as recent periods of what they call “Truth Decay” — marked by growing disagreement over facts and interpretation of data; a blurring of lines between opinion, fact and personal experience; and diminishing trust in once-respected sources of information.
  • In eras of truth decay, “competing narratives emerge, tribalism within the U.S. electorate increases, and political paralysis and dysfunction grow,”
  • intelligent-design proponents and later climate deniers drew from postmodernism to undermine public perceptions of evolution and climate change. “Even if right-wing politicians and other science deniers were not reading Derrida and Foucault, the germ of the idea made its way to them: science does not have a monopoly on the truth,
  • To interpret our era’s debasement of language, Kakutani reflects perceptively on the World War II-era works of Victor Klemperer, who showed how the Nazis used “words as ‘tiny doses of arsenic’ to poison and subvert the German culture,” and of Stefan Zweig, whose memoir “The World of Yesterday” highlights how ordinary Germans failed to grasp the sudden erosion of their freedoms.
  • Kakutani calls out lefty academics who for decades preached postmodernism and social constructivism, which argued that truth is not universal but a reflection of relative power, structural forces and personal vantage points.
  • postmodernists rejected Enlightenment ideals as “vestiges of old patriarchal and imperialist thinking,” Kakutani writes, paving the way for today’s violence against fact in politics and science.
  • “dumbed-down corollaries” of postmodernist thought have been hijacked by Trump’s defenders, who use them to explain away his lies, inconsistencies and broken promises.
  • Once you add the silos of social media as well as deeply polarized politics and deteriorating civic education, it becomes “nearly impossible to have the types of meaningful policy debates that form the foundation of democracy.”
  • McIntyre quotes at length from mea culpas by postmodernist and social constructivist writers agonizing over what their theories have wrought, shocked that conservatives would use them for nefarious purposes
  • pro-Trump troll and conspiracy theorist Mike Cernovich, who helped popularize the “Pizzagate” lie, has forthrightly cited his unlikely influences. “Look, I read postmodernist theory in college,” Cernovich told the New Yorker in 2016. “If everything is a narrative, then we need alternatives to the dominant narrative. I don’t seem like a guy who reads [Jacques] Lacan, do I?”
  • When truth becomes malleable and contestable regardless of evidence, a mere tussle of manufactured narratives, it becomes less about conveying facts than about picking sides, particularly in politics.
  • In “On Truth,” Cambridge University philosopher Simon Blackburn writes that truth is attainable, if at all, “only at the vanishing end points of enquiry,” adding that, “instead of ‘facts first’ we may do better if we think of ‘enquiry first,’ with the notion of fact modestly waiting to be invited to the feast afterward.”
  • He is concerned, but not overwhelmingly so, about the survival of truth under Trump. “Outside the fevered world of politics, truth has a secure enough foothold,” Blackburn writes. “Perjury is still a serious crime, and we still hope that our pilots and surgeons know their way about.”
  • Kavanagh and Rich offer similar consolation: “Facts and data have become more important in most other fields, with political and civil discourse being striking exceptions. Thus, it is hard to argue that the world is truly ‘post-fact.’ ”
  • McIntyre argues persuasively that our methods of ascertaining truth — not just the facts themselves — are under attack, too, and that this assault is especially dangerous.
  • Ideologues don’t just disregard facts they disagree with, he explains, but willingly embrace any information, however dubious, that fits their agenda. “This is not the abandonment of facts, but a corruption of the process by which facts are credibly gathered and reliably used to shape one’s beliefs about reality. Indeed, the rejection of this undermines the idea that some things are true irrespective of how we feel about them.”
  • “It is hardly a depressing new phenomenon that people’s beliefs are capable of being moved by their hopes, grievances and fears,” Blackburn writes. “In order to move people, objective facts must become personal beliefs.” But it can’t work — or shouldn’t work — in reverse.
  • More than fearing a post-truth world, Blackburn is concerned by a “post-shame environment,” in which politicians easily brush off their open disregard for truth.
  • it is human nature to rationalize away the dissonance. “Why get upset by his lies, when all politicians lie?” Kakutani asks, distilling the mind-set. “Why get upset by his venality, when the law of the jungle rules?”
  • So any opposition is deemed a witch hunt, or fake news, rigged or just so unfair. Trump is not killing the truth. But he is vandalizing it, constantly and indiscriminately, diminishing its prestige and appeal, coaxing us to look away from it.
  • the collateral damage includes the American experiment.
  • “One of the most important ways to fight back against post-truth is to fight it within ourselves,” he writes, whatever our particular politics may be. “It is easy to identify a truth that someone else does not want to see. But how many of us are prepared to do this with our own beliefs? To doubt something that we want to believe, even though a little piece of us whispers that we do not have all the facts?”
Javier E

Here's what the government's dietary guidelines should really say - The Washington Post - 0 views

  • If I were writing the dietary guidelines, I would give them a radical overhaul. I’d go so far as to radically overhaul the way we evaluate diet. Here’s why and how.
  • Lately, as scientists try, and fail, to reproduce results, all of science is taking a hard look at funding biases, statistical shenanigans and groupthink. All that criticism, and then some, applies to nutrition.
  • Prominent in the charge to change the way we do science is John Ioannidis, professor of health research and policy at Stanford University. In 2005, he published “Why Most Published Research Findings Are False” in the journal PLOS Medicine.
  • ...15 more annotations...
  • He came down hard on nutrition in a pull-no-punches 2013 British Medical Journal editorial titled, “Implausible results in human nutrition research,” in which he noted, “Almost every single nutrient imaginable has peer reviewed publications associating it with almost any outcome.”
  • Ioannidis told me that sussing out the connection between diet and health — nutritional epidemiology — is enormously challenging, and “the tools that we’re throwing at the problem are not commensurate with the complexity and difficulty of the problem.” The biggest of those tools is observational research, in which we collect data on what people eat, and track what happens to them.
  • He lists plant-based foods — fruit, veg, whole grains, legumes — but acknowledges that we don’t understand enough to prescribe specific combinations or numbers of servings.
  • funding bias isn’t the only kind. “Fanatical opinions abound in nutrition,” Ioannidis wrote in 2013, and those have bias power too.
  • “Definitive solutions won’t come from another million observational papers or small randomized trials,” reads the subtitle of Ioannidis’s paper. His is a burn-down-the-house ethos.
  • When it comes to actual dietary recommendations, the disagreement is stark. “Ioannidis and others say we have no clue, the science is so bad that we don’t know anything,” Hu told me. “I think that’s completely bogus. We know a lot about the basic elements of a healthy diet.”
  • Give tens of thousands of people that FFQ (food-frequency questionnaire), and you end up with a ginormous repository of possible correlations. You can zero in on a vitamin, macronutrient or food, and go to town. But not only are you starting with flawed data, you’ve got a zillion possible confounding variables — dietary, demographic, socioeconomic. I’ve heard statisticians call it “noise mining,” and Ioannidis is equally skeptical. “With this type of data, you can get any result you want,” he said. “You can align it to your beliefs.” (A simulation sketch of this problem appears after the list.)
  • Big differences in what people eat track with other differences. Heavy plant-eaters are different from, say, heavy meat-eaters in all kinds of ways (income, education, physical activity, BMI). Red meat consumption correlates with increased risk of dying in an accident as much as dying from heart disease. The amount of faith we put in observational studies is a judgment call.
  • I find myself in Ioannidis’s camp. What have we learned, unequivocally enough to build a consensus in the nutrition community, about how diet affects health? Well, trans-fats are bad.
  • Over and over, large population studies get sliced and diced, and it’s all but impossible to figure out what’s signal and what’s noise. Researchers try to do that with controlled trials to test the connections, but those have issues too. They’re expensive, so they’re usually small and short-term. People have trouble sticking to the diet being studied. And scientists are generally looking for what they call “surrogate endpoints,” like increased cholesterol rather than death from heart disease, since it’s impractical to keep a trial going until people die.
  • So, what do we do? Hu and Ioannidis actually have similar suggestions. For starters, they both think we should be looking at dietary patterns rather than single foods or nutrients. They also both want to look across the data sets. Ioannidis emphasizes transparency. He wants to open data to the world and analyze all the data sets in the same way to see if “any signals survive.” Hu is more cautious (partly to safeguard confidentiality).
  • I have a suggestion. Let’s give up on evidence-based eating. It’s given us nothing but trouble and strife. Our tools can’t find any but the most obvious links between food and health, and we’ve found those already.
  • Instead, let’s acknowledge the uncertainty and eat to hedge against what we don’t know
  • We’ve got two excellent hedges: variety and foods with nutrients intact (which describes such diets as the Mediterranean, touted by researchers). If you severely limit your foods (vegan, keto), you might miss out on something. Ditto if you eat foods with little nutritional value (sugar, refined grains). Oh, and pay attention to the two things we can say with certainty: Keep your weight down, and exercise.
  • I used to say I could tell you everything important about diet in 60 seconds. Over the years, my spiel got shorter and shorter as truisms fell by the wayside, and my confidence waned in a field where we know less, rather than more, over time. I’m down to five seconds now: Eat a wide variety of foods with their nutrients intact, keep your weight down and get some exercise.
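The “noise mining” problem flagged above is easy to demonstrate. Below is a minimal sketch in Python (purely illustrative, not from the article): it fabricates FFQ-style data in which 200 “nutrient” variables and one health outcome are all generated independently, then tests every nutrient-outcome correlation. At the conventional p < 0.05 threshold, about ten spurious diet-health “findings” emerge from pure noise.

```python
# Minimal sketch of "noise mining" in FFQ-style observational data.
# Every variable is generated independently, so any "significant"
# diet-health correlation found below is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_people, n_nutrients = 10_000, 200

nutrients = rng.normal(size=(n_people, n_nutrients))  # fabricated intake estimates
outcome = rng.normal(size=n_people)                   # fabricated health outcome

false_hits = 0
for j in range(n_nutrients):
    r, p = stats.pearsonr(nutrients[:, j], outcome)
    if p < 0.05:
        false_hits += 1

# At a 5% false-positive rate per test, expect roughly 10 of 200 to "hit".
print(f"{false_hits} of {n_nutrients} nutrients 'significantly' predict the outcome")
```

This is one reason Ioannidis wants the data opened up and every data set analyzed the same way: a uniform, transparent analysis makes it much harder to keep re-slicing the same repository until something “works.”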
Javier E

The 'Safe, Legal, Rare' Illusion - NYTimes.com - 1 views

  • it’s easy to forget that there is at least some common ground in American politics on sex, pregnancy, marriage and abortion.
  • Even the most pro-choice politicians, for instance, usually emphasize that they want to reduce the need for abortion, and make the practice rare as well as safe and legal
  • both Democrats and Republicans generally agree that the country would be better off with fewer pregnant teenagers, fewer unwanted children, fewer absent fathers, fewer out-of-wedlock births.
  • ...6 more annotations...
  • The liberal vision tends to emphasize access to contraception as the surest path to stable families, wanted children and low abortion rates. The more direct control that women have over when and whether sex makes babies, liberals argue, the less likely they’ll be to get pregnant at the wrong time and with the wrong partner — and the less likely they’ll be to even consider having an abortion
  • The conservative narrative, by contrast, argues that it’s more important to promote chastity, monogamy and fidelity than to worry about whether there’s a prophylactic in every bedroom drawer or bathroom cabinet. To the extent that contraceptive use has a significant role in the conservative vision (and obviously there’s some Catholic-Protestant disagreement), it’s in the context of already stable, already committed relationships. Monogamy, not chemicals or latex, is the main line of defense against unwanted pregnancies.
  • The problem with the conservative story is that it doesn’t map particularly well onto contemporary mores and life patterns. A successful chastity-centric culture seems to depend on a level of social cohesion, religious intensity and shared values that exists only in small pockets of the country. Mormon Utah, for instance, largely lives up to the conservative ideal, with some of America’s lowest rates of teenage pregnancies, out-of-wedlock births and abortions. But many other socially conservative regions (particularly in the South) feature higher rates of unwed and teenage parenthood than in the country as a whole.
  • if liberal social policies really led inexorably to fewer unplanned pregnancies and thus fewer abortions, you would expect “blue” regions of the country to have lower teen pregnancy rates and fewer abortions per capita than demographically similar “red” regions. But that isn’t what the data show. Instead, abortion rates are frequently higher in more liberal states, where access is often largely unrestricted, than in more conservative states, which are more likely to have parental consent laws, waiting periods, and so on.
  • liberal states don’t necessarily do better than conservative ones at preventing teenagers from getting pregnant in the first place. Instead, the lower teenage birth rates in many blue states are mostly just a consequence of (again) their higher abortion rates.
  • These are realities liberals should keep in mind when tempted to rail against conservatives for rejecting the intuitive-seeming promise of “more condoms, fewer abortions.” What’s intuitive isn’t always true, and if social conservatives haven’t figured out how to make all good things go together in post-sexual-revolution America, neither have social liberals.
Javier E

At the Existentialist Café: Freedom, Being, and Apricot Cocktails with Jean-P... - 0 views

  • The phenomenologists’ leading thinker, Edmund Husserl, provided a rallying cry, ‘To the things themselves!’ It meant: don’t waste time on the interpretations that accrue upon things, and especially don’t waste time wondering whether the things are real. Just look at this that’s presenting itself to you, whatever this may be, and describe it as precisely as possible.
  • You might think you have defined me by some label, but you are wrong, for I am always a work in progress. I create myself constantly through action, and this is so fundamental to my human condition that, for Sartre, it is the human condition, from the moment of first consciousness to the moment when death wipes it out. I am my own freedom: no more, no less.
  • Sartre wrote like a novelist — not surprisingly, since he was one. In his novels, short stories and plays as well as in his philosophical treatises, he wrote about the physical sensations of the world and the structures and moods of human life. Above all, he wrote about one big subject: what it meant to be free. Freedom, for him, lay at the heart of all human experience, and this set humans apart from all other kinds of object.
  • ...97 more annotations...
  • Sartre listened to his problem and said simply, ‘You are free, therefore choose — that is to say, invent.’ No signs are vouchsafed in this world, he said. None of the old authorities can relieve you of the burden of freedom. You can weigh up moral or practical considerations as carefully as you like, but ultimately you must take the plunge and do something, and it’s up to you what that something is.
  • Even if the situation is unbearable — perhaps you are facing execution, or sitting in a Gestapo prison, or about to fall off a cliff — you are still free to decide what to make of it in mind and deed. Starting from where you are now, you choose. And in choosing, you also choose who you will be.
  • The war had made people realise that they and their fellow humans were capable of departing entirely from civilised norms; no wonder the idea of a fixed human nature seemed questionable.
  • If this sounds difficult and unnerving, it’s because it is. Sartre does not deny that the need to keep making decisions brings constant anxiety. He heightens this anxiety by pointing out that what you do really matters. You should make your choices as though you were choosing on behalf of the whole of humanity, taking the entire burden of responsibility for how the human race behaves. If you avoid this responsibility by fooling yourself that you are the victim of circumstance or of someone else’s bad advice, you are failing to meet the demands of human life and choosing a fake existence, cut off from your own ‘authenticity’.
  • Along with the terrifying side of this comes a great promise: Sartre’s existentialism implies that it is possible to be authentic and free, as long as you keep up the effort.
  • almost all agreed that it was, as an article in Les nouvelles littéraires phrased it, a ‘sickening mixture of philosophic pretentiousness, equivocal dreams, physiological technicalities, morbid tastes and hesitant eroticism … an introspective embryo that one would take distinct pleasure in crushing’.
  • he offered a philosophy designed for a species that had just scared the hell out of itself, but that finally felt ready to grow up and take responsibility.
  • In this rebellious world, just as with the Parisian bohemians and Dadaists in earlier generations, everything that was dangerous and provocative was good, and everything that was nice or bourgeois was bad.
  • Such interweaving of ideas and life had a long pedigree, although the existentialists gave it a new twist. Stoic and Epicurean thinkers in the classical world had practised philosophy as a means of living well, rather than of seeking knowledge or wisdom for their own sake. By reflecting on life’s vagaries in philosophical ways, they believed they could become more resilient, more able to rise above circumstances, and better equipped to manage grief, fear, anger, disappointment or anxiety.
  • In the tradition they passed on, philosophy is neither a pure intellectual pursuit nor a collection of cheap self-help tricks, but a discipline for flourishing and living a fully human, responsible life.
  • For Kierkegaard, Descartes had things back to front. In his own view, human existence comes first: it is the starting point for everything we do, not the result of a logical deduction. My existence is active: I live it and choose it, and this precedes any statement I can make about myself.
  • Studying our own moral genealogy cannot help us to escape or transcend ourselves. But it can enable us to see our illusions more clearly and lead a more vital, assertive existence.
  • What was needed, he felt, was not high moral or theological ideals, but a deeply critical form of cultural history or ‘genealogy’ that would uncover the reasons why we humans are as we are, and how we came to be that way. For him, all philosophy could even be redefined as a form of psychology, or history.
  • For those oppressed on grounds of race or class, or for those fighting against colonialism, existentialism offered a change of perspective — literally, as Sartre proposed that all situations be judged according to how they appeared in the eyes of those most oppressed, or those whose suffering was greatest.
  • the existentialists inhabited their historical and personal world, as they inhabited their ideas. This notion of ‘inhabited philosophy’ is one I’ve borrowed from the English philosopher and novelist Iris Murdoch, who wrote the first full-length book on Sartre and was an early adopter of existentialism
  • She observed that we need not expect moral philosophers to ‘live by’ their ideas in a simplistic way, as if they were following a set of rules. But we can expect them to show how their ideas are lived in. We should be able to look in through the windows of a philosophy, as it were, and see how people occupy it, how they move about and how they conduct themselves.
  • What is existentialism anyway?
  • An existentialist who is also phenomenological provides no easy rules for dealing with this condition, but instead concentrates on describing lived experience as it presents itself. — By describing experience well, he or she hopes to understand this existence and awaken us to ways of living more authentic lives.
  • Existentialists concern themselves with individual, concrete human existence. — They consider human existence different from the kind of being other things have. Other entities are what they are, but as a human I am whatever I choose to make of myself at every moment. I am free — and therefore I’m responsible for everything I do, a dizzying fact which causes an anxiety inseparable from human existence itself.
  • On the other hand, I am only free within situations, which can include factors in my own biology and psychology as well as physical, historical and social variables of the world into which I have been thrown. — Despite the limitations, I always want more: I am passionately involved in personal projects of all kinds. — Human existence is thus ambiguous: at once boxed in by borders and yet transcendent and exhilarating.
  • The first part of this is straightforward: a phenomenologist’s job is to describe. This is the activity that Husserl kept reminding his students to do. It meant stripping away distractions, habits, clichés of thought, presumptions and received ideas, in order to return our attention to what he called the ‘things themselves’. We must fix our beady gaze on them and capture them exactly as they appear, rather than as we think they are supposed to be.
  • Husserl therefore says that, to phenomenologically describe a cup of coffee, I should set aside both the abstract suppositions and any intrusive emotional associations. Then I can concentrate on the dark, fragrant, rich phenomenon in front of me now. This ‘setting aside’ or ‘bracketing out’ of speculative add-ons Husserl called epoché — a term borrowed from the ancient Sceptics,
  • The point about rigour is crucial; it brings us back to the first half of the command to describe phenomena. A phenomenologist cannot get away with listening to a piece of music and saying, ‘How lovely!’ He or she must ask: is it plaintive? is it dignified? is it colossal and sublime? The point is to keep coming back to the ‘things themselves’ — phenomena stripped of their conceptual baggage — so as to bail out weak or extraneous material and get to the heart of the experience.
  • Husserlian ‘bracketing out’ or epoché allows the phenomenologist to temporarily ignore the question ‘But is it real?’, in order to ask how a person experiences his or her world. Phenomenology gives a formal mode of access to human experience. It lets philosophers talk about life more or less as non-philosophers do, while still being able to tell themselves they are being methodical and rigorous.
  • Besides claiming to transform the way we think about reality, phenomenologists promised to change how we think about ourselves. They believed that we should not try to find out what the human mind is, as if it were some kind of substance. Instead, we should consider what it does, and how it grasps its experiences.
  • For Brentano, this reaching towards objects is what our minds do all the time. Our thoughts are invariably of or about something, he wrote: in love, something is loved, in hatred, something is hated, in judgement, something is affirmed or denied. Even when I imagine an object that isn’t there, my mental structure is still one of ‘about-ness’ or ‘of-ness’.
  • Except in deepest sleep, my mind is always engaged in this aboutness: it has ‘intentionality’. Having taken the germ of this from Brentano, Husserl made it central to his whole philosophy.
  • Husserl saw in the idea of intentionality a way to sidestep two great unsolved puzzles of philosophical history: the question of what objects ‘really’ are, and the question of what the mind ‘really’ is. By doing the epoché and bracketing out all consideration of reality from both topics, one is freed to concentrate on the relationship in the middle. One can apply one’s descriptive energies to the endless dance of intentionality that takes place in our lives: the whirl of our minds as they seize their intended phenomena one after the other and whisk them around the floor,
  • Understood in this way, the mind hardly is anything at all: it is its aboutness. This makes the human mind (and possibly some animal minds) different from any other naturally occurring entity. Nothing else can be as thoroughly about or of things as the mind is:
  • Some Eastern meditation techniques aim to still this scurrying creature, but the extreme difficulty of this shows how unnatural it is to be mentally inert. Left to itself, the mind reaches out in all directions as long as it is awake — and even carries on doing it in the dreaming phase of its sleep.
  • a mind that is experiencing nothing, imagining nothing, or speculating about nothing can hardly be said to be a mind at all.
  • Three simple ideas — description, phenomenon, intentionality — provided enough inspiration to keep roomfuls of Husserlian assistants busy in Freiburg for decades. With all of human existence awaiting their attention, how could they ever run out of things to do?
  • For Sartre, this gives the mind an immense freedom. If we are nothing but what we think about, then no predefined ‘inner nature’ can hold us back. We are protean.
  • way of this interpretation. Real, not real; inside, outside; what difference did it make? Reflecting on this, Husserl began turning his phenomenology into a branch of ‘idealism’ — the philosophical tradition which denied external reality and defined everything as a kind of private hallucination.
  • For Sartre, if we try to shut ourselves up inside our own minds, ‘in a nice warm room with the shutters closed’, we cease to exist. We have no cosy home: being out on the dusty road is the very definition of what we are.
  • One might think that, if Heidegger had anything worth saying, he could have communicated it in ordinary language. The fact is that he does not want to be ordinary, and he may not even want to communicate in the usual sense. He wants to make the familiar obscure, and to vex us. George Steiner thought that Heidegger’s purpose was less to be understood than to be experienced through a ‘felt strangeness’.
  • He takes Dasein in its most ordinary moments, then talks about it in the most innovative way he can. For Heidegger, Dasein’s everyday Being is right here: it is Being-in-the-world, or In-der-Welt-sein. The main feature of Dasein’s everyday Being-in-the-world right here is that it is usually busy doing something.
  • Thus, for Heidegger, all Being-in-the-world is also a ‘Being-with’ or Mitsein. We cohabit with others in a ‘with-world’, or Mitwelt. The old philosophical problem of how we prove the existence of other minds has now vanished. Dasein swims in the with-world long before it wonders about other minds.
  • Sometimes the best-educated people were those least inclined to take the Nazis seriously, dismissing them as too absurd to last. Karl Jaspers was one of those who made this mistake, as he later recalled, and Beauvoir observed similar dismissive attitudes among the French students in Berlin.
  • In any case, most of those who disagreed with Hitler’s ideology soon learned to keep their view to themselves. If a Nazi parade passed on the street, they would either slip out of view or give the obligatory salute like everyone else, telling themselves that the gesture meant nothing if they did not believe in it. As the psychologist Bruno Bettelheim later wrote of this period, few people will risk their life for such a small thing as raising an arm — yet that is how one’s powers of resistance are eroded away, and eventually one’s responsibility and integrity go with them.
  • for Arendt, if you do not respond adequately when the times demand it, you show a lack of imagination and attention that is as dangerous as deliberately committing an abuse. It amounts to disobeying the one command she had absorbed from Heidegger in those Marburg days: Think!
  • ‘Everything takes place under a kind of anaesthesia. Objectively dreadful events produce a thin, puny emotional response. Murders are committed like schoolboy pranks. Humiliation and moral decay are accepted like minor incidents.’ Haffner thought modernity itself was partly to blame: people had become yoked to their habits and to mass media, forgetting to stop and think, or to disrupt their routines long enough to question what was going on.
  • Heidegger’s former lover and student Hannah Arendt would argue, in her 1951 study The Origins of Totalitarianism, that totalitarian movements thrived at least partly because of this fragmentation in modern lives, which made people more vulnerable to being swept away by demagogues. Elsewhere, she coined the phrase ‘the banality of evil’ to describe the most extreme failures of personal moral awareness.
  • His communicative ideal fed into a whole theory of history: he traced all civilisation to an ‘Axial Period’ in the fifth century BC, during which philosophy and culture exploded simultaneously in Europe, the Middle East and Asia, as though a great bubble of minds had erupted from the earth’s surface. ‘True philosophy needs communion to come into existence,’ he wrote, and added, ‘Uncommunicativeness in a philosopher is virtually a criterion of the untruth of his thinking.’
  • The idea of being called to authenticity became a major theme in later existentialism, the call being interpreted as saying something like ‘Be yourself!’, as opposed to being phony. For Heidegger, the call is more fundamental than that. It is a call to take up a self that you didn’t know you had: to wake up to your Being. Moreover, it is a call to action. It requires you to do something: to take a decision of some sort.
  • Being and Time contained at least one big idea that should have been of use in resisting totalitarianism. Dasein, Heidegger wrote there, tends to fall under the sway of something called das Man or ‘the they’ — an impersonal entity that robs us of the freedom to think for ourselves. To live authentically requires resisting or outwitting this influence, but this is not easy because das Man is so nebulous. Man in German does not mean ‘man’ as in English (that’s der Mann), but a neutral abstraction, something like ‘one’ in the English phrase ‘one doesn’t do that’,
  • for Heidegger, das Man is me. It is everywhere and nowhere; it is nothing definite, but each of us is it. As with Being, it is so ubiquitous that it is difficult to see. If I am not careful, however, das Man takes over the important decisions that should be my own. It drains away my responsibility or ‘answerability’. As Arendt might put it, we slip into banality, failing to think.
  • Jaspers focused on what he called Grenzsituationen — border situations, or limit situations. These are the moments when one finds oneself constrained or boxed in by what is happening, but at the same time pushed by these events towards the limits or outer edge of normal experience. For example, you might have to make a life-or-death choice, or something might remind you suddenly of your mortality,
  • Jaspers’ interest in border situations probably had much to do with his own early confrontation with mortality. From childhood, he had suffered from a heart condition so severe that he always expected to die at any moment. He also had emphysema, which forced him to speak slowly, taking long pauses to catch his breath. Both illnesses meant that he had to budget his energies with care in order to get his work done without endangering his life.
  • If I am to resist das Man, I must become answerable to the call of my ‘voice of conscience’. This call does not come from God, as a traditional Christian definition of the voice of conscience might suppose. It comes from a truly existentialist source: my own authentic self. Alas, this voice is one I do not recognise and may not hear, because it is not the voice of my habitual ‘they-self’. It is an alien or uncanny version of my usual voice. I am familiar with my they-self, but not with my unalienated voice — so, in a weird twist, my real voice is the one that sounds strangest to me.
  • Marcel developed a strongly theological branch of existentialism. His faith distanced him from both Sartre and Heidegger, but he shared a sense of how history makes demands on individuals. In his essay ‘On the Ontological Mystery’, written in 1932 and published in the fateful year of 1933, Marcel wrote of the human tendency to become stuck in habits, received ideas, and a narrow-minded attachment to possessions and familiar scenes. Instead, he urged his readers to develop a capacity for remaining ‘available’ to situations as they arise. Similar ideas of disponibilité or availability had been explored by other writers,
  • Marcel made it his central existential imperative. He was aware of how rare and difficult it was. Most people fall into what he calls ‘crispation’: a tensed, encrusted shape in life — ‘as though each one of us secreted a kind of shell which gradually hardened and imprisoned him’.
  • Bettelheim later observed that, under Nazism, only a few people realised at once that life could not continue unaltered: these were the ones who got away quickly. Bettelheim himself was not among them. Caught in Austria when Hitler annexed it, he was sent first to Dachau and then to Buchenwald, but was then released in a mass amnesty to celebrate Hitler’s birthday in 1939 — an extraordinary reprieve, after which he left at once for America.
  • we are used to reading philosophy as offering a universal message for all times and places — or at least as aiming to do so. But Heidegger disliked the notion of universal truths or universal humanity, which he considered a fantasy. For him, Dasein is not defined by shared faculties of reason and understanding, as the Enlightenment philosophers thought. Still less is it defined by any kind of transcendent eternal soul, as in religious tradition. We do not exist on a higher, eternal plane at all. Dasein’s Being is local: it has a historical situation, and is constituted in time and place.
  • For Marcel, learning to stay open to reality in this way is the philosopher’s prime job. Everyone can do it, but the philosopher is the one who is called on above all to stay awake, so as to be the first to sound the alarm if something seems wrong.
  • If we are temporal beings by our very nature, then authentic existence means accepting, first, that we are finite and mortal. We will die: this all-important realisation is what Heidegger calls authentic ‘Being-towards-Death’, and it is fundamental to his philosophy.
  • Second, it also means understanding that we are historical beings, and grasping the demands our particular historical situation is making on us. In what Heidegger calls ‘anticipatory resoluteness’, Dasein discovers ‘that its uttermost possibility lies in giving itself up’. At that moment, through Being-towards-death and resoluteness in facing up to one’s time, one is freed from the they-self and attains one’s true, authentic self.
  • Hannah Arendt, instead, left early on: she had the benefit of a powerful warning. Just after the Nazi takeover, in spring 1933, she had been arrested while researching materials on anti-Semitism for the German Zionist Organisation at Berlin’s Prussian State Library. Her apartment was searched; both she and her mother were locked up briefly, then released. They fled, without stopping to arrange travel documents. They crossed to Czechoslovakia (then still safe) by a method that sounds almost too fabulous to be true: a sympathetic German family on the border had a house with its front door in Germany and its back door in Czechoslovakia. The family would invite people for dinner, then let them leave through the back door at night.
  • As Sartre argued in his 1943 review of The Stranger, basic phenomenological principles show that experience comes to us already charged with significance. A piano sonata is a melancholy evocation of longing. If I watch a soccer match, I see it as a soccer match, not as a meaningless scene in which a number of people run around taking turns to apply their lower limbs to a spherical object. If the latter is what I’m seeing, then I am not watching some more essential, truer version of soccer; I am failing to watch it properly as soccer at all.
  • Much as they liked Camus personally, neither Sartre nor Beauvoir accepted his vision of absurdity. For them, life is not absurd, even when viewed on a cosmic scale, and nothing can be gained by saying it is. Life for them is full of real meaning, although that meaning emerges differently for each of us.
  • For Sartre, we show bad faith whenever we portray ourselves as passive creations of our race, class, job, history, nation, family, heredity, childhood influences, events, or even hidden drives in our subconscious which we claim are out of our control. It is not that such factors are unimportant: class and race, in particular, he acknowledged as powerful forces in people’s lives, and Simone de Beauvoir would soon add gender to that list.
  • Sartre takes his argument to an extreme point by asserting that even war, imprisonment or the prospect of imminent death cannot take away my existential freedom. They form part of my ‘situation’, and this may be an extreme and intolerable situation, but it still provides only a context for whatever I choose to do next. If I am about to die, I can decide how to face that death. Sartre here resurrects the ancient Stoic idea that I may not choose what happens to me, but I can choose what to make of it, spiritually speaking.
  • But the Stoics cultivated indifference in the face of terrible events, whereas Sartre thought we should remain passionately, even furiously engaged with what happens to us and with what we can achieve. We should not expect freedom to be anything less than fiendishly difficult.
  • Freedom does not mean entirely unconstrained movement, and it certainly does not mean acting randomly. We often mistake the very things that enable us to be free — context, meaning, facticity, situation, a general direction in our lives — for things that define us and take away our freedom. It is only with all of these that we can be free in a real sense.
  • Nor did he mean that privileged groups have the right to pontificate to the poor and downtrodden about the need to ‘take responsibility’ for themselves. That would be a grotesque misreading of Sartre’s point, since his sympathy in any encounter always lay with the more oppressed side. But for each of us — for me — to be in good faith means not making excuses for myself.
  • Camus’ novel gives us a deliberately understated vision of heroism and decisive action compared to those of Sartre and Beauvoir. One can only do so much. It can look like defeatism, but it shows a more realistic perception of what it takes to actually accomplish difficult tasks like liberating one’s country.
  • Camus just kept returning to his core principle: no torture, no killing — at least not with state approval. Beauvoir and Sartre believed they were taking a more subtle and more realistic view. If asked why a couple of innocuous philosophers had suddenly become so harsh, they would have said it was because the war had changed them in profound ways. It had shown them that one’s duties to humanity could be more complicated than they seemed. ‘The war really divided my life in two,’ Sartre said later.
  • Poets and artists ‘let things be’, but they also let things come out and show themselves. They help to ease things into ‘unconcealment’ (Unverborgenheit), which is Heidegger’s rendition of the Greek term alētheia, usually translated as ‘truth’. This is a deeper kind of truth than the mere correspondence of a statement to reality, as when we say ‘The cat is on the mat’ and point to a mat with a cat on it. Long before we can do this, both cat and mat must ‘stand forth out of concealedness’. They must un-hide themselves.
  • Heidegger does not use the word ‘consciousness’ here because — as with his earlier work — he is trying to make us think in a radically different way about ourselves. We are not to think of the mind as an empty cavern, or as a container filled with representations of things. We are not even supposed to think of it as firing off arrows of intentional ‘aboutness’, as in the earlier phenomenology of Brentano. Instead, Heidegger draws us into the depths of his Schwarzwald, and asks us to imagine a gap with sunlight filtering in. We remain in the forest, but we provide a relatively open spot where other beings can bask for a moment. If we did not do this, everything would remain in the thickets, hidden even to itself.
  • The astronomer Carl Sagan began his 1980 television series Cosmos by saying that human beings, though made of the same stuff as the stars, are conscious and are therefore ‘a way for the cosmos to know itself’. Merleau-Ponty similarly quoted his favourite painter Cézanne as saying, ‘The landscape thinks itself in me, and I am its consciousness.’ This is something like what Heidegger thinks humanity contributes to the earth. We are not made of spiritual nothingness; we are part of Being, but we also bring something unique with us. It is not much: a little open space, perhaps with a path and a bench like the one the young Heidegger used to sit on to do his homework. But through us, the miracle occurs.
  • Beauty aside, Heidegger’s late writing can also be troubling, with its increasingly mystical notion of what it is to be human. If one speaks of a human being mainly as an open space or a clearing, or a means of ‘letting beings be’ and dwelling poetically on the earth, then one doesn’t seem to be talking about any recognisable person. The old Dasein has become less human than ever. It is now a forestry feature.
  • Even today, Jaspers, the dedicated communicator, is far less widely read than Heidegger, who has influenced architects, social theorists, critics, psychologists, artists, film-makers, environmental activists, and innumerable students and enthusiasts — including the later deconstructionist and post-structuralist schools, which took their starting point from his late thinking. Having spent the late 1940s as an outsider and then been rehabilitated, Heidegger became the overwhelming presence in university philosophy all over the European continent from then on.
  • As Levinas reflected on this experience, it helped to lead him to a philosophy that was essentially ethical, rather than ontological like Heidegger’s. He developed his ideas from the work of Jewish theologian Martin Buber, whose I and Thou in 1923 had distinguished between my relationship with an impersonal ‘it’ or ‘them’, and the direct personal encounter I have with a ‘you’. Levinas took it further: when I encounter you, we normally meet face-to-face, and it is through your face that you, as another person, can make ethical demands on me. This is very different from Heidegger’s Mitsein or Being-with, which suggests a group of people standing alongside one another, shoulder to shoulder as if in solidarity — perhaps as a unified nation or Volk.
  • For Levinas, we literally face each other, one individual at a time, and that relationship becomes one of communication and moral expectation. We do not merge; we respond to one another. Instead of being co-opted into playing some role in my personal drama of authenticity, you look me in the eyes — and you remain Other. You remain you.
  • This relationship is more fundamental than the self, more fundamental than consciousness, more fundamental even than Being — and it brings an unavoidable ethical obligation. Ever since Husserl, phenomenologists and existentialists had been trying to stretch the definition of existence to incorporate our social lives and relationships. Levinas did more: he turned philosophy around entirely so that these relationships were the foundation of our existence, not an extension of it.
  • Other thinkers took radical ethical turns during the war years. The most extreme was Simone Weil, who actually tried to live by the principle of putting other people’s ethical demands first. Having returned to France after her travels through Germany in 1932, she had worked in a factory so as to experience the degrading nature of such work for herself. When France fell in 1940, her family fled to Marseilles (against her protests), and later to the US and to Britain. Even in exile, Weil made extraordinary sacrifices. If there were people in the world who could not sleep in a bed, she would not do so either, so she slept on the floor.
  • Her last work, The Need for Roots, argues, among other things, that none of us has rights, but each one of us has a near-infinite degree of duty and obligation to the other. Whatever the underlying cause of her death — and anorexia nervosa seems to have been involved — no one could deny that she lived out her philosophy with total commitment. Of all the lives touched on in this book, hers is surely the most profound and challenging application of Iris Murdoch’s notion that a philosophy can be ‘inhabited’.
  • The mystery tradition had roots in Kierkegaard’s ‘leap of faith’. It owed much to the other great nineteenth-century mystic of the impossible, Dostoevsky, and to older theological notions. But it also grew from the protracted trauma that was the first half of the twentieth century. Since 1914, and especially since 1939, people in Europe and elsewhere had come to the realisation that we cannot fully know or trust ourselves; that we have no excuses or explanations for what we do — and yet that we must ground our existence and relationships on something firm, because otherwise we cannot survive.
  • One striking link between these radical ethical thinkers, all on the fringes of our main story, is that they had religious faith. They also granted a special role to the notion of ‘mystery’ — that which cannot be known, calculated or understood, especially when it concerns our relationships with each other. Heidegger was different from them, since he rejected the religion he grew up with and had no real interest in ethics — probably as a consequence of his having no real interest in the human.
  • Meanwhile, the Christian existentialist Gabriel Marcel was also still arguing, as he had since the 1930s, that ethics trumps everything else in philosophy and that our duty to each other is so great as to play the role of a transcendent ‘mystery’. He too had been led to this position partly by a wartime experience: during the First World War he had worked for the Red Cross’ Information Service, with the unenviable job of answering relatives’ inquiries about missing soldiers. Whenever news came, he passed it on, and usually it was not good. As Marcel later said, this task permanently inoculated him against warmongering rhetoric of any kind, and it made him aware of the power of what is unknown in our lives.
  • As the play’s much-quoted and frequently misunderstood final line has it: ‘Hell is other people.’ Sartre later explained that he did not mean to say that other people were hellish in general. He meant that after death we become frozen in their view, unable any longer to fend off their interpretation. In life, we can still do something to manage the impression we make; in death, this freedom goes and we are left entombed in other people’s memories and perceptions.
  • We have to do two near-impossible things at once: understand ourselves as limited by circumstances, and yet continue to pursue our projects as though we are truly in control. In Beauvoir’s view, existentialism is the philosophy that best enables us to do this, because it concerns itself so deeply with both freedom and contingency. It acknowledges the radical and terrifying scope of our freedom in life, but also the concrete influences that other philosophies tend to ignore: history, the body, social relationships and the environment.
  • The aspects of our existence that limit us, Merleau-Ponty says, are the very same ones that bind us to the world and give us scope for action and perception. They make us what we are. Sartre acknowledged the need for this trade-off, but he found it more painful to accept. Everything in him longed to be free of bonds, of impediments and limitations
  • Of course we have to learn this skill of interpreting and anticipating the world, and this happens in early childhood, which is why Merleau-Ponty thought child psychology was essential to philosophy. This is an extraordinary insight. Apart from Rousseau, very few philosophers before him had taken childhood seriously; most wrote as though all human experience were that of a fully conscious, rational, verbal adult who has been dropped into this world from the sky — perhaps by a stork.
  • For Merleau-Ponty, we cannot understand our experience if we don’t think of ourselves in part as overgrown babies. We fall for optical illusions because we once learned to see the world in terms of shapes, objects and things relevant to our own interests. Our first perceptions came to us in tandem with our first active experiments in observing the world and reaching out to explore it, and are still linked with those experiences.
  • Another factor in all of this, for Merleau-Ponty, is our social existence: we cannot thrive without others, or not for long, and we need this especially in early life. This makes solipsistic speculation about the reality of others ridiculous; we could never engage in such speculation if we hadn’t already been formed by them.
  • As Descartes could have said (but didn’t), ‘I think, therefore other people exist.’ We grow up with people playing with us, pointing things out, talking, listening, and getting us used to reading emotions and movements; this is how we become capable, reflective, smoothly integrated beings.
  • In general, Merleau-Ponty thinks human experience only makes sense if we abandon philosophy’s time-honoured habit of starting with a solitary, capsule-like, immobile adult self, isolated from its body and world, which must then be connected up again — adding each element around it as though adding clothing to a doll. Instead, for him, we slide from the womb to the birth canal to an equally close and total immersion in the world. That immersion continues as long as we live, although we may also cultivate the art of partially withdrawing from time to time when we want to think or daydream.
  • When he looks for his own metaphor to describe how he sees consciousness, he comes up with a beautiful one: consciousness, he suggests, is like a ‘fold’ in the world, as though someone had crumpled a piece of cloth to make a little nest or hollow. It stays for a while, before eventually being unfolded and smoothed away. There is something seductive, even erotic, in this idea of my conscious self as an improvised pouch in the cloth of the world. I still have my privacy — my withdrawing room. But I am part of the world’s fabric, and I remain formed out of it for as long as I am here.
  • By the time of these works, Merleau-Ponty is taking his desire to describe experience to the outer limits of what language can convey. Just as with the late Husserl or Heidegger, or Sartre in his Flaubert book, we see a philosopher venturing so far from shore that we can barely follow. Emmanuel Levinas would head out to the fringes too, eventually becoming incomprehensible to all but his most patient initiates.
  • Sartre once remarked — speaking of a disagreement they had about Husserl in 1941 — that ‘we discovered, astounded, that our conflicts had, at times, stemmed from our childhood, or went back to the elementary differences of our two organisms’. Merleau-Ponty also said in an interview that Sartre’s work seemed strange to him, not because of philosophical differences, but because of a certain ‘register of feeling’, especially in Nausea, that he could not share. Their difference was one of temperament and of the whole way the world presented itself to them.
  • The two also differed in their purpose. When Sartre writes about the body or other aspects of experience, he generally does it in order to make a different point. He expertly evokes the grace of his café waiter, gliding between the tables, bending at an angle just so, steering the drink-laden tray through the air on the tips of his fingers — but he does it all in order to illustrate his ideas about bad faith. When Merleau-Ponty writes about skilled and graceful movement, the movement itself is his point. This is the thing he wants to understand.
  • We can never move definitively from ignorance to certainty, for the thread of the inquiry will constantly lead us back to ignorance again. This is the most attractive description of philosophy I’ve ever read, and the best argument for why it is worth doing, even (or especially) when it takes us no distance at all from our starting point.
  • By prioritising perception, the body, social life and childhood development, Merleau-Ponty gathered up philosophy’s far-flung outsider subjects and brought them in to occupy the centre of his thought.
  • In his inaugural lecture at the Collège de France on 15 January 1953, published as In Praise of Philosophy, he said that philosophers should concern themselves above all with whatever is ambiguous in our experience. At the same time, they should think clearly about these ambiguities, using reason and science. Thus, he said, ‘The philosopher is marked by the distinguishing trait that he possesses inseparably the taste for evidence and the feeling for ambiguity.’ A constant movement is required between these two
  • As Sartre wrote in response to Hiroshima, humanity had now gained the power to wipe itself out, and must decide every single day that it wanted to live. Camus also wrote that humanity faced the task of choosing between collective suicide and a more intelligent use of its technology — ‘between hell and reason’. After 1945, there seemed little reason to trust in humanity’s ability to choose well.
  • Merleau-Ponty observed in a lecture of 1951 that, more than any previous century, the twentieth century had reminded people how ‘contingent’ their lives were — how at the mercy of historical events and other changes that they could not control. This feeling went on long after the war ended. After the A-bombs were dropped on Hiroshima and Nagasaki, many feared that a Third World War would not be long in coming, this time between the Soviet Union and the United States.
caelengrubb

Insider Trading - Econlib - 0 views

  • “Insider trading” refers to transactions in a company’s securities, such as stocks or options, by corporate insiders or their associates based on information originating within the firm that would, once publicly disclosed, affect the prices of such securities.
  • Corporate insiders are individuals whose employment with the firm (as executives, directors, or sometimes rank-and-file employees) or whose privileged access to the firm’s internal affairs (as large shareholders, consultants, accountants, lawyers, etc.) gives them valuable information.
  • Famous examples of insider trading include transacting on the advance knowledge of a company’s discovery of a rich mineral ore (Securities and Exchange Commission v. Texas Gulf Sulphur Co.), on a forthcoming cut in dividends by the board of directors (Cady, Roberts & Co.), and on an unanticipated increase in corporate expenses (Diamond v. Oreamuno).
  • ...18 more annotations...
  • Such trading on information originating outside the company is generally not covered by insider trading regulation.
  • Insider trading is quite different from market manipulation, disclosure of false or misleading information to the market, or direct expropriation of the corporation’s wealth by insiders.
  • Regulation of insider trading began in the United States at the turn of the twentieth century, when judges in several states became willing to rescind corporate insiders’ transactions with uninformed shareholders.
  • One of the earliest (and unsuccessful) federal attempts to regulate insider trading occurred after the 1912–1913 congressional hearings before the Pujo Committee, which concluded that “the scandalous practices of officers and directors in speculating upon inside and advance information as to the action of their corporations may be curtailed if not stopped.”
  • The Securities Acts of 1933–1934, passed by the U.S. Congress in the aftermath of the stock market crash, though aimed primarily at prohibiting fraud and market manipulation, also targeted insider trading.
  • As of 2004, at least ninety-three countries, the vast majority of nations that possess organized securities markets, had laws regulating insider trading
  • Several factors explain the rapid emergence of such regulation, particularly during the last twenty years: namely, the growth of the securities industry worldwide, pressures to make national securities markets look more attractive in the eyes of outside investors, and the pressure the SEC exerted on foreign lawmakers and regulators to increase the effectiveness of domestic enforcement by identifying and punishing offenders and their associates operating outside the United States.
  • Many researchers argue that trading on inside information is a zero-sum game, benefiting insiders at the expense of outsiders. But most outsiders who bought from or sold to insiders would have traded anyway, and possibly at a worse price (Manne 1970). So, for example, if the insider sells stock because he expects the price to fall, the very act of selling may bring the price down to the buyer.
  • A controversial case is that of abstaining from trading on the basis of inside information (Fried 2003).
  • There is little disagreement that insider trading makes securities markets more efficient by moving the current market price closer to the future postdisclosure price. In other words, insiders’ transactions, even if they are anonymous, signal future price trends to others and make the current stock price reflect relevant information sooner. (A toy simulation of this price-discovery effect appears after this list.)
  • Accurately priced stocks give valuable signals to investors and ensure more efficient allocation of capital.
  • The controversial question is whether insider trading is more or less effective than public disclosure.
  • Insider trading’s advantage is that it introduces individual profit motives, does not directly reveal sensitive intercorporate information, and mitigates the management’s aversion to disclosing negative information
  • Probably the most controversial issue in the economic analysis of insider trading is whether it is an efficient way to pay managers for their entrepreneurial services to the corporation. Some researchers believe that insider trading gives managers a monetary incentive to innovate, search for, and produce valuable information, as well as to take risks that increase the firm’s value (Carlton and Fischel 1983; Manne 1966).
  • Another economic argument for insider trading is that it provides efficient compensation to holders of large blocks of stock
  • A common contention is that the presence of insider trading decreases public confidence in, and deters many potential investors from, equity markets, making them less liquid (Loss 1970).
  • Empirical research generally supports skepticism that regulation of insider trading has been effective in either the United States or internationally, as evidenced by the persistent trading profits of insiders, behavior of stock prices around corporate announcements, and relatively infrequent prosecution rates (Bhattacharya and Daouk 2002; Bris 2005).
  • Despite numerous and extensive debates, economists and legal scholars do not agree on a desirable government policy toward insider trading. On the one hand, absolute information parity is clearly infeasible, and information-based trading generally increases the pricing efficiency of financial markets. Information, after all, is a scarce economic good that is costly to produce or acquire, and its subsequent use and dissemination are difficult to control. On the other hand, insider trading, as opposed to other forms of informed trading, may produce unintended adverse consequences for the functioning of the corporate enterprise, the market-wide system of publicly mandated disclosure, or the market for information.
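As a rough illustration of the price-discovery argument above (the claim that insiders’ transactions move the current price toward the future postdisclosure price), here is a toy simulation in Python. It is a sketch under invented assumptions, using a linear price-impact rule and made-up parameters, not a model drawn from the literature cited here.

```python
# Toy price-discovery sketch: an insider who knows the postdisclosure value
# keeps selling while the stock is overpriced, and a linear price-impact
# rule drags the market price toward that value before any announcement.
import numpy as np

rng = np.random.default_rng(0)
true_value = 80.0   # value the market will learn at disclosure
price = 100.0       # current (overpriced) market price
impact = 0.5        # dollars of price move per unit of net order flow

for day in range(30):
    insider_order = -1.0 if price > true_value else 1.0  # sell while overpriced
    noise_order = rng.normal(scale=0.3)                  # uninformed order flow
    price += impact * (insider_order + noise_order)

# Outsiders who buy late in this window pay a price closer to the
# postdisclosure value than the original 100, which is Manne's point that
# many counterparties "would have traded anyway, and possibly at a worse price."
print(f"price after 30 days of insider selling: {price:.2f} (true value {true_value})")
```

The same mechanics also frame the zero-sum objection: each uninformed buyer in the loop above transacts at whatever price prevails, so the question is whether the insider’s selling left that price better or worse for them than it would otherwise have been.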
katedriscoll

Confirmation bias in the utilization of others' opinion strength | Nature Neuroscience - 0 views

  • Humans tend to discount information that undermines past choices and judgments. This confirmation bias has significant impact on domains ranging from politics to science and education. Little is known about the mechanisms underlying this fundamental characteristic of belief formation. Here we report a mechanism underlying the confirmation bias. Specifically, we provide evidence for a failure to use the strength of others’ disconfirming opinions to alter confidence in judgments, but adequate use when opinions are confirmatory. This bias is related to reduced neural sensitivity to the strength of others’ opinions in the posterior medial prefrontal cortex when opinions are disconfirming. Our results demonstrate that existing judgments alter the neural representation of information strength, leaving the individual less likely to alter opinions in the face of disagreement. (A toy model of this asymmetry appears below.)
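The asymmetry the abstract describes can be caricatured in a few lines of Python. This is a hypothetical update rule with invented numbers, for illustration only; the paper reports behavioral and neural data, not this algorithm.

```python
# Toy sketch of confirmation-biased updating: confidence tracks the strength
# of a partner's opinion when it agrees, but applies a small flat discount,
# ignoring strength, when it disagrees. Rule and numbers are invented.

def update_confidence(confidence: float, agrees: bool, partner_strength: float,
                      gain_confirm: float = 0.5, gain_disconfirm: float = 0.1) -> float:
    """Return updated confidence in [0, 1]; partner_strength is in [0, 1]."""
    if agrees:
        # Confirming opinions: the update scales with the partner's strength.
        return min(1.0, confidence + gain_confirm * partner_strength)
    # Disconfirming opinions: weak, flat revision regardless of strength.
    return max(0.0, confidence - gain_disconfirm)

c = 0.7
print(f"{update_confidence(c, True, 0.9):.2f}")   # 1.00: strong agreement, big boost
print(f"{update_confidence(c, True, 0.2):.2f}")   # 0.80: weak agreement, small boost
print(f"{update_confidence(c, False, 0.9):.2f}")  # 0.60: strong disagreement...
print(f"{update_confidence(c, False, 0.2):.2f}")  # 0.60: ...same as weak, strength ignored
```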
Javier E

My Mom Believes In QAnon. I've Been Trying To Get Her Out. - 0 views

  • An early adopter of the QAnon mass delusion, on board since 2018, she held firm to the claim that a Satan-worshipping cabal of child sex traffickers controlled the world and the only person standing in their way was Trump. She saw him not merely as a politician but a savior, and she expressed her devotion in stark terms.
  • “The prophets have said Trump is anointed,” she texted me once. “God is using him to finally end the evil doings of the cabal which has hurt humanity all these centuries… We are in a war between good & evil.”
  • By 2020, I’d pretty much given up on swaying my mom away from her preferred presidential candidate. We’d spent many hours arguing over basic facts I considered indisputable. Any information I cited to prove Trump’s cruelty, she cut down with a corresponding counterattack. My links to credible news sources disintegrated against a wall of outlets like One America News Network, Breitbart, and Before It’s News. Any cracks I could find in her positions were instantly undermined by the inconvenient fact that I was, in her words, a member of “the liberal media,” a brainwashed acolyte of the sprawling conspiracy trying to take down her heroic leader.
  • ...20 more annotations...
  • The irony gnawed at me: My entire vocation as an investigative reporter was predicated on being able to reveal truths, and yet I could not even rustle up the evidence to convince my own mother that our 45th president was not, in fact, the hero she believed him to be. Or, for that matter, that John F. Kennedy Jr. was dead. Or that Tom Hanks had not been executed for drinking the blood of children.
  • The theories spun from Q’s messages seemed much easier to disprove. Oprah Winfrey couldn’t have been detained during a wave of deep state arrests because we could still see her conducting live interviews on television. Trump’s 4th of July speech at Mount Rushmore came to an end without John F. Kennedy Jr. revealing he was alive and stepping in as the president’s new running mate. The widespread blackouts that her Patriot friend’s “source from the Pentagon” had warned about failed to materialize. And I could testify firsthand that the CIA had no control over my newsroom’s editorial decisions.
  • “I believe the Holy Spirit led me to the QAnons to discover the truth which is being suppressed,” she texted me. “Otherwise, how would I be able to know the truth if the lamestream media suppresses the truth?”
  • Through the years, I’d battled against conspiracy theories my mom threw at me that were far more formidable than QAnon. I’d been stumped when she asked me to prove that Beyoncé wasn’t an Illuminati member, dumbfounded when research studies I sent her weren’t enough to reach an agreement on vaccine efficacy, and too worn down to say anything more than “that’s not true” when confronted with false allegations of murders committed by prominent politicians.
  • Eventually, I accepted the impasse. It didn’t seem healthy that every conversation we had would devolve into a circuitous debate about which one of us was on the side of the bad guys. So I tried to pick my battles.
  • She regretted not taking politics more seriously when I was younger. I’d grown up blinkered by American privilege, trained to ignore the dirty machinations securing my comforts. My mom had shed that luxury long ago.
  • With no overlap between our filters of reality, I was at a loss for any facts that would actually stick.
  • Meanwhile, she wondered where she’d gone wrong with me
  • But what I had dismissed as damaging inconsistencies turned out to be the core strength of the belief system: It was alive, flexible, sprouting more questions than answers, more clues to study, an investigation playing out in real time, with the fate of the world at stake.
  • The year my mom began falling down QAnon rabbit holes, I turned the age she was when she first arrived in the States. By then, I was no longer sure that America was worth the cost of her migration. When the real estate market collapsed under the weight of Wall Street speculation, she had to sell our house at a steep loss to avoid foreclosure and her budding career as a realtor evaporated. Her near–minimum wage jobs weren’t enough to cover her bills, so her credit card debts rose. She delayed retirement plans because she saw no path to breaking even anytime soon, though she was hopeful that a turnaround was on the horizon. Through the setbacks and detours, she drifted into the arms of the people and beliefs I held most responsible for her troubles.
  • With a fervor I knew was futile, I’d tell my mom she was missing the real conspiracy: The powerful people shaping policy to benefit their own interests, to maintain wealth and white predominance, through tax cuts and voter suppression, were commandeering her support solely by catering to her stance on the one issue she cared most about.
  • The voice my mom trusted most now was Trump’s. Our disagreements were no longer ideological to her but part of a celestial conflict.
  • “I love you but you have to be on the side of good,” she texted me. “Im sad cuz u have become part of the deep state. May God have mercy on you...I pray you will see the truth of the evil agenda and be on the side of Trump.”
  • She likened her fellow Patriots to the early Christians who spread the word of Jesus at the risk of persecution. She often sent me a meme with a caption about “ordinary people who spent countless hours researching, debating, meditating and praying” for the truth to be revealed to them. “Although they were mocked, dismissed and cast off, they knew their souls had agreed long ago to do this work.”
  • Last summer, as my mom marched in a pink MAGA hat amid maskless crowds, and armed extremists stalked racial justice protests, and a disputed election loomed like a time bomb, I entertained my darkest thoughts about the fate of our country. Was there any hope in a democracy without a shared set of basic facts? Had my elders fled one authoritarian regime only for their children to face another? Amid the gloom, I found only a single morsel of solace: My mom was as hopeful as she’d ever been.
  • I wish I could offer some evidence showing that the gulf between us might be narrowing, that my love, persistence, and collection of facts might be enough to draw her back into a reality we share, and that when our wager about the storm comes due in a few months, she’ll realize that the voices she trusts have been lying to her. But I don’t think that will happen
  • What can I do but try to limit the damage? Send my mom movie recommendations to occupy the free time she instead spends on conspiracy research. Shift our conversations to the common ground of cooking recipes and family gossip. Raise objections when her beliefs nudge her toward dangerous decisions.
  • I now understand our debates as marks of the very bond I thought was disintegrating. No matter how far she believes I’ve fallen into the deep state, how hard I fight for the forces of evil, how imminent the grand plan’s rapture, my mom will be there on the other side of the line putting in a good word for me with the angels and saints, trying to save me from damnation. And those are the two realities we live in. ●
katedriscoll

How Language Influences Emotion - TOK Topics - 0 views

  • “There’s plenty of disagreement over how to define emotions, but at least one thing is certain: They are intensely personal things. A flood of anger, a flash of annoyance—that feeling is yours, is a result of your own unique set of circumstances, is shaping the way you see the world at a given moment.”
Javier E

How Does Science Really Work? | The New Yorker - 1 views

  • I wanted to be a scientist. So why did I find the actual work of science so boring? In college science courses, I had occasional bursts of mind-expanding insight. For the most part, though, I was tortured by drudgery.
  • I’d found that science was two-faced: simultaneously thrilling and tedious, all-encompassing and narrow. And yet this was clearly an asset, not a flaw. Something about that combination had changed the world completely.
  • “Science is an alien thought form,” he writes; that’s why so many civilizations rose and fell before it was invented. In his view, we downplay its weirdness, perhaps because its success is so fundamental to our continued existence.
  • ...50 more annotations...
  • In school, one learns about “the scientific method”—usually a straightforward set of steps, along the lines of “ask a question, propose a hypothesis, perform an experiment, analyze the results.”
  • That method works in the classroom, where students are basically told what questions to pursue. But real scientists must come up with their own questions, finding new routes through a much vaster landscape.
  • Since science began, there has been disagreement about how those routes are charted. Two twentieth-century philosophers of science, Karl Popper and Thomas Kuhn, are widely held to have offered the best accounts of this process.
  • For Popper, Strevens writes, “scientific inquiry is essentially a process of disproof, and scientists are the disprovers, the debunkers, the destroyers.” Kuhn’s scientists, by contrast, are faddish true believers who promulgate received wisdom until they are forced to attempt a “paradigm shift”—a painful rethinking of their basic assumptions.
  • Working scientists tend to prefer Popper to Kuhn. But Strevens thinks that both theorists failed to capture what makes science historically distinctive and singularly effective.
  • Sometimes they seek to falsify theories, sometimes to prove them; sometimes they’re informed by preëxisting or contextual views, and at other times they try to rule narrowly, based on the evidence at hand.
  • Why do scientists agree to this scheme? Why do some of the world’s most intelligent people sign on for a lifetime of pipetting?
  • Strevens thinks that they do it because they have no choice. They are constrained by a central regulation that governs science, which he calls the “iron rule of explanation.” The rule is simple: it tells scientists that, “if they are to participate in the scientific enterprise, they must uncover or generate new evidence to argue with”; from there, they must “conduct all disputes with reference to empirical evidence alone.”
  • It is “the key to science’s success,” because it “channels hope, anger, envy, ambition, resentment—all the fires fuming in the human heart—to one end: the production of empirical evidence.”
  • Strevens arrives at the idea of the iron rule in a Popperian way: by disproving the other theories about how scientific knowledge is created.
  • The problem isn’t that Popper and Kuhn are completely wrong. It’s that scientists, as a group, don’t pursue any single intellectual strategy consistently.
  • Exploring a number of case studies—including the controversies over continental drift, spontaneous generation, and the theory of relativity—Strevens shows scientists exerting themselves intellectually in a variety of ways, as smart, ambitious people usually do.
  • “Science is boring,” Strevens writes. “Readers of popular science see the 1 percent: the intriguing phenomena, the provocative theories, the dramatic experimental refutations or verifications.” But, he says, behind these achievements . . . are long hours, days, months of tedious laboratory labor. The single greatest obstacle to successful science is the difficulty of persuading brilliant minds to give up the intellectual pleasures of continual speculation and debate, theorizing and arguing, and to turn instead to a life consisting almost entirely of the production of experimental data.
  • Ultimately, in fact, it was good that the geologists had a “splendid variety” of somewhat arbitrary opinions: progress in science requires partisans, because only they have “the motivation to perform years or even decades of necessary experimental work.” It’s just that these partisans must channel their energies into empirical observation. The iron rule, Strevens writes, “has a valuable by-product, and that by-product is data.”
  • Science is often described as “self-correcting”: it’s said that bad data and wrong conclusions are rooted out by other scientists, who present contrary findings. But Strevens thinks that the iron rule is often more important than overt correction.
  • Eddington was never really refuted. Other astronomers, driven by the iron rule, were already planning their own studies, and “the great preponderance of the resulting measurements fit Einsteinian physics better than Newtonian physics.” It’s partly by generating data on such a vast scale, Strevens argues, that the iron rule can power science’s knowledge machine: “Opinions converge not because bad data is corrected but because it is swamped.”
  • Why did the iron rule emerge when it did? Strevens takes us back to the Thirty Years’ War, which concluded with the Peace of Westphalia, in 1648. The war weakened religious loyalties and strengthened national ones.
  • Two regimes arose: in the spiritual realm, the will of God held sway, while in the civic one the decrees of the state were paramount. As Isaac Newton wrote, “The laws of God & the laws of man are to be kept distinct.” These new, “nonoverlapping spheres of obligation,” Strevens argues, were what made it possible to imagine the iron rule. The rule simply proposed the creation of a third sphere: in addition to God and state, there would now be science.
  • Strevens imagines how, to someone in Descartes’s time, the iron rule would have seemed “unreasonably closed-minded.” Since ancient Greece, it had been obvious that the best thinking was cross-disciplinary, capable of knitting together “poetry, music, drama, philosophy, democracy, mathematics,” and other elevating human disciplines.
  • We’re still accustomed to the idea that a truly flourishing intellect is a well-rounded one. And, by this standard, Strevens says, the iron rule looks like “an irrational way to inquire into the underlying structure of things”; it seems to demand the upsetting “suppression of human nature.”
  • Descartes, in short, would have had good reasons for resisting a law that narrowed the grounds of disputation, or that encouraged what Strevens describes as “doing rather than thinking.”
  • In fact, the iron rule offered scientists a more supple vision of progress. Before its arrival, intellectual life was conducted in grand gestures.
  • Descartes’s book was meant to be a complete overhaul of what had preceded it; its fate, had science not arisen, would have been replacement by some equally expansive system. The iron rule broke that pattern.
  • Strevens sees its earliest expression in Francis Bacon’s “The New Organon,” a foundational text of the Scientific Revolution, published in 1620. Bacon argued that thinkers must set aside their “idols,” relying, instead, only on evidence they could verify. This dictum gave scientists a new way of responding to one another’s work: gathering data.
  • it also changed what counted as progress. In the past, a theory about the world was deemed valid when it was complete—when God, light, muscles, plants, and the planets cohered. The iron rule allowed scientists to step away from the quest for completeness.
  • The consequences of this shift would become apparent only with time
  • In 1713, Isaac Newton appended a postscript to the second edition of his “Principia,” the treatise in which he first laid out the three laws of motion and the theory of universal gravitation. “I have not as yet been able to deduce from phenomena the reason for these properties of gravity, and I do not feign hypotheses,” he wrote. “It is enough that gravity really exists and acts according to the laws that we have set forth.”
  • What mattered, to Newton and his contemporaries, was his theory’s empirical, predictive power—that it was “sufficient to explain all the motions of the heavenly bodies and of our sea.”
  • Descartes would have found this attitude ridiculous. He had been playing a deep game—trying to explain, at a fundamental level, how the universe fit together. Newton, by those lights, had failed to explain anything: he himself admitted that he had no sense of how gravity did its work
  • by authorizing what Strevens calls “shallow explanation,” the iron rule offered an empirical bridge across a conceptual chasm. Work could continue, and understanding could be acquired on the other side. In this way, shallowness was actually more powerful than depth.
  • Quantum theory—which tells us that subatomic particles can be “entangled” across vast distances, and in multiple places at the same time—makes intuitive sense to pretty much nobody.
  • Without the iron rule, Strevens writes, physicists confronted with such a theory would have found themselves at an impasse. They would have argued endlessly about quantum metaphysics.
  • Following the iron rule, they can make progress empirically even though they are uncertain conceptually. Individual researchers still passionately disagree about what quantum theory means. But that hasn’t stopped them from using it for practical purposes—computer chips, MRI machines, G.P.S. networks, and other technologies rely on quantum physics.
  • One group of theorists, the rationalists, has argued that science is a new way of thinking, and that the scientist is a new kind of thinker—dispassionate to an uncommon degree.
  • As evidence against this view, another group, the subjectivists, points out that scientists are as hopelessly biased as the rest of us. To this group, the aloofness of science is a smoke screen behind which the inevitable emotions and ideologies hide.
  • At least in science, Strevens tells us, “the appearance of objectivity” has turned out to be “as important as the real thing.”
  • The subjectivists are right, he admits, inasmuch as scientists are regular people with a “need to win” and a “determination to come out on top.”
  • But they are wrong to think that subjectivity compromises the scientific enterprise. On the contrary, once subjectivity is channelled by the iron rule, it becomes a vital component of the knowledge machine. It’s this redirected subjectivity—to come out on top, you must follow the iron rule!—that solves science’s “problem of motivation,” giving scientists no choice but “to pursue a single experiment relentlessly, to the last measurable digit, when that digit might be quite meaningless.”
  • If it really was a speech code that instigated “the extraordinary attention to process and detail that makes science the supreme discriminator and destroyer of false ideas,” then the peculiar rigidity of scientific writing—Strevens describes it as “sterilized”—isn’t a symptom of the scientific mind-set but its cause.
  • The iron rule—“a kind of speech code”—simply created a new way of communicating, and it’s this new way of communicating that created science.
  • Other theorists have explained science by charting a sweeping revolution in the human mind; inevitably, they’ve become mired in a long-running debate about how objective scientists really are
  • In “The Knowledge Machine: How Irrationality Created Modern Science” (Liveright), Michael Strevens, a philosopher at New York University, aims to identify that special something. Strevens is a philosopher of science
  • Compared with the theories proposed by Popper and Kuhn, Strevens’s rule can feel obvious and underpowered. That’s because it isn’t intellectual but procedural. “The iron rule is focused not on what scientists think,” he writes, “but on what arguments they can make in their official communications.”
  • Like everybody else, scientists view questions through the lenses of taste, personality, affiliation, and experience
  • geologists had a professional obligation to take sides. Europeans, Strevens reports, tended to back Wegener, who was German, while scholars in the United States often preferred Simpson, who was American. Outsiders to the field were often more receptive to the concept of continental drift than established scientists, who considered its incompleteness a fatal flaw.
  • Strevens’s point isn’t that these scientists were doing anything wrong. If they had biases and perspectives, he writes, “that’s how human thinking works.”
  • Eddington’s observations were expected to either confirm or falsify Einstein’s theory of general relativity, which predicted that the sun’s gravity would bend the path of light, subtly shifting the stellar pattern. For reasons having to do with weather and equipment, the evidence collected by Eddington—and by his colleague Frank Dyson, who had taken similar photographs in Sobral, Brazil—was inconclusive; some of their images were blurry, and so failed to resolve the matter definitively.
  • it was only natural for intelligent people who were free of the rule’s strictures to attempt a kind of holistic, systematic inquiry that was, in many ways, more demanding. It never occurred to them to ask if they might illuminate more collectively by thinking about less individually.
  • In the single-sphered, pre-scientific world, thinkers tended to inquire into everything at once. Often, they arrived at conclusions about nature that were fascinating, visionary, and wrong.
  • How Does Science Really Work? Science is objective. Scientists are not. Can an “iron rule” explain how they’ve changed the world anyway? By Joshua Rothman, September 28, 2020