
Home/ TOK Friends/ Group items tagged conceptual


Keiko E

Book Review: The Moral Lives of Animals - WSJ.com - 0 views

  • There is often less to such accounts than meets the eye. What appear on the surface to be instances of insight, reflection, empathy or higher purpose frequently turn out to be fairly simple learned behavior, of a kind that every sentient species from humans to earthworms exhibits all the time.
  • The deeper problem, as Mr. Peterson more frankly acknowledges, is that it is the height of anthropomorphic absurdity to project human values and behaviors onto other species—and then to judge them by their similarity to us.
  • Recognizing the difficulty of boosting animals, his approach is instead to deflate humans: in particular, to suggest that there is much less to even so vaunted a human trait as morality than we like to believe. Rather than a sophisticated system of language-based laws, philosophical arguments and abstract values that sets mankind apart, morality is, in his view, a set of largely primitive psychological instincts.
  • And Mr. Peterson simply ignores several decades worth of recent studies in cognitive science by researchers such as David Povinelli, Bruce Hood, Michael Tomasello and Elisabetta Visalberghi, which have elucidated very real differences between human and nonhuman minds in the realm of conceptual reasoning, particularly with respect to what has been termed "theory of mind." This is the uniquely human ability to have thoughts about thoughts and to perceive that other minds exist and that they can hold ideas and beliefs different from one's own. While human and animal minds share a broadly similar ability to learn from experience, formulate intentions and store memories, careful experiments have repeatedly come up empty when attempting to establish the existence of a theory of mind in nonhumans.
  • This not only detracts from the argument Mr. Peterson seeks to make but reinforces the sense of intellectual parochialism that is the book's chief flaw. Modern evolutionary psychology and cognitive science have done much to illuminate the evolutionary instincts that animate complex human mental processes. Unfortunately, in his determination to level the playing field between human and nonhuman minds, Mr. Peterson has ignored at least half his story.
Javier E

The Politics Of Science, Ctd - The Dish | By Andrew Sullivan - The Daily Beast - 0 views

  • This won't do. The "anti-science" charge has little to do with morality. When someone like Rick Perry - an avowed anthropogenic climate change and evolution denialist - is accused of rejecting science, it's an attack on Perry's epistemological beliefs rather than his moral values. Even though the scientific consensus is clear on both questions, Perry refuses to accept either. By rejecting well-supported scientific truths on, say, theological grounds, he is implicitly denying that the scientific method (rather than, say, theological reasoning) is the best way to determine truths about the natural world. That's what being "anti-science" is.
  • Being pro-science may mean being committed to the idea that advancing scientific knowledge is good for the world, sure, but scientific knowledge doesn't always say we should try to control the natural world. Science is, at its core, a reasoning process - we arrive at certain conclusions through experiments, peer evaluation, etc. So if the best scientific evidence suggests "humans do bad things when they mess with the natural world in fashion X," then the science is telling us not to mess with the natural world in fashion X! Indeed, scientific findings often serve as evidence in debates over the environmental impact of new technology, frequently on both sides. There's nothing intrinsic to scientific epistemology or practice that implies a moral commitment to increasing human control over the natural world or to widespread commercial use of the new technologies its discoveries enable.
  • Another way to put it is that scientists have a goal of advancing human knowledge. They often do that with particular ends in mind (e.g., cancer scientists want to cure cancer), but there's no reason to believe that end is always increasing human control. It could be that a scientist might want to demonstrate the dangers of certain technologies or the limits of human ability to successfully interfere with the workings of the natural world. 
  • ultimately, it's not Levin's broader argument that really matters in this specific case. It's that he's using obscure conceptual arguments to shield genuinely ignorant people like Perry from criticism. Even if every one of the above arguments is wrong, there's a huge difference between some subtle ethical conflicts and flat-out denying the theory of evolution or anthropogenic climate change.
Javier E

Why I Am a Naturalist - NYTimes.com - 1 views

  • Naturalism is the philosophical theory that treats science as our most reliable source of knowledge and scientific method as the most effective route to knowledge.
  • it is now a dominant approach in several areas of philosophy — ethics, epistemology, the philosophy of mind, philosophy of science and, most of all, metaphysics, the study of the basic constituents of reality.
  • Naturalists have applied this insight to reveal the biological nature of human emotion, perception and cognition, language, moral value, social bonds and political institutions. Naturalistic philosophy has returned the favor, helping psychology, evolutionary anthropology and biology solve their problems by greater conceptual clarity about function, adaptation, Darwinian fitness and individual-versus-group selection.
  • 400 years of scientific success in prediction, control and technology shows that physics has made a good start. We should be confident that it will do better than any other approach at getting things right.
  • The second law of thermodynamics, the periodic table, and the principles of natural selection are unlikely to be threatened by future science. Philosophy can therefore rely on them to answer many of its questions without fear of being overtaken by events.
  • “Why can’t there be things only discoverable by non-scientific means, or not discoverable at all?” Professor Williamson asked in his essay. His question may be rhetorical, but the naturalist has an answer to it: nothing that revelation, inspiration or other non-scientific means ever claimed to discover has yet withstood the tests of knowledge that scientific findings attain. What are those tests of knowledge? They are the experimental/observational methods all the natural sciences share, the social sciences increasingly adopt, and that naturalists devote themselves to making explicit.
Javier E

Tools for Thinking - NYTimes.com - 0 views

  • path dependence. This refers to the notion that often “something that seems normal or inevitable today began with a choice that made sense at a particular time in the past, but survived despite the eclipse of the justification for that choice.
  • Einstellung Effect, the idea that we often try to solve problems by using solutions that worked in the past instead of looking at each situation on its own terms.
  • the Focusing Illusion, which holds that “nothing in life is as important as you think it is while you are thinking about it.”
  • Supervenience. Imagine a picture on a computer screen of a dog sitting in a rowboat. It can be described as a picture of a dog, but at a different level it can be described as an arrangement of pixels and colors. The relationship between the two levels is asymmetric. The same image can be displayed at different sizes with different pixels. The high-level properties (dogness) supervene on the low-level properties (pixels).
  • the Fundamental Attribution Error: Don’t try to explain by character traits behavior that is better explained by context.
  • the distinction between emotion and arousal.
  • emergence. We often try to understand problems by taking them apart and studying their constituent parts. But emergent problems can’t be understood this way. Emergent systems are ones in which many different elements interact. The pattern of interaction then produces a new element that is greater than the sum of its parts, which then exercises a top-down influence on the constituent elements.
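The bullet above defines emergence only abstractly. A standard textbook illustration (my addition, not from the column) is Conway's Game of Life: two simple local rules, applied cell by cell, produce a "glider" that travels across the grid. The motion is a property of the pattern as a whole, not of any single cell — exactly the kind of top-down regularity that resists part-by-part analysis.

```python
from collections import Counter

def step(live):
    """Advance Conway's Game of Life one generation on an unbounded grid.

    `live` is a set of (row, col) cells. A cell is alive next generation
    if it has exactly 3 live neighbors, or 2 and it is already alive.
    """
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: five cells whose collective pattern "moves".
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

cells = glider
for _ in range(4):
    cells = step(cells)

# After four generations the same five-cell shape reappears,
# shifted one cell down and one cell right.
print(sorted(cells))
```

No rule mentions motion, yet the pattern translates diagonally every four generations — an emergent property of the interactions.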
Javier E

Scientific Thought Strains Everyone Should Know - NYTimes.com - 0 views

  • In Tuesday’s column I describe a symposium over at Edge.org on what scientific concepts everyone’s cognitive toolbox should hold.
  • the Pareto Principle. We have the idea in our heads that most distributions fall along a bell curve (most people are in the middle). But this is not how the world is organized in sphere after sphere.
  • altruism
  • We survive because we struggle to be the fittest and also because we are really good at cooperation.
  • “temperament dimensions.” She writes that we have four broad temperament constellations. One, built around the dopamine system, regulates enthusiasm for risk. A second, structured around the serotonin system, regulates sociability. A third, organized around the prenatal testosterone system, regulates attention to detail and aggressiveness. A fourth, organized around the estrogen and oxytocin systems, regulates empathy and verbal fluency.
  • “subselves.” This is the idea that we are not just one personality, but we have many subselves that get aroused by different cues
  • the concept of duality, the idea that it is possible to describe the same phenomenon truthfully from two different perspectives. The most famous duality in physics is the wave-particle duality
  • “Shifting Baseline Syndrome.”
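The Pareto point above is easy to make concrete. Here is a quick sketch (my own, not from the column): draw samples from a heavy-tailed Pareto distribution and from a bell curve, then compare how much of the total the top 20% holds in each. The shape parameter 1.16 is the value conventionally associated with the classic "80/20" split.

```python
import random

random.seed(42)
N = 100_000

# Pareto-distributed values: a heavy tail, where a small minority dominates.
pareto = sorted((random.paretovariate(1.16) for _ in range(N)), reverse=True)

# Bell-curve values (clamped to non-negative) for comparison.
normal = sorted((max(0.0, random.gauss(100, 15)) for _ in range(N)), reverse=True)

def top_share(values, frac=0.2):
    """Fraction of the total held by the top `frac` of the population."""
    k = int(len(values) * frac)
    return sum(values[:k]) / sum(values)

print(f"top 20% share, Pareto: {top_share(pareto):.0%}")
print(f"top 20% share, normal: {top_share(normal):.0%}")
```

Under the bell curve the top fifth holds only a modest fraction of the total; under the Pareto distribution it holds the large majority — the pattern seen in wealth, city sizes, word frequencies, and "sphere after sphere."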
Javier E

Rough Type: Nicholas Carr's Blog: Minds like sieves - 0 views

  • They conducted a series of four experiments aimed at answering this question: Does our awareness of our ability to use Google to quickly find any fact or other bit of information influence the way our brains form memories? The answer, they discovered, is yes: "when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it."
  • we seem to have trained our brains to immediately think of using a computer when we're called on to answer a question or otherwise provide some bit of knowledge.
  • people who believed the information would be stored in the computer had a weaker memory of the information than those who assumed that the information would not be available in the computer.
  • "Since search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up."
  • "when people expect information to remain continuously available (such as we expect with Internet access), we are more likely to remember where to find it than we are to remember the details of the item."
  • we've never had an "external memory" so capacious, so available and so easily searched as the web. If, as this study suggests, the way we form (or fail to form) memories is deeply influenced by the mere existence of external information stores, then we may be entering an era in history in which we will store fewer and fewer memories inside our own brains.
  • If a fact stored externally were the same as a memory of that fact stored in our mind, then the loss of internal memory wouldn't much matter. But external storage and biological memory are not the same thing. When we form, or "consolidate," a personal memory, we also form associations between that memory and other memories that are unique to ourselves and also indispensable to the development of deep, conceptual knowledge. The associations, moreover, continue to change with time, as we learn more and experience more. As Emerson understood, the essence of personal memory is not the discrete facts or experiences we store in our mind but "the cohesion" which ties all those facts and experiences together. What is the self but the unique pattern of that cohesion?
  • as memory shifts from the individual mind to the machine's shared database, what happens to that unique "cohesion" that is the self?
Emilio Ergueta

Nietzsche on Love | Issue 104 | Philosophy Now - 0 views

  • What could Friedrich Nietzsche (1844-1900) have to teach us about love? More than we might suppose.
  • Even during these times, between physical suffering and intense periods of writing, he pursued the company of learned women. Moreover, Nietzsche grew up in a family of women, turned to women for friendship, and witnessed his friends courting.
  • By calling our attention to the base, vulgar and selfish qualities of (heterosexual) erotic or sexual love, Nietzsche aims to strip love of its privileged status and demonstrate that what we conceive to be its opposites, such as egoism and greed, are in many instances inextricably bound up in the experience of love.
  • In doing so, Nietzsche disassociates love from its other-worldly Christian-Platonic heritage, and so asserts his ethical claims concerning the value of the Earth over the other-worldly, and the truth of the body over the sacred.
  • Nietzsche speaks critically about the possessive or tyrannical qualities of masculine love alongside its fictionalising tendencies, stating that the natural functions of a woman’s body disgust men because they prevent him having complete access to her as a possession; they also encroach upon the conceptual perfection of love. He writes, “‘The human being under the skin’ is for all lovers a horror and unthinkable, a blasphemy against God and love.”
  • He proposes that love is close to greed and the lust for possession. Love is an instinctual force related to our biological and cultural drives, and as such, cannot be considered a moral good (GS 363).
  • Nietzsche pointedly distinguishes masculine from feminine love by the notions of devotion and fidelity. Whereas women want to surrender completely to love, to approach it as a faith, “to be taken and accepted as a possession” (363), Nietzsche claims male love hinges upon the possessive thirst to acquire more from the lover, and states that men who are inclined towards complete devotion are “not men.”
  • In other words, the experiences of both greed and love are the same drive or instinct, but depending upon the level of satisfaction one has achieved, this drive will be alternatively named ‘greed’ or ‘love’: satisfied people who feel their possessions (their lover for example) threatened by others will name other’s instinct for gain greed or avarice, whereas those who are still searching out something new to desire will impose a positive evaluation on that instinct and call it ‘love’.
  • In order to be successful in love, he counsels women to “simulate a lack of love” and to enact the roles that men find attractive. Nietzsche finds love comedic because it does not consist in some attempt to know the other deeply, but rather in the confirmation of male fantasies in which women perform their constructed gender roles.
  • Nietzsche’s writings on love have not surprisingly been influential on many feminist reflections on sex/gender. Although he is not making moralising claims about how one should love, his discussion of the difficult impact erotic and romantic relationships have on women, as well as his commentary on the ironies both sexes face in love, force his readers of both sexes to examine the roles that they play in love. It is difficult when reading him not to question one’s own performances in romantic relationships.
Javier E

The Death of Adulthood in American Culture - NYTimes.com - 0 views

  • It seems that, in doing away with patriarchal authority, we have also, perhaps unwittingly, killed off all the grown-ups.
  • The journalist and critic Ruth Graham published a polemical essay in Slate lamenting the popularity of young-adult fiction among fully adult readers. Noting that nearly a third of Y.A. books were purchased by readers ages 30 to 44 (most of them presumably without teenage children of their own), Graham insisted that such grown-ups “should feel embarrassed about reading literature for children.”
  • In my main line of work as a film critic, I have watched over the past 15 years as the studios committed their vast financial and imaginative resources to the cultivation of franchises (some of them based on those same Y.A. novels) that advance an essentially juvenile vision of the world. Comic-book movies, family-friendly animated adventures, tales of adolescent heroism and comedies of arrested development do not only make up the commercial center of 21st-century Hollywood. They are its artistic heart.
  • At sea or in the wilderness, these friends managed to escape both from the institutions of patriarchy and from the intimate authority of women, the mothers and wives who represent a check on male freedom.
  • What all of these shows grasp at, in one way or another, is that nobody knows how to be a grown-up anymore. Adulthood as we have known it has become conceptually untenable.
  • From the start, American culture was notably resistant to the claims of parental authority and the imperatives of adulthood. Surveying the canon of American literature in his magisterial “Love and Death in the American Novel,” Leslie A. Fiedler suggested, more than half a century before Ruth Graham, that “the great works of American fiction are notoriously at home in the children’s section of the library.”
  • “The typical male protagonist of our fiction has been a man on the run, harried into the forest and out to sea, down the river or into combat — anywhere to avoid ‘civilization,’ which is to say the confrontation of a man and woman which leads to the fall to sex, marriage and responsibility. One of the factors that determine theme and form in our great books is this strategy of evasion, this retreat to nature and childhood which makes our literature (and life!) so charmingly and infuriatingly ‘boyish.’ ”
  • What Fiedler notes, and what most readers of “Huckleberry Finn” will recognize, is Twain’s continual juxtaposition of Huck’s innocence and instinctual decency with the corruption and hypocrisy of the adult world.
  • we’ve also witnessed the erosion of traditional adulthood in any form, at least as it used to be portrayed in the formerly tried-and-true genres of the urban cop show, the living-room or workplace sitcom and the prime-time soap opera. Instead, we are now in the age of “Girls,” “Broad City,” “Masters of Sex” (a prehistory of the end of patriarchy), “Bob’s Burgers” (a loopy post-“Simpsons” family cartoon) and a flood of goofy, sweet, self-indulgent and obnoxious improv-based web videos.
  • we have a literature of boys’ adventures and female sentimentality. Or, to put it another way, all American fiction is young-adult fiction.
  • The bad boys of rock ‘n’ roll and the pouting screen rebels played by James Dean and Marlon Brando proved Fiedler’s point even as he was making it. So did Holden Caulfield, Dean Moriarty, Augie March and Rabbit Angstrom — a new crop of semi-antiheroes
  • We devolve from Lenny Bruce to Adam Sandler, from “Catch-22” to “The Hangover,” from “Goodbye, Columbus” to “The Forty-Year-Old Virgin.”
  • Unlike the antiheroes of eras past, whose rebellion still accepted the fact of adulthood as its premise, the man-boys simply refused to grow up, and did so proudly. Their importation of adolescent and preadolescent attitudes into the fields of adult endeavor (see “Billy Madison,” “Knocked Up,” “Step Brothers,” “Dodgeball”) delivered a bracing jolt of subversion, at least on first viewing. Why should they listen to uptight bosses, stuck-up rich guys and other readily available symbols of settled male authority?
  • That was only half the story, though. As before, the rebellious animus of the disaffected man-child was directed not just against male authority but also against women.
  • their refusal of maturity also invites some critical reflection about just what adulthood is supposed to mean. In the old, classic comedies of the studio era — the screwbally roller coasters of marriage and remarriage, with their dizzying verbiage and sly innuendo — adulthood was a fact. It was incontrovertible and burdensome but also full of opportunity. You could drink, smoke, flirt and spend money.
  • The desire of the modern comic protagonist, meanwhile, is to wallow in his own immaturity, plumbing its depths and reveling in its pleasures.
Javier E

Technology Imperialism, the Californian Ideology, and the Future of Higher Education - 2 views

  • What I hope to make explicit today is how much California – the place, the concept, “the dream machine” – shapes (wants to shape) the future of technology and the future of education.
  • In an announcement on Facebook – of course – Zuckerberg argued that “connectivity is a human right.”
  • As Zuckerberg frames it at least, the “human right” in this case is participation in the global economy
  • This is a revealing definition of “human rights,” I’d argue, particularly as it’s one that never addresses things like liberty, equality, or justice. It never addresses freedom of expression or freedom of assembly or freedom of association.
  • in certain countries, a number of people say they do not use the Internet yet they talk about how much time they spend on Facebook. According to one survey, 11% of Indonesians who said they used Facebook also said they did not use the Internet. A survey in Nigeria had similar results:
  • Evgeny Morozov has described this belief as “Internet-centrism,” an ideology he argues permeates the tech industry, its PR wing the tech blogosphere, and increasingly government policy
  • “Internet-centrism” describes the tendency to see “the Internet” – Morozov uses quotations around the phrase – as a new yet unchanging, autonomous, benevolent, and inevitable socio-technological development. “The Internet” is a master framework for how all institutions will supposedly operate moving forward
  • “The opportunity to connect” as a human right assumes that “connectivity” will hasten the advent of these other rights, I suppose – that the Internet will topple dictatorships, for example, that it will extend participation in civic life to everyone and, for our purposes here at this conference, that it will “democratize education.”
  • Empire is not simply an endeavor of the nation-state – we have empire through technology (that’s not new) and now, the technology industry as empire.
  • Facebook is really just synecdochal here, I should add – just one example of the forces I think are at play, politically, economically, technologically, culturally.
  • it matters at the level of ideology. Infrastructure is ideological, of course. The new infrastructure – “the Internet” if you will – has a particular political, economic, and cultural bent to it. It is not neutral.
  • This infrastructure matters. In this case, this is a French satellite company (Eutelsat). This is an American social network (Facebook). Mark Zuckerberg’s altruistic rhetoric aside, this is their plan – an economic plan – to monetize the world’s poor.
  • The content and the form of “connectivity” perpetuate imperialism, and not only in Africa but in all of our lives. Imperialism at the level of infrastructure – not just cultural imperialism but technological imperialism
  • “The Silicon Valley Narrative,” as I call it, is the story that the technology industry tells about the world – not only the world-as-is but the world-as-Silicon-Valley-wants-it-to-be.
  • To better analyze and assess both technology and education technology requires our understanding of these as ideological, argues Neil Selwyn – “‘a site of social struggle’ through which hegemonic positions are developed, legitimated, reproduced and challenged.”
  • This narrative has several commonly used tropes
  • It often features a hero: the technology entrepreneur. Smart. Independent. Bold. Risk-taking. White. Male
  • “The Silicon Valley narrative” invokes themes like “innovation” and “disruption.” It privileges the new; everything else that can be deemed “old” is viewed as obsolete.
  • It contends that its workings are meritocratic: anyone who hustles can make it.
  • “The Silicon Valley narrative” fosters a distrust of institutions – the government, the university. It is neoliberal. It hates paying taxes.
  • “The Silicon Valley narrative” draws from the work of Ayn Rand; it privileges the individual at all costs; it calls this “personalization.”
  • “The Silicon Valley narrative” does not neatly co-exist with public education. We forget this at our peril. This makes education technology, specifically, an incredibly fraught area.
  • Here’s the story I think we like to hear about ed-tech, about distance education, about “connectivity” and learning: Education technology is supportive, not exploitative. Education technology opens, not forecloses, opportunities. Education technology is driven by a rethinking of teaching and learning, not expanding markets or empire. Education technology meets individual and institutional and community goals.
  • That’s not really what the “Silicon Valley narrative” says about education
  • It is interested in data extraction and monetization and standardization and scale. It is interested in markets and return on investment. “Education is broken,” and technology will fix it
  • If “Silicon Valley” isn’t quite accurate, then I must admit that the word “narrative” is probably inadequate too
  • The better term here is “ideology.”
  • Facebook is “the Internet” for a fairly sizable number of people. They know nothing else – conceptually, experientially. And, let’s be honest, Facebook wants to be “the Internet” for everyone.
  • We tend to not see technology as ideological – its connections to libertarianism, neoliberalism, global capitalism, empire.
  • The California ideology ignores race and labor and the water supply; it is sustained by air and fantasy. It is built upon white supremacy and imperialism.
  • As is the technology sector, which has its own history, of course, in warfare and cryptography.
  • So far this year, some $3.76 billion of venture capital has been invested in education technology – a record-setting figure. That money will change the landscape – that’s its intention. That money carries with it a story about the future; it carries with it an ideology.
  • When a venture capitalist says that “software is eating the world,” we can push back on the inevitability implied in that. We can resist – not in the name of clinging to “the old” as those in educational institutions are so often accused of doing – but we can resist in the name of freedom and justice and a future that isn’t dictated by the wealthiest white men in Hollywood or Silicon Valley.
  • We in education would be naive, I think, to think that the designs that venture capitalists and technology entrepreneurs have for us would be any less radical than creating a new state, like Draper’s proposed state of Silicon Valley, that would be enormously wealthy and politically powerful.
  • When I hear talk of “unbundling” in education – one of the latest gerunds you’ll hear venture capitalists and ed-tech entrepreneurs invoke, meaning the disassembling of institutions into products and services – I can’t help but think of the “unbundling” that Draper wished to do to my state: carving up land and resources, shifting tax revenue and tax burdens, creating new markets, privatizing public institutions, redistributing power and doing so explicitly not in the service of equity or justice.
  • I want to show you this map, a proposal – a failed proposal, thankfully – by venture capitalist Tim Draper to split the state of California into six separate states: Jefferson, North California, Silicon Valley, Central California, West California, and South California. The proposal, which Draper tried to collect enough signatures to get on the ballot in California, would have created the richest state in the US – Silicon Valley would be first in per-capita income. It would also have created the nation’s poorest state, Central California, which would rank even below Mississippi.
  • that’s not all that Silicon Valley really does.
Javier E

How Politics Shaped General Relativity - The New York Times - 0 views

  • Less commonly understood, however, is how thoroughly the research into this profound, abstruse and seemingly otherworldly theory was shaped by the messy human dramas of the past century.
  • Some of the barriers to acceptance were conceptual.
  • But other obstacles were political. The turmoil and disruptions of World War I, for example, prevented many people from learning and thinking about general relativity
  • Einstein noted that the public recognition of his accomplishment had a political slant. “Today I am described in Germany as a ‘German servant,’ and in England as a ‘Swiss Jew,’ ” he said. “Should it ever be my fate to be represented as a bête noire, I should, on the contrary, become a ‘Swiss Jew’ for the Germans and a ‘German servant’ for the English.”
  • After World War II, a new generation of physicists in the United States began to focus on relativity from their perch within the “military-industrial complex.” Here, political exigencies accelerated a deeper appreciation of Einstein’s theory, in unanticipated ways.
  • With GPS, the warping of time that Einstein imagined assumed operational significance. (Later, GPS was opened to the commercial market, and now billions of people rely on general relativity to find their place in the world, every single day.)
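The operational significance mentioned above is easy to quantify. A short sketch (my own, using standard textbook values for the GPS orbit, not figures from the article) of the two relativistic corrections to a satellite clock:

```python
# Net relativistic clock drift for a GPS satellite, per day.
GM = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
c = 299_792_458.0      # speed of light, m/s
R_earth = 6.371e6      # mean Earth radius, m
r_orbit = 2.6571e7     # GPS orbital radius (~26,571 km), m
day = 86_400           # seconds per day

# Special relativity: orbital speed makes the satellite clock run slow.
v = (GM / r_orbit) ** 0.5
sr_us_per_day = -(v**2 / (2 * c**2)) * day * 1e6    # microseconds/day

# General relativity: weaker gravity at altitude makes it run fast.
gr_us_per_day = (GM / c**2) * (1 / R_earth - 1 / r_orbit) * day * 1e6

net = sr_us_per_day + gr_us_per_day
print(f"SR: {sr_us_per_day:+.1f} us/day, "
      f"GR: {gr_us_per_day:+.1f} us/day, net: {net:+.1f} us/day")
```

The general-relativistic speedup dominates, leaving a net drift of roughly +38 microseconds per day; uncorrected, that drift would accumulate into position errors on the order of kilometers per day, which is why the system builds Einstein's corrections in.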
kushnerha

'Run, Hide, Fight' Is Not How Our Brains Work - The New York Times - 0 views

  • One suggestion, promoted by the Federal Bureau of Investigation and Department of Homeland Security, and now widely disseminated, is “run, hide, fight.” The idea is: Run if you can; hide if you can’t run; and fight if all else fails. This three-step program appeals to common sense, but whether it makes scientific sense is another question.
  • Underlying the idea of “run, hide, fight” is the presumption that volitional choices are readily available in situations of danger. But the fact is, when you are in danger, whether it is a bicyclist speeding at you or a shooter locked and loaded, you may well find yourself frozen, unable to act and think clearly.
  • Freezing is not a choice. It is a built-in impulse controlled by ancient circuits in the brain involving the amygdala and its neural partners, and is automatically set into motion by external threats. By contrast, the kinds of intentional actions implied by “run, hide, fight” require newer circuits in the neocortex.
  • Contemporary science has refined the old “fight or flight” concept — the idea that those are the two hard-wired options when in mortal danger — to the updated “freeze, flee, fight.”
  • Why do we freeze? It’s part of a predatory defense system that is wired to keep the organism alive. Not only do we do it, but so do other mammals and other vertebrates. Even invertebrates — like flies — freeze. If you are freezing, you are less likely to be detected if the predator is far away, and if the predator is close by, you can postpone the attack (movement by the prey is a trigger for attack)
  • The freezing reaction is accompanied by a hormonal surge that helps mobilize your energy and focus your attention. While the hormonal and other physiological responses that accompany freezing are there for good reason, in highly stressful situations the secretions can be excessive and create impediments to making informed choices.
  • Sometimes freezing is brief and sometimes it persists. This can reflect the particular situation you are in, but also your individual predisposition. Some people naturally have the ability to think through a stressful situation, or to even be motivated by it, and will more readily run, hide or fight as required.
  • we have created a version of this predicament using rats. The animals have been trained, through trial and error, to “know” how to escape in a certain dangerous situation. But when they are actually placed in the dangerous situation, some rats simply cannot execute the response — they stay frozen. If, however, we artificially shut down a key subregion of the amygdala in these rats, they are able to overcome the built-in impulse to freeze and use their “knowledge” about what to do.
  • shown that if people cognitively reappraise a situation, it can dampen their amygdala activity. This dampening may open the way for conceptually based actions, like “run, hide, fight,” to replace freezing and other hard-wired impulses.
  • How to encourage this kind of cognitive reappraisal? Perhaps we could harness the power of social media to conduct a kind of collective cultural training in which we learn to reappraise the freezing that occurs in dangerous situations. In most of us, freezing will occur no matter what. It’s just a matter of how long it will last.
Javier E

Jordan Peterson Comes to Aspen - The Atlantic - 0 views

  • Peterson is traveling the English-speaking world in order to spread the message of this core conviction: that the way to fix what ails Western societies is a psychological project, targeted at helping individuals to get their lives in order, not a sociological project that seeks to improve society through politics, or popular culture, or by focusing on class, racial, or gender identity.
  • the Aspen Ideas Festival, which is co-sponsored by the Aspen Institute and The Atlantic, was an anomaly in this series of public appearances: a gathering largely populated by people—Democrats and centrist Republicans, corporate leaders, academics, millionaire philanthropists, journalists—invested in the contrary proposition, that the way to fix what ails society is a sociological project, one that effects change by focusing on politics, or changing popular culture, or spurring technological advances, or investing more in diversity and inclusiveness.
  • Many of its attendees, like many journalists, are most interested in Peterson as a political figure at the center of controversies
  • Peterson deserves a full, appropriately complex accounting of his best and worst arguments; I intend to give him one soon. For now, I can only tell you how the Peterson phenomenon manifested one night in Aspen
  • “For the first time in human history the spoken word has the same reach as the written word, and there are no barriers to entry. That’s a Gutenberg revolution,” he said. “That’s a big deal. This is a game changer. The podcast world is also a Gutenberg moment but it’s even more extensive. The problem with books is that you can’t do anything else while you’re reading. But if you’re listening to a podcast you can be driving a tractor or a long haul truck or doing the dishes. So podcasts free up two hours a day for people to engage in educational activity they otherwise wouldn’t be able to engage in. That’s one-eighth of people’s lives. You’re handing people a lot of time back to engage in high-level intellectual education.”
  • that technological revolution has revealed something good that we didn’t know before: “The narrow bandwidth of TV has made us think that we are stupider than we are. And people have a real hunger for deep intellectual dialogue.”
  • I’ve known for years that the university underserved the community, because we assumed that university education is for 18- to 22-year-olds, which is a proposition that’s so absurd it is absolutely mind-boggling that anyone ever conceptualized it. Why wouldn’t you take university courses throughout your entire life? What, you stop searching for wisdom when you’re 22? I don’t think so. You don’t even start until you’re like in your mid 20s. So I knew universities were underserving the broader community a long time ago. But there wasn’t a mechanism whereby that could be rectified.
  • Universities are beyond forgiveness, he argued, because due to the growing ranks of administrators, there’s been a radical increase in tuition. “Unsuspecting students are given free access to student loans that will cripple them through their 30s and their 40s, and the universities are enticing them to extend their carefree adolescence for a four year period at the cost of mortgaging their future in a deal that does not allow for escape through bankruptcy,” he complained. “So it’s essentially a form of indentured servitude. There’s no excuse for that … That cripples the economy because the students become overlaid with debt that they’ll never pay off at the time when they should be at the peak of their ability to take entrepreneurial risks. That’s absolutely appalling.”
  • A critique I frequently hear from Peterson’s critics is that everything he says is either obvious or wrong. I think that critique fails insofar as I sometimes see some critics calling one of his statements obvious even as others insist it is obviously wrong.
  • a reliable difference among men and women cross-culturally is that men are more aggressive than women. Now what's the evidence for that? Here's one piece of evidence: There are 10 times as many men in prison. Now is that a sociocultural construct? It's like, no, it's not a sociocultural construct. Okay?
  • Here's another piece of data. Women try to commit suicide more than men by a lot, and that's because women are more prone to depression and anxiety than men are. And there are reasons for that, and that's cross-cultural as well. Now men are way more likely to actually commit suicide. Why? Because they're more aggressive so they use lethal means. So now the question is how much more aggressive are men than women? The answer is not very much. So the claim that men and women are more the same than different is actually true. This is where you have to know something about statistics to understand the way the world works, instead of just applying your a priori ideological presuppositions to things that are too complex to fit in that rubric.
  • So if you draw two people out of a crowd, one man and one woman, and you had to lay a bet on who was more aggressive, and you bet on the woman, you'd win 40 percent of the time. That's quite a lot. It isn't 50 percent of the time which would be no differences. But it’s a lot. There are lots of women who are more aggressive than lots of men. So the curves overlap a lot. There's way more similarity than difference. And this is along the dimension where there's the most difference. But here's the problem. You can take small differences at the average of a distribution. Then the distributions move off to the side. And then all the action is at the tail. So here's the situation. You don't care about how aggressive the average person is. It's not that relevant. What people care about is who is the most aggressive person out of 100, because that's the person you'd better watch out for.
  • Whenever I'm interviewed by journalists who have the scent of blood in their nose, let's say, they're very willing and able to characterize the situation I find myself in as political. But that's because they can't see the world in any other manner. The political is a tiny fraction of the world. And what I'm doing isn't political. It's psychological or theological. The political element is peripheral. And if people come to the live lectures, let's say, that's absolutely self-evident
  • In a New York Times article titled, “Jordan Peterson, Custodian of the Patriarchy,” the writer Nellie Bowles quoted her subject as follows:
  • Violent attacks are what happens when men do not have partners, Mr. Peterson says, and society needs to work to make sure those men are married. “He was angry at God because women were rejecting him,” Mr. Peterson says of the Toronto killer. “The cure for that is enforced monogamy. That’s actually why monogamy emerges.” Mr. Peterson does not pause when he says this. Enforced monogamy is, to him, simply a rational solution. Otherwise women will all only go for the most high-status men, he explains, and that couldn’t make either gender happy in the end.
  • Ever since, some Peterson critics have claimed that Peterson wants to force women to have sex with male incels, or something similarly dystopian.
  • ...it's an anthropological truism generated primarily through scholars on the left, just so everybody is clear about it, that societies that use monogamy as a social norm, which by the way is virtually every human society that ever existed, do that in an attempt to control the aggression that goes along with polygamy. It's like ‘Oh my God, how contentious can you get.’ Well, how many of you are in monogamous relationships? A majority. How is that enforced?...
  • If everyone you talk to is boring it’s not them! And so if you're rejected by the opposite sex, if you’re heterosexual, then you're wrong, they're not wrong, and you've got some work to do, man. You've got some difficult work to do. And there isn't anything I've been telling young men that's clearer than that … What I've been telling people is take the responsibility for failure onto yourself. That's a hint that you've got work to do. It could also be a hint that you're young and useless and why the hell would anybody have anything to do with you because you don't have anything to offer. And that's rectifiable. Maturity helps to rectify that.
  • And what's the gender? Men. Because if you go two standard deviations out from the mean on two curves that overlap but are disjointed, then you derive an overwhelming preponderance of the overrepresented group. That's why men are about 10 times more likely to be in prison.  
  • Weiss: You are often characterized, at least in the mainstream press, as being transphobic. If you had a student come to you and say, I was born female, I now identify as male, I want you to call me by male pronouns. Would you say yes to that?
  • Peterson: Well, it would depend on the student and the context and why I thought they were asking me and what I believe their demand actually characterized, and all of that. Because that can be done in a way that is genuine and acceptable, and a way that is manipulative and unacceptable. And if it was genuine and acceptable then I would have no problem with it. And if it was manipulative and unacceptable then not a chance. And you might think, ‘Well, who am I to judge?’ Well, first of all, I am a clinical psychologist, I've talked to people for about 25,000 hours. And I'm responsible for judging how I am going to use my words. I'd judge the same way I judge all my interactions with people, which is to the best of my ability, and characterized by all the errors that I'm prone to. I'm not saying that my judgment would be unerring. I live with the consequences and I'm willing to accept the responsibility.
  • But also to be clear about this, it never happened––I never refused to call anyone by anything they had asked me to call them by, although that's been reported multiple times. It's a complete falsehood. And it had nothing to do with the transgender issue as far as I'm concerned.
  • type one and type two error problem
  • note what his avowed position is: that he has never refused to call a transgender person by their preferred pronoun, that he has done so many times, that he would always try to err on the side of believing a request to be earnest, and that he reserves the right to decline a request he believes to be in bad faith. Whether one finds that to be reasonable or needlessly difficult, it seems irresponsible to tell trans people that a prominent intellectual hates them or is deeply antagonistic to them when the only seeming conflict is utterly hypothetical and ostensibly not even directed against people that Peterson believes to be trans, but only against people whom he does not believe to be trans
Javier E
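The distribution argument in the annotations above (a roughly 60/40 split between two heavily overlapping curves, with the imbalance growing in the extreme tail) can be sketched numerically. The effect size below (a mean difference of 0.35 standard deviations) is an invented illustration chosen to reproduce the quoted 40 percent figure, not a number taken from the article:

```python
import random

random.seed(0)

# Two hypothetical trait distributions with a modest mean difference
# (Cohen's d = 0.35, an assumed value for illustration only).
N = 200_000
men = [random.gauss(0.35, 1.0) for _ in range(N)]
women = [random.gauss(0.00, 1.0) for _ in range(N)]

# "Probability of superiority": draw one person from each group at random
# and ask how often the person from the lower-mean group scores higher.
wins = sum(w > m for w, m in zip(women, men))
print(f"woman scores higher in {wins / N:.0%} of random pairs")  # roughly 40%

# Tail effect: far above the mean, the higher-mean group is heavily
# over-represented even though the two curves overlap almost entirely.
cut = 2.5
men_tail = sum(m > cut for m in men)
women_tail = sum(w > cut for w in women)
print(f"ratio beyond {cut} SD: {men_tail / max(women_tail, 1):.1f} : 1")
```

With a small average difference, the middle of the two curves is nearly indistinguishable, yet the ratio in the far tail is several times more lopsided than the 60/40 split near the center; that asymmetry between averages and extremes is the structure of the quoted prison-population argument.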

Opinion | Is Computer Code a Foreign Language? - The New York Times - 1 views

  • the proposal that foreign language learning can be replaced by computer coding knowledge is misguided:
  • Our profound and impressive ability to create complex tools with which to manipulate our environments is secondary to our ability to conceptualize and communicate about those environments in natural languages.
  • more urgent is my alarm at the growing tendency to accept and even foster the decline of the sort of interpersonal human contact that learning languages both requires and cultivates.
  • Language is an essential — perhaps the essential — marker of our species. We learn in and through natural languages; we develop our most fundamental cognitive skills by speaking and hearing languages; and we ultimately assume our identities as human beings and members of communities by exercising those languages
  • It stems from a widely held but mistaken belief that science and technology education should take precedence over subjects like English, history and foreign languages.
  • Natural languages aren’t just more complex versions of the algorithms with which we teach machines to do tasks; they are also the living embodiments of our essence as social animals.
  • We express our love and our losses, explore beauty, justice and the meaning of our existence, and even come to know ourselves all through natural languages.
  • we are fundamentally limited in how much we can know about another’s thoughts and feelings, and that this limitation and the desire to transcend it is essential to our humanity
  • For us humans, communication is about much more than getting information or following instructions; it’s about learning who we are by interacting with others.
Javier E

ROUGH TYPE | Nicholas Carr's blog - 0 views

  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Social skills and relationships seem to suffer as well.
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience. To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
Javier E

Opinion | How Genetics Is Changing Our Understanding of 'Race' - The New York Times - 0 views

  • In 1942, the anthropologist Ashley Montagu published “Man’s Most Dangerous Myth: The Fallacy of Race,” an influential book that argued that race is a social concept with no genetic basis.
  • Beginning in 1972, genetic findings began to be incorporated into this argument. That year, the geneticist Richard Lewontin published an important study of variation in protein types in blood. He grouped the human populations he analyzed into seven “races” — West Eurasians, Africans, East Asians, South Asians, Native Americans, Oceanians and Australians — and found that around 85 percent of variation in the protein types could be accounted for by variation within populations and “races,” and only 15 percent by variation across them. To the extent that there was variation among humans, he concluded, most of it was because of “differences between individuals.”
  • In this way, a consensus was established that among human populations there are no differences large enough to support the concept of “biological race.” Instead, it was argued, race is a “social construct,” a way of categorizing people that changes over time and across countries.
  • It is true that race is a social construct. It is also true, as Dr. Lewontin wrote, that human populations “are remarkably similar to each other” from a genetic point of view.
  • this consensus has morphed, seemingly without questioning, into an orthodoxy. The orthodoxy maintains that the average genetic differences among people grouped according to today’s racial terms are so trivial when it comes to any meaningful biological traits that those differences can be ignored.
  • With the help of these tools, we are learning that while race may be a social construct, differences in genetic ancestry that happen to correlate to many of today’s racial constructs are real.
  • I have deep sympathy for the concern that genetic discoveries could be misused to justify racism. But as a geneticist I also know that it is simply no longer possible to ignore average genetic differences among “races.”
  • Groundbreaking advances in DNA sequencing technology have been made over the last two decades
  • Care.
  • The orthodoxy goes further, holding that we should be anxious about any research into genetic differences among populations
  • You will sometimes hear that any biological differences among populations are likely to be small, because humans have diverged too recently from common ancestors for substantial differences to have arisen under the pressure of natural selection. This is not true. The ancestors of East Asians, Europeans, West Africans and Australians were, until recently, almost completely isolated from one another for 40,000 years or longer, which is more than sufficient time for the forces of evolution to work
  • I am worried that well-meaning people who deny the possibility of substantial biological differences among human populations are digging themselves into an indefensible position, one that will not survive the onslaught of science.
  • I am also worried that whatever discoveries are made — and we truly have no idea yet what they will be — will be cited as “scientific proof” that racist prejudices and agendas have been correct all along, and that those well-meaning people will not understand the science well enough to push back against these claims.
  • This is why it is important, even urgent, that we develop a candid and scientifically up-to-date way of discussing any such difference
  • While most people will agree that finding a genetic explanation for an elevated rate of disease is important, they often draw the line there. Finding genetic influences on a propensity for disease is one thing, they argue, but looking for such influences on behavior and cognition is another
  • Is performance on an intelligence test or the number of years of school a person attends shaped by the way a person is brought up? Of course. But does it measure something having to do with some aspect of behavior or cognition? Almost certainly.
  • Recent genetic studies have demonstrated differences across populations not just in the genetic determinants of simple traits such as skin color, but also in more complex traits like bodily dimensions and susceptibility to diseases.
  • in Iceland, there has been measurable genetic selection against the genetic variations that predict more years of education in that population just within the last century.
  • consider what kinds of voices are filling the void that our silence is creating
  • Nicholas Wade, a longtime science journalist for The New York Times, rightly notes in his 2014 book, “A Troublesome Inheritance: Genes, Race and Human History,” that modern research is challenging our thinking about the nature of human population differences. But he goes on to make the unfounded and irresponsible claim that this research is suggesting that genetic factors explain traditional stereotypes.
  • 139 geneticists (including myself) pointed out in a letter to The New York Times about Mr. Wade’s book, there is no genetic evidence to back up any of the racist stereotypes he promotes.
  • Another high-profile example is James Watson, the scientist who in 1953 co-discovered the structure of DNA, and who was forced to retire as head of the Cold Spring Harbor Laboratories in 2007 after he stated in an interview — without any scientific evidence — that research has suggested that genetic factors contribute to lower intelligence in Africans than in Europeans.
  • What makes Dr. Watson’s and Mr. Wade’s statements so insidious is that they start with the accurate observation that many academics are implausibly denying the possibility of average genetic differences among human populations, and then end with a claim — backed by no evidence — that they know what those differences are and that they correspond to racist stereotypes
  • They use the reluctance of the academic community to openly discuss these fraught issues to provide rhetorical cover for hateful ideas and old racist canards.
  • This is why knowledgeable scientists must speak out. If we abstain from laying out a rational framework for discussing differences among populations, we risk losing the trust of the public and we actively contribute to the distrust of expertise that is now so prevalent.
  • If scientists can be confident of anything, it is that whatever we currently believe about the genetic nature of differences among populations is most likely wrong.
  • For example, my laboratory discovered in 2016, based on our sequencing of ancient human genomes, that “whites” are not derived from a population that existed from time immemorial, as some people believe. Instead, “whites” represent a mixture of four ancient populations that lived 10,000 years ago and were each as different from one another as Europeans and East Asians are today.
  • For me, a natural response to the challenge is to learn from the example of the biological differences that exist between males and females
  • The differences between the sexes are far more profound than those that exist among human populations, reflecting more than 100 million years of evolution and adaptation. Males and females differ by huge tracts of genetic material
  • How do we accommodate the biological differences between men and women? I think the answer is obvious: We should both recognize that genetic differences between males and females exist and we should accord each sex the same freedoms and opportunities regardless of those differences
  • Fulfilling these aspirations in practice is a challenge. Yet conceptually it is straightforward.
  • Compared with the enormous differences that exist among individuals, differences among populations are on average many times smaller, so it should be only a modest challenge to accommodate a reality in which the average genetic contributions to human traits differ.
Javier E

At the Existentialist Café: Freedom, Being, and Apricot Cocktails with Jean-P... - 0 views

  • The phenomenologists’ leading thinker, Edmund Husserl, provided a rallying cry, ‘To the things themselves!’ It meant: don’t waste time on the interpretations that accrue upon things, and especially don’t waste time wondering whether the things are real. Just look at this that’s presenting itself to you, whatever this may be, and describe it as precisely as possible.
  • You might think you have defined me by some label, but you are wrong, for I am always a work in progress. I create myself constantly through action, and this is so fundamental to my human condition that, for Sartre, it is the human condition, from the moment of first consciousness to the moment when death wipes it out. I am my own freedom: no more, no less.
  • Sartre wrote like a novelist — not surprisingly, since he was one. In his novels, short stories and plays as well as in his philosophical treatises, he wrote about the physical sensations of the world and the structures and moods of human life. Above all, he wrote about one big subject: what it meant to be free. Freedom, for him, lay at the heart of all human experience, and this set humans apart from all other kinds of object.
  • Sartre listened to his problem and said simply, ‘You are free, therefore choose — that is to say, invent.’ No signs are vouchsafed in this world, he said. None of the old authorities can relieve you of the burden of freedom. You can weigh up moral or practical considerations as carefully as you like, but ultimately you must take the plunge and do something, and it’s up to you what that something is.
  • Even if the situation is unbearable — perhaps you are facing execution, or sitting in a Gestapo prison, or about to fall off a cliff — you are still free to decide what to make of it in mind and deed. Starting from where you are now, you choose. And in choosing, you also choose who you will be.
  • The war had made people realise that they and their fellow humans were capable of departing entirely from civilised norms; no wonder the idea of a fixed human nature seemed questionable.
  • If this sounds difficult and unnerving, it’s because it is. Sartre does not deny that the need to keep making decisions brings constant anxiety. He heightens this anxiety by pointing out that what you do really matters. You should make your choices as though you were choosing on behalf of the whole of humanity, taking the entire burden of responsibility for how the human race behaves. If you avoid this responsibility by fooling yourself that you are the victim of circumstance or of someone else’s bad advice, you are failing to meet the demands of human life and choosing a fake existence, cut off from your own ‘authenticity’.
  • Along with the terrifying side of this comes a great promise: Sartre’s existentialism implies that it is possible to be authentic and free, as long as you keep up the effort.
  • almost all agreed that it was, as an article in Les nouvelles littéraires phrased it, a ‘sickening mixture of philosophic pretentiousness, equivocal dreams, physiological technicalities, morbid tastes and hesitant eroticism … an introspective embryo that one would take distinct pleasure in crushing’.
  • he offered a philosophy designed for a species that had just scared the hell out of itself, but that finally felt ready to grow up and take responsibility.
  • In this rebellious world, just as with the Parisian bohemians and Dadaists in earlier generations, everything that was dangerous and provocative was good, and everything that was nice or bourgeois was bad.
  • Such interweaving of ideas and life had a long pedigree, although the existentialists gave it a new twist. Stoic and Epicurean thinkers in the classical world had practised philosophy as a means of living well, rather than of seeking knowledge or wisdom for their own sake. By reflecting on life’s vagaries in philosophical ways, they believed they could become more resilient, more able to rise above circumstances, and better equipped to manage grief, fear, anger, disappointment or anxiety.
  • In the tradition they passed on, philosophy is neither a pure intellectual pursuit nor a collection of cheap self-help tricks, but a discipline for flourishing and living a fully human, responsible life.
  • For Kierkegaard, Descartes had things back to front. In his own view, human existence comes first: it is the starting point for everything we do, not the result of a logical deduction. My existence is active: I live it and choose it, and this precedes any statement I can make about myself.
  • Studying our own moral genealogy cannot help us to escape or transcend ourselves. But it can enable us to see our illusions more clearly and lead a more vital, assertive existence.
  • What was needed, he felt, was not high moral or theological ideals, but a deeply critical form of cultural history or ‘genealogy’ that would uncover the reasons why we humans are as we are, and how we came to be that way. For him, all philosophy could even be redefined as a form of psychology, or history.
  • For those oppressed on grounds of race or class, or for those fighting against colonialism, existentialism offered a change of perspective — literally, as Sartre proposed that all situations be judged according to how they appeared in the eyes of those most oppressed, or those whose suffering was greatest.
  • She observed that we need not expect moral philosophers to ‘live by’ their ideas in a simplistic way, as if they were following a set of rules. But we can expect them to show how their ideas are lived in. We should be able to look in through the windows of a philosophy, as it were, and see how people occupy it, how they move about and how they conduct themselves.
  • the existentialists inhabited their historical and personal world, as they inhabited their ideas. This notion of ‘inhabited philosophy’ is one I’ve borrowed from the English philosopher and novelist Iris Murdoch, who wrote the first full-length book on Sartre and was an early adopter of existentialism
  • What is existentialism anyway?
  • An existentialist who is also phenomenological provides no easy rules for dealing with this condition, but instead concentrates on describing lived experience as it presents itself. — By describing experience well, he or she hopes to understand this existence and awaken us to ways of living more authentic lives.
  • Existentialists concern themselves with individual, concrete human existence. — They consider human existence different from the kind of being other things have. Other entities are what they are, but as a human I am whatever I choose to make of myself at every moment. I am free — and therefore I’m responsible for everything I do, a dizzying fact which causes an anxiety inseparable from human existence itself.
  • On the other hand, I am only free within situations, which can include factors in my own biology and psychology as well as physical, historical and social variables of the world into which I have been thrown. — Despite the limitations, I always want more: I am passionately involved in personal projects of all kinds. — Human existence is thus ambiguous: at once boxed in by borders and yet transcendent and exhilarating. —
  • The first part of this is straightforward: a phenomenologist’s job is to describe. This is the activity that Husserl kept reminding his students to do. It meant stripping away distractions, habits, clichés of thought, presumptions and received ideas, in order to return our attention to what he called the ‘things themselves’. We must fix our beady gaze on them and capture them exactly as they appear, rather than as we think they are supposed to be.
  • Husserl therefore says that, to phenomenologically describe a cup of coffee, I should set aside both the abstract suppositions and any intrusive emotional associations. Then I can concentrate on the dark, fragrant, rich phenomenon in front of me now. This ‘setting aside’ or ‘bracketing out’ of speculative add-ons Husserl called epoché — a term borrowed from the ancient Sceptics,
  • The point about rigour is crucial; it brings us back to the first half of the command to describe phenomena. A phenomenologist cannot get away with listening to a piece of music and saying, ‘How lovely!’ He or she must ask: is it plaintive? is it dignified? is it colossal and sublime? The point is to keep coming back to the ‘things themselves’ — phenomena stripped of their conceptual baggage — so as to bail out weak or extraneous material and get to the heart of the experience.
  • Husserlian ‘bracketing out’ or epoché allows the phenomenologist to temporarily ignore the question ‘But is it real?’, in order to ask how a person experiences his or her world. Phenomenology gives a formal mode of access to human experience. It lets philosophers talk about life more or less as non-philosophers do, while still being able to tell themselves they are being methodical and rigorous.
  • Besides claiming to transform the way we think about reality, phenomenologists promised to change how we think about ourselves. They believed that we should not try to find out what the human mind is, as if it were some kind of substance. Instead, we should consider what it does, and how it grasps its experiences.
  • For Brentano, this reaching towards objects is what our minds do all the time. Our thoughts are invariably of or about something, he wrote: in love, something is loved, in hatred, something is hated, in judgement, something is affirmed or denied. Even when I imagine an object that isn’t there, my mental structure is still one of ‘about-ness’ or ‘of-ness’.
  • Except in deepest sleep, my mind is always engaged in this aboutness: it has ‘intentionality’. Having taken the germ of this from Brentano, Husserl made it central to his whole philosophy.
  • Husserl saw in the idea of intentionality a way to sidestep two great unsolved puzzles of philosophical history: the question of what objects ‘really’ are, and the question of what the mind ‘really’ is. By doing the epoché and bracketing out all consideration of reality from both topics, one is freed to concentrate on the relationship in the middle. One can apply one’s descriptive energies to the endless dance of intentionality that takes place in our lives: the whirl of our minds as they seize their intended phenomena one after the other and whisk them around the floor,
  • Understood in this way, the mind hardly is anything at all: it is its aboutness. This makes the human mind (and possibly some animal minds) different from any other naturally occurring entity. Nothing else can be as thoroughly about or of things as the mind is:
  • Some Eastern meditation techniques aim to still this scurrying creature, but the extreme difficulty of this shows how unnatural it is to be mentally inert. Left to itself, the mind reaches out in all directions as long as it is awake — and even carries on doing it in the dreaming phase of its sleep.
  • a mind that is experiencing nothing, imagining nothing, or speculating about nothing can hardly be said to be a mind at all.
  • Three simple ideas — description, phenomenon, intentionality — provided enough inspiration to keep roomfuls of Husserlian assistants busy in Freiburg for decades. With all of human existence awaiting their attention, how could they ever run out of things to do?
  • For Sartre, this gives the mind an immense freedom. If we are nothing but what we think about, then no predefined ‘inner nature’ can hold us back. We are protean.
  • Real, not real; inside, outside; what difference did it make? Reflecting on this, Husserl began turning his phenomenology into a branch of ‘idealism’ — the philosophical tradition which denied external reality and defined everything as a kind of private hallucination.
  • For Sartre, if we try to shut ourselves up inside our own minds, ‘in a nice warm room with the shutters closed’, we cease to exist. We have no cosy home: being out on the dusty road is the very definition of what we are.
  • One might think that, if Heidegger had anything worth saying, he could have communicated it in ordinary language. The fact is that he does not want to be ordinary, and he may not even want to communicate in the usual sense. He wants to make the familiar obscure, and to vex us. George Steiner thought that Heidegger’s purpose was less to be understood than to be experienced through a ‘felt strangeness’.
  • He takes Dasein in its most ordinary moments, then talks about it in the most innovative way he can. For Heidegger, Dasein’s everyday Being is right here: it is Being-in-the-world, or In-der-Welt-sein. The main feature of Dasein’s everyday Being-in-the-world right here is that it is usually busy doing something.
  • Thus, for Heidegger, all Being-in-the-world is also a ‘Being-with’ or Mitsein. We cohabit with others in a ‘with-world’, or Mitwelt. The old philosophical problem of how we prove the existence of other minds has now vanished. Dasein swims in the with-world long before it wonders about other minds.
  • Sometimes the best-educated people were those least inclined to take the Nazis seriously, dismissing them as too absurd to last. Karl Jaspers was one of those who made this mistake, as he later recalled, and Beauvoir observed similar dismissive attitudes among the French students in Berlin.
  • In any case, most of those who disagreed with Hitler’s ideology soon learned to keep their view to themselves. If a Nazi parade passed on the street, they would either slip out of view or give the obligatory salute like everyone else, telling themselves that the gesture meant nothing if they did not believe in it. As the psychologist Bruno Bettelheim later wrote of this period, few people will risk their life for such a small thing as raising an arm — yet that is how one’s powers of resistance are eroded away, and eventually one’s responsibility and integrity go with them.
  • for Arendt, if you do not respond adequately when the times demand it, you show a lack of imagination and attention that is as dangerous as deliberately committing an abuse. It amounts to disobeying the one command she had absorbed from Heidegger in those Marburg days: Think!
  • ‘Everything takes place under a kind of anaesthesia. Objectively dreadful events produce a thin, puny emotional response. Murders are committed like schoolboy pranks. Humiliation and moral decay are accepted like minor incidents.’ Haffner thought modernity itself was partly to blame: people had become yoked to their habits and to mass media, forgetting to stop and think, or to disrupt their routines long enough to question what was going on.
  • Heidegger’s former lover and student Hannah Arendt would argue, in her 1951 study The Origins of Totalitarianism, that totalitarian movements thrived at least partly because of this fragmentation in modern lives, which made people more vulnerable to being swept away by demagogues. Elsewhere, she coined the phrase ‘the banality of evil’ to describe the most extreme failures of personal moral awareness.
  • His communicative ideal fed into a whole theory of history: he traced all civilisation to an ‘Axial Period’ in the fifth century BC, during which philosophy and culture exploded simultaneously in Europe, the Middle East and Asia, as though a great bubble of minds had erupted from the earth’s surface. ‘True philosophy needs communion to come into existence,’ he wrote, and added, ‘Uncommunicativeness in a philosopher is virtually a criterion of the untruth of his thinking.’
  • The idea of being called to authenticity became a major theme in later existentialism, the call being interpreted as saying something like ‘Be yourself!’, as opposed to being phony. For Heidegger, the call is more fundamental than that. It is a call to take up a self that you didn’t know you had: to wake up to your Being. Moreover, it is a call to action. It requires you to do something: to take a decision of some sort.
  • Being and Time contained at least one big idea that should have been of use in resisting totalitarianism. Dasein, Heidegger wrote there, tends to fall under the sway of something called das Man or ‘the they’ — an impersonal entity that robs us of the freedom to think for ourselves. To live authentically requires resisting or outwitting this influence, but this is not easy because das Man is so nebulous. Man in German does not mean ‘man’ as in English (that’s der Mann), but a neutral abstraction, something like ‘one’ in the English phrase ‘one doesn’t do that’,
  • for Heidegger, das Man is me. It is everywhere and nowhere; it is nothing definite, but each of us is it. As with Being, it is so ubiquitous that it is difficult to see. If I am not careful, however, das Man takes over the important decisions that should be my own. It drains away my responsibility or ‘answerability’. As Arendt might put it, we slip into banality, failing to think.
  • Jaspers focused on what he called Grenzsituationen — border situations, or limit situations. These are the moments when one finds oneself constrained or boxed in by what is happening, but at the same time pushed by these events towards the limits or outer edge of normal experience. For example, you might have to make a life-or-death choice, or something might remind you suddenly of your mortality,
  • Jaspers’ interest in border situations probably had much to do with his own early confrontation with mortality. From childhood, he had suffered from a heart condition so severe that he always expected to die at any moment. He also had emphysema, which forced him to speak slowly, taking long pauses to catch his breath. Both illnesses meant that he had to budget his energies with care in order to get his work done without endangering his life.
  • If I am to resist das Man, I must become answerable to the call of my ‘voice of conscience’. This call does not come from God, as a traditional Christian definition of the voice of conscience might suppose. It comes from a truly existentialist source: my own authentic self. Alas, this voice is one I do not recognise and may not hear, because it is not the voice of my habitual ‘they-self’. It is an alien or uncanny version of my usual voice. I am familiar with my they-self, but not with my unalienated voice — so, in a weird twist, my real voice is the one that sounds strangest to me.
  • Marcel developed a strongly theological branch of existentialism. His faith distanced him from both Sartre and Heidegger, but he shared a sense of how history makes demands on individuals. In his essay ‘On the Ontological Mystery’, written in 1932 and published in the fateful year of 1933, Marcel wrote of the human tendency to become stuck in habits, received ideas, and a narrow-minded attachment to possessions and familiar scenes. Instead, he urged his readers to develop a capacity for remaining ‘available’ to situations as they arise. Similar ideas of disponibilité or availability had been explored by other writers,
  • Marcel made it his central existential imperative. He was aware of how rare and difficult it was. Most people fall into what he calls ‘crispation’: a tensed, encrusted shape in life — ‘as though each one of us secreted a kind of shell which gradually hardened and imprisoned him’.
  • Bettelheim later observed that, under Nazism, only a few people realised at once that life could not continue unaltered: these were the ones who got away quickly. Bettelheim himself was not among them. Caught in Austria when Hitler annexed it, he was sent first to Dachau and then to Buchenwald, but was then released in a mass amnesty to celebrate Hitler’s birthday in 1939 — an extraordinary reprieve, after which he left at once for America.
  • we are used to reading philosophy as offering a universal message for all times and places — or at least as aiming to do so. But Heidegger disliked the notion of universal truths or universal humanity, which he considered a fantasy. For him, Dasein is not defined by shared faculties of reason and understanding, as the Enlightenment philosophers thought. Still less is it defined by any kind of transcendent eternal soul, as in religious tradition. We do not exist on a higher, eternal plane at all. Dasein’s Being is local: it has a historical situation, and is constituted in time and place.
  • For Marcel, learning to stay open to reality in this way is the philosopher’s prime job. Everyone can do it, but the philosopher is the one who is called on above all to stay awake, so as to be the first to sound the alarm if something seems wrong.
  • Second, it also means understanding that we are historical beings, and grasping the demands our particular historical situation is making on us. In what Heidegger calls ‘anticipatory resoluteness’, Dasein discovers ‘that its uttermost possibility lies in giving itself up’. At that moment, through Being-towards-death and resoluteness in facing up to one’s time, one is freed from the they-self and attains one’s true, authentic self.
  • If we are temporal beings by our very nature, then authentic existence means accepting, first, that we are finite and mortal. We will die: this all-important realisation is what Heidegger calls authentic ‘Being-towards-Death’, and it is fundamental to his philosophy.
  • Hannah Arendt, instead, left early on: she had the benefit of a powerful warning. Just after the Nazi takeover, in spring 1933, she had been arrested while researching materials on anti-Semitism for the German Zionist Organisation at Berlin’s Prussian State Library. Her apartment was searched; both she and her mother were locked up briefly, then released. They fled, without stopping to arrange travel documents. They crossed to Czechoslovakia (then still safe) by a method that sounds almost too fabulous to be true: a sympathetic German family on the border had a house with its front door in Germany and its back door in Czechoslovakia. The family would invite people for dinner, then let them leave through the back door at night.
  • As Sartre argued in his 1943 review of The Stranger, basic phenomenological principles show that experience comes to us already charged with significance. A piano sonata is a melancholy evocation of longing. If I watch a soccer match, I see it as a soccer match, not as a meaningless scene in which a number of people run around taking turns to apply their lower limbs to a spherical object. If the latter is what I’m seeing, then I am not watching some more essential, truer version of soccer; I am failing to watch it properly as soccer at all.
  • Much as they liked Camus personally, neither Sartre nor Beauvoir accepted his vision of absurdity. For them, life is not absurd, even when viewed on a cosmic scale, and nothing can be gained by saying it is. Life for them is full of real meaning, although that meaning emerges differently for each of us.
  • For Sartre, we show bad faith whenever we portray ourselves as passive creations of our race, class, job, history, nation, family, heredity, childhood influences, events, or even hidden drives in our subconscious which we claim are out of our control. It is not that such factors are unimportant: class and race, in particular, he acknowledged as powerful forces in people’s lives, and Simone de Beauvoir would soon add gender to that list.
  • Sartre takes his argument to an extreme point by asserting that even war, imprisonment or the prospect of imminent death cannot take away my existential freedom. They form part of my ‘situation’, and this may be an extreme and intolerable situation, but it still provides only a context for whatever I choose to do next. If I am about to die, I can decide how to face that death. Sartre here resurrects the ancient Stoic idea that I may not choose what happens to me, but I can choose what to make of it, spiritually speaking.
  • But the Stoics cultivated indifference in the face of terrible events, whereas Sartre thought we should remain passionately, even furiously engaged with what happens to us and with what we can achieve. We should not expect freedom to be anything less than fiendishly difficult.
  • Freedom does not mean entirely unconstrained movement, and it certainly does not mean acting randomly. We often mistake the very things that enable us to be free — context, meaning, facticity, situation, a general direction in our lives — for things that define us and take away our freedom. It is only with all of these that we can be free in a real sense.
  • Nor did he mean that privileged groups have the right to pontificate to the poor and downtrodden about the need to ‘take responsibility’ for themselves. That would be a grotesque misreading of Sartre’s point, since his sympathy in any encounter always lay with the more oppressed side. But for each of us — for me — to be in good faith means not making excuses for myself.
  • Camus’ novel gives us a deliberately understated vision of heroism and decisive action compared to those of Sartre and Beauvoir. One can only do so much. It can look like defeatism, but it shows a more realistic perception of what it takes to actually accomplish difficult tasks like liberating one’s country.
  • Camus just kept returning to his core principle: no torture, no killing — at least not with state approval. Beauvoir and Sartre believed they were taking a more subtle and more realistic view. If asked why a couple of innocuous philosophers had suddenly become so harsh, they would have said it was because the war had changed them in profound ways. It had shown them that one’s duties to humanity could be more complicated than they seemed. ‘The war really divided my life in two,’ Sartre said later.
  • Poets and artists ‘let things be’, but they also let things come out and show themselves. They help to ease things into ‘unconcealment’ (Unverborgenheit), which is Heidegger’s rendition of the Greek term alētheia, usually translated as ‘truth’. This is a deeper kind of truth than the mere correspondence of a statement to reality, as when we say ‘The cat is on the mat’ and point to a mat with a cat on it. Long before we can do this, both cat and mat must ‘stand forth out of concealedness’. They must un-hide themselves.
  • Heidegger does not use the word ‘consciousness’ here because — as with his earlier work — he is trying to make us think in a radically different way about ourselves. We are not to think of the mind as an empty cavern, or as a container filled with representations of things. We are not even supposed to think of it as firing off arrows of intentional ‘aboutness’, as in the earlier phenomenology of Brentano. Instead, Heidegger draws us into the depths of his Schwarzwald, and asks us to imagine a gap with sunlight filtering in. We remain in the forest, but we provide a relatively open spot where other beings can bask for a moment. If we did not do this, everything would remain in the thickets, hidden even to itself.
  • The astronomer Carl Sagan began his 1980 television series Cosmos by saying that human beings, though made of the same stuff as the stars, are conscious and are therefore ‘a way for the cosmos to know itself’. Merleau-Ponty similarly quoted his favourite painter Cézanne as saying, ‘The landscape thinks itself in me, and I am its consciousness.’ This is something like what Heidegger thinks humanity contributes to the earth. We are not made of spiritual nothingness; we are part of Being, but we also bring something unique with us. It is not much: a little open space, perhaps with a path and a bench like the one the young Heidegger used to sit on to do his homework. But through us, the miracle occurs.
  • Beauty aside, Heidegger’s late writing can also be troubling, with its increasingly mystical notion of what it is to be human. If one speaks of a human being mainly as an open space or a clearing, or a means of ‘letting beings be’ and dwelling poetically on the earth, then one doesn’t seem to be talking about any recognisable person. The old Dasein has become less human than ever. It is now a forestry feature.
  • Even today, Jaspers, the dedicated communicator, is far less widely read than Heidegger, who has influenced architects, social theorists, critics, psychologists, artists, film-makers, environmental activists, and innumerable students and enthusiasts — including the later deconstructionist and post-structuralist schools, which took their starting point from his late thinking. Having spent the late 1940s as an outsider and then been rehabilitated, Heidegger became the overwhelming presence in university philosophy all over the European continent from then on.
  • As Levinas reflected on this experience, it helped to lead him to a philosophy that was essentially ethical, rather than ontological like Heidegger’s. He developed his ideas from the work of Jewish theologian Martin Buber, whose I and Thou in 1923 had distinguished between my relationship with an impersonal ‘it’ or ‘them’, and the direct personal encounter I have with a ‘you’. Levinas took it further: when I encounter you, we normally meet face-to-face, and it is through your face that you, as another person, can make ethical demands on me. This is very different from Heidegger’s Mitsein or Being-with, which suggests a group of people standing alongside one another, shoulder to shoulder as if in solidarity — perhaps as a unified nation or Volk.
  • For Levinas, we literally face each other, one individual at a time, and that relationship becomes one of communication and moral expectation. We do not merge; we respond to one another. Instead of being co-opted into playing some role in my personal drama of authenticity, you look me in the eyes — and you remain Other. You remain you.
  • This relationship is more fundamental than the self, more fundamental than consciousness, more fundamental even than Being — and it brings an unavoidable ethical obligation. Ever since Husserl, phenomenologists and existentialists had been trying to stretch the definition of existence to incorporate our social lives and relationships. Levinas did more: he turned philosophy around entirely so that these relationships were the foundation of our existence, not an extension of it.
  • Her last work, The Need for Roots, argues, among other things, that none of us has rights, but each one of us has a near-infinite degree of duty and obligation to the other. Whatever the underlying cause of her death — and anorexia nervosa seems to have been involved — no one could deny that she lived out her philosophy with total commitment. Of all the lives touched on in this book, hers is surely the most profound and challenging application of Iris Murdoch’s notion that a philosophy can be ‘inhabited’.
  • Other thinkers took radical ethical turns during the war years. The most extreme was Simone Weil, who actually tried to live by the principle of putting other people’s ethical demands first. Having returned to France after her travels through Germany in 1932, she had worked in a factory so as to experience the degrading nature of such work for herself. When France fell in 1940, her family fled to Marseilles (against her protests), and later to the US and to Britain. Even in exile, Weil made extraordinary sacrifices. If there were people in the world who could not sleep in a bed, she would not do so either, so she slept on the floor.
  • The mystery tradition had roots in Kierkegaard’s ‘leap of faith’. It owed much to the other great nineteenth-century mystic of the impossible, Dostoevsky, and to older theological notions. But it also grew from the protracted trauma that was the first half of the twentieth century. Since 1914, and especially since 1939, people in Europe and elsewhere had come to the realisation that we cannot fully know or trust ourselves; that we have no excuses or explanations for what we do — and yet that we must ground our existence and relationships on something firm, because otherwise we cannot survive.
  • One striking link between these radical ethical thinkers, all on the fringes of our main story, is that they had religious faith. They also granted a special role to the notion of ‘mystery’ — that which cannot be known, calculated or understood, especially when it concerns our relationships with each other. Heidegger was different from them, since he rejected the religion he grew up with and had no real interest in ethics — probably as a consequence of his having no real interest in the human.
  • Meanwhile, the Christian existentialist Gabriel Marcel was also still arguing, as he had since the 1930s, that ethics trumps everything else in philosophy and that our duty to each other is so great as to play the role of a transcendent ‘mystery’. He too had been led to this position partly by a wartime experience: during the First World War he had worked for the Red Cross’ Information Service, with the unenviable job of answering relatives’ inquiries about missing soldiers. Whenever news came, he passed it on, and usually it was not good. As Marcel later said, this task permanently inoculated him against warmongering rhetoric of any kind, and it made him aware of the power of what is unknown in our lives.
  • As the play’s much-quoted and frequently misunderstood final line has it: ‘Hell is other people.’ Sartre later explained that he did not mean to say that other people were hellish in general. He meant that after death we become frozen in their view, unable any longer to fend off their interpretation. In life, we can still do something to manage the impression we make; in death, this freedom goes and we are left entombed in other people’s memories and perceptions.
  • We have to do two near-impossible things at once: understand ourselves as limited by circumstances, and yet continue to pursue our projects as though we are truly in control. In Beauvoir’s view, existentialism is the philosophy that best enables us to do this, because it concerns itself so deeply with both freedom and contingency. It acknowledges the radical and terrifying scope of our freedom in life, but also the concrete influences that other philosophies tend to ignore: history, the body, social relationships and the environment.
  • The aspects of our existence that limit us, Merleau-Ponty says, are the very same ones that bind us to the world and give us scope for action and perception. They make us what we are. Sartre acknowledged the need for this trade-off, but he found it more painful to accept. Everything in him longed to be free of bonds, of impediments and limitations
  • Of course we have to learn this skill of interpreting and anticipating the world, and this happens in early childhood, which is why Merleau-Ponty thought child psychology was essential to philosophy. This is an extraordinary insight. Apart from Rousseau, very few philosophers before him had taken childhood seriously; most wrote as though all human experience were that of a fully conscious, rational, verbal adult who has been dropped into this world from the sky — perhaps by a stork.
  • For Merleau-Ponty, we cannot understand our experience if we don’t think of ourselves in part as overgrown babies. We fall for optical illusions because we once learned to see the world in terms of shapes, objects and things relevant to our own interests. Our first perceptions came to us in tandem with our first active experiments in observing the world and reaching out to explore it, and are still linked with those experiences.
  • Another factor in all of this, for Merleau-Ponty, is our social existence: we cannot thrive without others, or not for long, and we need this especially in early life. This makes solipsistic speculation about the reality of others ridiculous; we could never engage in such speculation if we hadn’t already been formed by them.
  • As Descartes could have said (but didn’t), ‘I think, therefore other people exist.’ We grow up with people playing with us, pointing things out, talking, listening, and getting us used to reading emotions and movements; this is how we become capable, reflective, smoothly integrated beings.
  • In general, Merleau-Ponty thinks human experience only makes sense if we abandon philosophy’s time-honoured habit of starting with a solitary, capsule-like, immobile adult self, isolated from its body and world, which must then be connected up again — adding each element around it as though adding clothing to a doll. Instead, for him, we slide from the womb to the birth canal to an equally close and total immersion in the world. That immersion continues as long as we live, although we may also cultivate the art of partially withdrawing from time to time when we want to think or daydream.
  • When he looks for his own metaphor to describe how he sees consciousness, he comes up with a beautiful one: consciousness, he suggests, is like a ‘fold’ in the world, as though someone had crumpled a piece of cloth to make a little nest or hollow. It stays for a while, before eventually being unfolded and smoothed away. There is something seductive, even erotic, in this idea of my conscious self as an improvised pouch in the cloth of the world. I still have my privacy — my withdrawing room. But I am part of the world’s fabric, and I remain formed out of it for as long as I am here.
  • By the time of these works, Merleau-Ponty is taking his desire to describe experience to the outer limits of what language can convey. Just as with the late Husserl or Heidegger, or Sartre in his Flaubert book, we see a philosopher venturing so far from shore that we can barely follow. Emmanuel Levinas would head out to the fringes too, eventually becoming incomprehensible to all but his most patient initiates.
  • Sartre once remarked — speaking of a disagreement they had about Husserl in 1941 — that ‘we discovered, astounded, that our conflicts had, at times, stemmed from our childhood, or went back to the elementary differences of our two organisms’. Merleau-Ponty also said in an interview that Sartre’s work seemed strange to him, not because of philosophical differences, but because of a certain ‘register of feeling’, especially in Nausea, that he could not share. Their difference was one of temperament and of the whole way the world presented itself to them.
  • The two also differed in their purpose. When Sartre writes about the body or other aspects of experience, he generally does it in order to make a different point. He expertly evokes the grace of his café waiter, gliding between the tables, bending at an angle just so, steering the drink-laden tray through the air on the tips of his fingers — but he does it all in order to illustrate his ideas about bad faith. When Merleau-Ponty writes about skilled and graceful movement, the movement itself is his point. This is the thing he wants to understand.
  • We can never move definitively from ignorance to certainty, for the thread of the inquiry will constantly lead us back to ignorance again. This is the most attractive description of philosophy I’ve ever read, and the best argument for why it is worth doing, even (or especially) when it takes us no distance at all from our starting point.
  • By prioritising perception, the body, social life and childhood development, Merleau-Ponty gathered up philosophy’s far-flung outsider subjects and brought them in to occupy the centre of his thought.
  • In his inaugural lecture at the Collège de France on 15 January 1953, published as In Praise of Philosophy, he said that philosophers should concern themselves above all with whatever is ambiguous in our experience. At the same time, they should think clearly about these ambiguities, using reason and science. Thus, he said, ‘The philosopher is marked by the distinguishing trait that he possesses inseparably the taste for evidence and the feeling for ambiguity.’ A constant movement is required between these two
  • As Sartre wrote in response to Hiroshima, humanity had now gained the power to wipe itself out, and must decide every single day that it wanted to live. Camus also wrote that humanity faced the task of choosing between collective suicide and a more intelligent use of its technology — ‘between hell and reason’. After 1945, there seemed little reason to trust in humanity’s ability to choose well.
  • Merleau-Ponty observed in a lecture of 1951 that, more than any previous century, the twentieth century had reminded people how ‘contingent’ their lives were — how at the mercy of historical events and other changes that they could not control. This feeling went on long after the war ended. After the A-bombs were dropped on Hiroshima and Nagasaki, many feared that a Third World War would not be long in coming, this time between the Soviet Union and the United States.
Javier E

Philosophy isn't dead yet | Raymond Tallis | Comment is free | The Guardian - 1 views

  • Fundamental physics is in a metaphysical mess and needs help. The attempt to reconcile its two big theories, general relativity and quantum mechanics, has stalled for nearly 40 years. Endeavours to unite them, such as string theory, are mathematically ingenious but incomprehensible even to many who work with them. This is well known.
  • A better-kept secret is that at the heart of quantum mechanics is a disturbing paradox – the so-called measurement problem, arising ultimately out of the Uncertainty Principle – which apparently demonstrates that the very measurements that have established and confirmed quantum theory should be impossible. Oxford philosopher of physics David Wallace has argued that this threatens to make quantum mechanics incoherent, and that the incoherence can be remedied only by vastly multiplying worlds.
  • there is the failure of physics to accommodate conscious beings. The attempt to fit consciousness into the material world, usually by identifying it with activity in the brain, has failed dismally, if only because there is no way of accounting for the fact that certain nerve impulses are supposed to be conscious (of themselves or of the world) while the overwhelming majority (physically essentially the same) are not. In short, physics does not allow for the strange fact that matter reveals itself to material objects (such as physicists).
  • then there is the mishandling of time. The physicist Lee Smolin's recent book, Time Reborn, links the crisis in physics with its failure to acknowledge the fundamental reality of time. Physics is predisposed to lose time because its mathematical gaze freezes change. Tensed time, the difference between a remembered or regretted past and an anticipated or feared future, is particularly elusive. This worried Einstein: in a famous conversation, he mourned the fact that the present tense, "now", lay "just outside of the realm of science".
  • Recent attempts to explain how the universe came out of nothing, which rely on questionable notions such as spontaneous fluctuations in a quantum vacuum, the notion of gravity as negative energy, and the inexplicable free gift of the laws of nature waiting in the wings for the moment of creation, reveal conceptual confusion beneath mathematical sophistication. They demonstrate the urgent need for a radical re-examination of the invisible frameworks within which scientific investigations are conducted.
  • we should reflect on how a scientific image of the world that relies on up to 10 dimensions of space and rests on ideas, such as fundamental particles, that have neither identity nor location, connects with our everyday experience. This should open up larger questions, such as the extent to which mathematical portraits capture the reality of our world – and what we mean by "reality".
Javier E

Praising Andy Warhol - NYTimes.com - 1 views

  • Peter Schjeldahl, for example, calls Warhol a “genius” and a “great artist” and even says that “the gold standard of Warhol exposes every inflated value in other currencies.”
  •   If Warhol is a great artist and these boxes are among his most important works, what am I missing?
  • this explanation of Warhol’s greatness, contrary to the first one, makes art appreciation once again a matter of esoteric knowledge and taste, now focused on subtle philosophical puzzles about the nature of art.
  • Warhol’s boxes are praised for subverting the distinction between mundane objects of everyday life and “art” in a museum.  As a result, we can enjoy and appreciate the things that make up our everyday life just as much as what we see in museums (and with far less effort).  Whereas the joys of traditional art typically require an initiation into an esoteric world of historical information and refined taste, Warhol’s “Pop Art” reveals the joys of what we all readily understand and appreciate.  As Danto put it, “Warhol’s intuition was that nothing an artist could do would give us more of what art sought than reality already gave us.”
  • Warhol’s work is also praised for posing a crucial philosophical question about art.  As Danto puts it: “Given two objects that look exactly alike, how is it possible for one of them to be a work of art and the other just an ordinary object?”  Answering this question requires realizing that there are no perceptual qualities that make something a work of art.  This in turn implies that anything, no matter how it looks, can be a work of art.
  • According to Danto, whether an object is a work of art depends on its relation to an “art world”:  “an atmosphere of artistic theory, a knowledge of the history of art” that exists at a particular time.
  • Appreciations of Warhol’s boxes typically emphasize their effects rather than their appearance.  These appreciations take two quite different forms.
  • it was Danto, not Warhol, who provided the intellectual/aesthetic excitement by formulating and developing a brilliant answer to the question.  To the extent that the philosophical question had artistic value in the context of the contemporary artworld,  Danto was more the artist than Warhol.
  • I agree that Warhol — along with many other artists from the 1950s on — opened up new ways of making art that traditional “high art” had excluded.  But new modes of artistic creation — commercial design techniques, performances, installations, conceptual art — do not guarantee a new kind or a higher quality of aesthetic experience. 
  • anything can be presented as a work of art.   But it does not follow that anything can produce a satisfying aesthetic experience.  The great works of the tradition do not circumscribe the sorts of things that can be art, but they are exemplars of what we expect a work of art to do to us.  (This is the sense in which, according to Kant, originally beautiful works of art are exemplary, yet without providing rules for further such works of art.)
  • Praise of Andy Warhol often emphasizes the new possibilities of artistic creation his work opened up.  That would make his work important in the history of art and for that reason alone of considerable interest.
  • as Jerrold Levinson and others have pointed out, a work can be an important artistic achievement without being an important aesthetic achievement.  This, I suggest, is how we should think about Warhol’s Brillo boxes.
Javier E

The Coming Software Apocalypse - The Atlantic - 1 views

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
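  • The failure mode described above — software doing exactly what it was told, where what it was told was wrong — can be sketched in a few lines. This is a hypothetical, highly simplified illustration (the counter, its ceiling, and all names are invented; Intrado's actual code is not public): a dispatcher that assigns call numbers from a counter with a hard-coded threshold behaves perfectly until the day the threshold is reached.

```python
# Hypothetical sketch of a "correct code, wrong specification" failure.
# The ceiling and all names are invented for illustration only.

class Dispatcher:
    MAX_TICKETS = 1_000_000  # arbitrary ceiling chosen at design time

    def __init__(self):
        self.next_ticket = 0

    def route_call(self, call):
        # The code faithfully enforces the threshold it was given.
        if self.next_ticket >= self.MAX_TICKETS:
            return None  # silently refuses the call: "working as specified"
        self.next_ticket += 1
        return (self.next_ticket, call)

d = Dispatcher()
assert d.route_call("first call") == (1, "first call")  # works for years

d.next_ticket = Dispatcher.MAX_TICKETS  # the counter eventually arrives here
assert d.route_call("911 call") is None  # now every call is dropped, with no error raised
```

Nothing "breaks" in the mechanical sense: there is no crash and no exception, which is exactly why such failures evade the rivets-and-redundancy model of engineering reliability.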
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class at the California Institute of Technology for electrical engineering,
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • software experts spend 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it
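  • A “single bit flip” is meant literally: one binary digit in memory changes, from a hardware fault, radiation, or corruption. A toy illustration (the flag and its meaning are invented; real engine-control memory layouts are far more complex):

```python
# Toy illustration of single-bit memory corruption.
# "throttle_open" and its encoding are invented for this sketch.

def flip_bit(value, bit):
    """Flip one bit of an integer, as a stray hardware fault might."""
    return value ^ (1 << bit)

throttle_open = 0  # 0 = closed, stored as an ordinary integer flag

corrupted = flip_bit(throttle_open, 0)  # a single bit changes: 0 -> 1
assert corrupted == 1  # the flag now reads "open", though no code ever set it
```

The point of the investigators' finding was that Toyota's fail-safe logic did not account for states like this one, even though the state machine made them reachable.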
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop.
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.
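  • The elevator diagram described above can be approximated as an explicit transition table. This is a rough, hand-rolled sketch of the idea, not the output of any actual model-based tool; the states come from the text, the event names are ours:

```python
# Minimal state machine mirroring the elevator example.
# States are from the article; event names are invented for this sketch.
TRANSITIONS = {
    ("door_open",   "close_door"): "door_closed",
    ("door_closed", "open_door"):  "door_open",
    ("door_closed", "start"):      "moving",
    ("moving",      "stop"):       "door_closed",
}

def step(state, event):
    """Return the next state, or raise if the event is illegal in this state."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"cannot {event!r} while {state!r}")

assert step("door_open", "close_door") == "door_closed"
assert step("door_closed", "start") == "moving"

# The rules are visible at a glance: there is no entry that moves
# the elevator while the door is open.
try:
    step("door_open", "start")
except ValueError:
    pass  # illegal transition correctly rejected
```

The table is the whole program: adding or auditing a rule means editing one line, which is the "thinking about the rules, not the translation" that model-based design promises.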
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • This practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
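Newcombe’s point about intuition failing at scale can be made concrete with back-of-envelope arithmetic: a combination of events with a one-in-a-billion chance per request sounds impossible, but at a million requests per second it is expected within minutes. The figures below are illustrative, not from the paper.

```python
# Hypothetical numbers, chosen only to show the scale effect.
p = 1e-9          # probability of the "extremely rare" combination, per request
rate = 1_000_000  # requests handled per second

expected_seconds = 1 / (p * rate)  # mean time until the first occurrence
# ~1000 seconds: the "impossible" event shows up roughly every 17 minutes.
```

At that traffic level the event is not a theoretical curiosity but a routine operational fact, which is why subtle design errors at this scale get found by machines, not by intuition.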
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy
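The “completely verified” claim rests on exhaustive state-space exploration: TLA+’s model checker enumerates every state the specification can reach and checks the stated constraints in each one. The idea can be sketched (this is not TLA+, just the underlying technique, with a made-up two-process counter model) as a breadth-first search over reachable states:

```python
from collections import deque

# Toy spec: two processes each increment a shared counter at most twice.
# State: (counter, steps_by_p1, steps_by_p2).
# Invariant to check: counter == steps_by_p1 + steps_by_p2.

def successors(state):
    counter, s1, s2 = state
    nxt = []
    if s1 < 2:
        nxt.append((counter + 1, s1 + 1, s2))
    if s2 < 2:
        nxt.append((counter + 1, s1, s2 + 1))
    return nxt

def check(init, invariant):
    """Breadth-first search of EVERY reachable state; return a counterexample
    state if the invariant is ever violated, else None."""
    seen, queue = {init}, deque([init])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state  # counterexample found
        for s in successors(state):
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return None  # invariant holds in all reachable states

violation = check((0, 0, 0), lambda s: s[0] == s[1] + s[2])
```

Unlike testing, which samples a few executions, this visits every reachable state, which is what makes the verdict exhaustive rather than merely thorough; real models are far larger, and taming that blow-up is much of what a checker like TLC does.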
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,” he says.
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
Javier E

You Think With the World, Not Just Your Brain - The Atlantic - 2 views

  • embodied or extended cognition: broadly, the theory that what we think of as brain processes can take place outside of the brain.
  • The octopus, for instance, has a bizarre and miraculous mind, sometimes inside its brain, sometimes extending beyond it in sucker-tipped trails. Neurons are spread throughout its body; the creature has more of them in its arms than in its brain itself. It’s possible that each arm might be, to some extent, an independently thinking creature, all of which are collapsed into an octopean superconsciousness in times of danger
  • Embodied cognition, though, tells us that we’re all more octopus-like than we realize. Our minds are not like the floating conceptual “I” imagined by Descartes. We’re always thinking with, and inseparable from, our bodies.
  • The body codes how the brain works, more than the brain controls the body. When we walk—whether taking a pleasant afternoon stroll, or storming off in tears, or trying to sneak into a stranger’s house late at night, with intentions that seem to have exploded into our minds from some distant elsewhere—the brain might be choosing where each foot lands, but the way in which it does so is always constrained by the shape of our legs
  • The way in which the brain approaches the task of walking is already coded by the physical layout of the body—and as such, wouldn’t it make sense to think of the body as being part of our decision-making apparatus? The mind is not simply the brain, as a generation of biological reductionists, clearing out the old wreckage of what had once been the soul, once insisted. It’s not a kind of software being run on the logical-processing unit of the brain. It’s bigger, and richer, and grosser, in every sense. It has joints and sinews. The rarefied rational mind sweats and shits; this body, this mound of eventually rotting flesh, is really you.
  • That’s embodied cognition.
  • Extended cognition is stranger.
  • The mind, they argue, has no reason to stop at the edges of the body, hemmed in by skin, flapping open and closed with mouths and anuses.
  • When we jot something down—a shopping list, maybe—on a piece of paper, aren’t we in effect remembering it outside our heads? Most of all, isn’t language itself something that’s always external to the individual mind?
  • Language sits hazy in the world, a symbolic and intersubjective ether, but at the same time it forms the substance of our thought and the structure of our understanding. Isn’t language thinking for us?
  • Writing, for Plato, is a pharmakon, a “remedy” for forgetfulness, but if taken in too strong a dose it becomes a poison: A person no longer remembers things for themselves; it’s the text that remembers, with an unholy autonomy. The same criticisms are now commonly made of smartphones. Not much changes.