TOK Friends: group items tagged "science community"


Grace Carey

News at Tipitaka Network

  •  
    Finding some interesting and very TOK-relevant articles while I'm working on my religious investigation into the science behind Buddhist beliefs. I found this one particularly intriguing, as it discusses why the theory of reincarnation is scientifically sound and why scientists are often narrow-minded and overly trusted. "I was once told by a Buddhist G.P. that, on his first day at a medical school in Sydney, the famous Professor, head of the Medical School, began his welcoming address by stating 'Half of what we are going to teach you in the next few years is wrong. Our problem is that we do not know which half it is!' Those were the words of a real scientist." "Logic is only as reliable as the assumptions on which it is based." "Objective experience is that which is free from all bias. In Buddhism, the three types of bias are desire, ill-will and skeptical doubt. Desire makes one see only what one wants to see; it bends the truth to fit one's preferences." "Reality, according to pure science, does not consist of well-ordered matter with precise masses, energies and positions in space, all just waiting to be measured. Reality is the broadest of smudges of all possibilities, only some being more probable than others." "At a recent seminar on Science and Religion, at which I was a speaker, a Catholic in the audience bravely announced that whenever she looks through a telescope at the stars, she feels uncomfortable because her religion is threatened. I commented that whenever a scientist looks the other way round through a telescope, to observe the one who is watching, then they feel uncomfortable because their science is threatened by what is doing the seeing!"
Emily Horwitz

Surrounded by Humans, Elephant in South Korea Learns to 'Speak' - NYTimes.com

  • At the Everland Zoo in South Korea, there is a young male elephant that can speak Korean.
  • In fact, Koshik seems to imitate the pitch and timbre of human speech, and of his trainers in particular.
  • started imitating human speech out of a need to socialize
  • “He’s basically using this as a social function, but not really to communicate with the keepers,” Dr. Stoeger said.
Javier E

The Selfish Gene turns 40 | Science | The Guardian

  • The idea was this: genes strive for immortality, and individuals, families, and species are merely vehicles in that quest. The behaviour of all living things is in service of their genes; hence, metaphorically, they are selfish.
  • Before this, it had been proposed that natural selection was honing the behaviour of living things to promote the continuance through time of the individual creature, or family, or group or species. But in fact, Dawkins said, it was the gene itself that was trying to survive, and it just so happened that the best way for it to survive was in concert with other genes in the impermanent husk of an individual
  • This gene-centric view of evolution also began to explain one of the oddities of life on Earth – the behaviour of social insects. What is the point of a drone bee, doomed to remain childless and in the service of a totalitarian queen? Suddenly it made sense that, with the gene itself steering evolution, the fact that the drone shared its DNA with the queen meant that its servitude guarantees not the individual’s survival, but the endurance of the genes they share.
  • the subject is taught bafflingly minimally and late in the curriculum even today; evolution by natural selection is crucial to every aspect of the living world. In the words of the Russian scientist Theodosius Dobzhansky: “Nothing in biology makes sense except in the light of evolution.”
  • his true legacy is The Selfish Gene and its profound effect on multiple generations of scientists and lay readers. In a sense, The Selfish Gene and Dawkins himself are bridges, both intellectually and chronologically, between the titans of mid-century biology – Ronald Fisher, Trivers, Hamilton, Maynard Smith and Williams – and our era of the genome, in which the interrogation of DNA dominates the study of evolution.
  • Genes aren’t what they used to be either. In 1976 they were simply stretches of DNA that encoded proteins. We now know about genes made of DNA’s cousin, RNA; we’ve discovered genes that hop from genome to genome
  • Since 1976, our understanding of why life is the way it is has blossomed and changed. Once the gene became the dominant idea in biology in the 1990s there followed a technological goldrush – the Human Genome Project – to find them all.
  • None of the complications of modern genomes erodes the central premise of the selfish gene.
  • Much of the enmity stems from people misunderstanding that selfishness is being used as a metaphor. The irony of these attacks is that the selfish gene metaphor actually explains altruism. We help others who are not directly related to us because we share similar versions of genes with them.
  • In the scientific community, the chief objection maintains that natural selection can operate at the level of a group of animals, not solely on genes or even individuals
  • To my mind, and that of the majority of evolutionary biologists, the gene-centric view of evolution always emerges intact.
  • the premise remains exciting that a gene’s only desire is to reproduce itself, and that the complexity of genomes makes that reproduction more efficient.
dicindioha

A New Form of Stem-Cell Engineering Raises Ethical Questions - The New York Times

  • researchers at Harvard Medical School said it was time to ponder a startling new prospect: synthetic embryos.
  • They are starting to assemble stem cells that can organize themselves into embryolike structures.
  • But in the future, they may develop into far more complex forms, the researchers said, such as a beating human heart connected to a rudimentary brain, all created from stem cells
  • Whatever else, it is sure to unnerve most of us.
  • Scientists, for example, should never create a SHEEF (synthetic human entity with embryolike features) that feels pain.
  • Scientists began grappling with the ethics of lab-raised embryos more than four decades ago.
  • In 1979, a federal advisory board recommended that the cutoff should be 14 days.
  • The embryonic cells develop into three types, called germ layers. Each of those germ layers goes on to produce all the body’s tissues and organs.
  • This triggered communication by the cells, and they organized themselves into the arrangement found in an early mouse embryo.
  • Even if ethicists do manage to agree on certain limits, Paul S. Knoepfler, a stem cell biologist at the University of California, Davis, wondered how easy it would be for scientists to know if they had crossed them.
  • Spotting a primitive streak is easy. Determining whether a collection of neurons connected to other tissues in a dish can feel pain is not.
  •  
    Scientists wonder about the ethical response to their new idea and the possibility of synthetic embryos. These structures might be able to grow into forms that could help the human body, but at what point would they stop growing, and would they feel pain? Are we creating life... just to destroy it?
charlottedonoho

How have changes to publishing affected scientists? | Julie McDougall-Waters | Science ...

  • That was the purpose of a recent oral history event at the Royal Society, involving four senior scientists who began their careers in the 1960s and 1970s. Rather than simply reminiscing, they were asked to recall their publishing experiences in scientific periodicals over the last fifty years. How have things changed since they published their first paper?
  • It became clear that the hierarchy of journals has changed over the last fifty years, and the pressure to publish in those considered to have the highest impact has increased considerably, partly a result of the increased volume of data being produced and the need for readers to filter relevant information from the copious amounts of less pertinent stuff available.
  • What have also changed are the technologies available to write a paper. Frith related the process she went through in writing her first paper: “I wrote my papers by long hand and then typed them myself.” Writing a biological paper before computers is one thing, but Ashmore remembered the problems of producing mathematical formulae in a typed manuscript, explaining that “you wrote the paper and probably took it along to somebody to be typed… And then it came back with spaces where you had to write in the equations.”
  • Another change that interested the panellists was the increased number of collaborative and multiple authored papers now submitted to journals, which led them to think about the ethics of acknowledgement. In Meurig Thomas’s view the author is, simply, “the person that primarily thinks about the experiment, plans it, and writes it. I can sleep more comfortably at night this way. If I claim to be a senior author, I have to write it and I have to concoct what the experiment was, and defend it.” Chaloner suggested that authorship has grown “because of the pressure for people to have publications in their names”, with an “agreement to let you come onto this paper and I’ll get on yours next time”. Frith referred to this as “gaming”.
  • Despite all of the technological developments in the last fifty years, there has been no quick or easy response to questions over refereeing, and the event ended with the feeling that although there is no doubt technology has transformed the way science is communicated, it has not invariably simplified the process.
carolinewren

Whoops! A creationist museum supporter stumbled upon a major fossil find. - The Washing...

  • Adhering to the most extreme form of religious creationism, the exhibits "prove" that the Earth is only around 6,000 years old, and that humans and dinosaurs co-existed.
  • Unfortunately, Nernberg just dug up a 60-million-year-old fish
  • Local outlets report that the man is far from shaken by the bony fish, which he found while excavating a basement in Calgary.
  • He just doesn't believe they're that old. And he's quite the fossil lover.
  • “We all have the same evidence, and it’s just a matter of how you interpret it,”
  • “There’s no dates stamped on these things."
  • Just, you know, isotopic dating, basic geology, really shoddy stuff like that.
  • the science of dating fossils is not shaky -- at least not on the order of tens of millions of years of error -- so this fossil and the rocks around it really do give new earth creationism the boot.
  • But this can go down as one of the best examples ever of why it's downright impossible to convince someone who's "opposed" to evolution that it's a basic fact: If you think the very tenets of science are misguided, pretty much any evidence presented to you can be written off as fabricated or misinterpreted.
  • scientific community is thrilled and grateful for the find, and the University of Calgary will unveil the five fossils on Thursday.
  • It's an important point in Earth's evolutionary history, because new species were popping up all over to make up for the ecological niches dinos left behind.
  • Ironically, Nernberg's contributions at the Creation Science Museum are almost certainly what scientists have to thank for the find
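The "isotopic dating, basic geology" the article jokes about reduces to exponential-decay arithmetic: measure the ratio of daughter to parent isotope in a closed mineral system and invert the decay law. A minimal sketch, assuming a generic parent/daughter pair; the ratio and the potassium-40 half-life below are illustrative, not measurements from the Calgary find:

```python
# Minimal radiometric-dating sketch: a closed-system parent/daughter model.
# Real K-Ar dating adds a branching correction that is ignored here.
import math

def radiometric_age(daughter_to_parent_ratio, half_life_years):
    """Age from the daughter/parent isotope ratio D/P.

    Solves P(t) = P0 * (1/2)**(t / T_half) with D = P0 - P(t),
    giving t = (T_half / ln 2) * ln(1 + D/P).
    """
    return half_life_years / math.log(2) * math.log(1 + daughter_to_parent_ratio)

# Assumed inputs: daughter atoms ~3.3% of remaining parent atoms,
# dated against the 1.25-billion-year half-life of potassium-40.
age = radiometric_age(0.033, 1.25e9)
print(f"{age / 1e6:.0f} million years")  # ~59 million years
```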
Javier E

Lies, Damned Lies, and Medical Science - Magazine - The Atlantic

  • He and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies—conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain—is misleading, exaggerated, and often flat-out wrong. He charges that as much as 90 percent of the published medical information that doctors rely on is flawed. His work has been widely accepted by the medical community
  • for all his influence, he worries that the field of medical research is so pervasively flawed, and so riddled with conflicts of interest, that it might be chronically resistant to change—or even to publicly admitting that there’s a problem
  • he discovered that the range of errors being committed was astonishing: from what questions researchers posed, to how they set up the studies, to which patients they recruited for the studies, to which measurements they took, to how they analyzed the data, to how they presented their results, to how particular studies came to be published in medical journals
  • “The studies were biased,” he says. “Sometimes they were overtly biased. Sometimes it was difficult to see the bias, but it was there.” Researchers headed into their studies wanting certain results—and, lo and behold, they were getting them. We think of the scientific process as being objective, rigorous, and even ruthless in separating out what is true from what we merely wish to be true, but in fact it’s easy to manipulate results, even unintentionally or unconsciously. “At every step in the process, there is room to distort results, a way to make a stronger claim or to select what is going to be concluded,” says Ioannidis. “There is an intellectual conflict of interest that pressures researchers to find whatever it is that is most likely to get them funded.”
  • Ioannidis laid out a detailed mathematical proof that, assuming modest levels of researcher bias, typically imperfect research techniques, and the well-known tendency to focus on exciting rather than highly plausible theories, researchers will come up with wrong findings most of the time.
  • if you’re attracted to ideas that have a good chance of being wrong, and if you’re motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you’ll probably succeed in proving wrong theories right. His model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.
  • He zoomed in on 49 of the most highly regarded research findings in medicine over the previous 13 years, as judged by the science community’s two standard measures: the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals
  • Ioannidis was putting his contentions to the test not against run-of-the-mill research, or even merely well-accepted research, but against the absolute tip of the research pyramid. Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable.
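The "detailed mathematical proof" mentioned above is, at bottom, a positive-predictive-value calculation: given the prior odds that a tested hypothesis is true, the error rates of the study design, and a bias term, what fraction of claimed findings are real? A sketch of the formula from Ioannidis's 2005 paper, with parameter values that are illustrative assumptions rather than the article's figures:

```python
# Positive predictive value (PPV) of a claimed research finding, following
# Ioannidis, "Why Most Published Research Findings Are False" (2005).

def ppv(R, alpha=0.05, beta=0.20, u=0.0):
    """Probability that a statistically significant claim is true.

    R     -- prior odds that the tested relationship is real
    alpha -- type I error rate (false-positive threshold)
    beta  -- type II error rate (power = 1 - beta)
    u     -- bias: share of would-be negative results reported as positive
    """
    true_positives = (1 - beta) * R + u * beta * R
    all_positives = R + alpha - beta * R + u - u * alpha + u * beta * R
    return true_positives / all_positives

# Exploratory study of a long-shot hypothesis (1 in 10 true), modest bias:
print(ppv(R=0.10, u=0.20))  # ~0.26 -- roughly three in four claims wrong
# Well-powered trial of a plausible hypothesis, little bias:
print(ppv(R=0.50, u=0.05))  # ~0.81
```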
julia rhodes

In the Human Brain, Size Really Isn't Everything - NYTimes.com

  • There are many things that make humans a unique species, but a couple stand out. One is our mind, the other our brain.
  • The human mind can carry out cognitive tasks that other animals cannot, like using language, envisioning the distant future and inferring what other people are thinking.
  • Scientists have long suspected that our big brain and powerful mind are intimately connected. Starting about three million years ago, fossils of our ancient relatives record a huge increase in brain size. Once that cranial growth was underway, our forerunners started leaving behind signs of increasingly sophisticated minds, like stone tools and cave paintings.
  • In our smaller-brained ancestors, the researchers argue, neurons were tightly tethered in a relatively simple pattern of connections. When our ancestors’ brains expanded, those tethers ripped apart, enabling our neurons to form new circuits.
  • There are cortices for the other senses, too. The sensory cortices relay signals to another set of regions called motor cortices. The motor cortices send out commands. This circuit is good for controlling basic mammal behavior. “You experience something in the world and you respond to it,” Dr. Krienen said.
  • After mammals are born, their experiences continue to strengthen this wiring. As a mammal sees more of the world, for example, neurons in the visual cortex form more connections to the motor cortices, so that the bucket brigade moves faster and more efficiently.
  • Human brains are different. As they got bigger, their sensory and motor cortices barely expanded. Instead, it was the regions in between, known as the association cortices, that bloomed. Our association cortices are crucial for the kinds of thought that we humans excel at. Among other tasks, association cortices are crucial for making decisions, retrieving memories and reflecting on ourselves.
  • Association cortices are also unusual for their wiring. They are not connected in the relatively simple, bucket-brigade pattern found in other mammal brains. Instead, they link to one another with wild abandon. A map of association cortices looks less like an assembly line and more like the Internet, with each region linked to others near and far.
  • This new wiring may have been crucial to the evolution of the human mind. Our association cortices liberate us from the rapid responses of other mammal brains. These new brain regions can communicate without any input from the outside world, discovering new insights about our environment and ourselves.
Javier E

Can truth survive this president? An honest investigation. - The Washington Post

  • in the summer of 2002, long before “fake news” or “post-truth” infected the vernacular, one of President George W. Bush’s top advisers mocked a journalist for being part of the “reality-based community.” Seeking answers in reality was for suckers, the unnamed adviser explained. “We’re an empire now, and when we act, we create our own reality.”
  • This was the hubris and idealism of a post-Cold War, pre-Iraq War superpower: If you exert enough pressure, events will bend to your will.
  • the deceit emanating from the White House today is lazier, more cynical. It is not born of grand strategy or ideology; it is impulsive and self-serving. It is not arrogant, but shameless.
  • Bush wanted to remake the world. President Trump, by contrast, just wants to make it up as he goes along
  • Through all their debates over who is to blame for imperiling truth (whether Trump, postmodernism, social media or Fox News), as well as the consequences (invariably dire) and the solutions (usually vague), a few conclusions materialize, should you choose to believe them.
  • There is a pattern and logic behind the dishonesty of Trump and his surrogates; however, it’s less multidimensional chess than the simple subordination of reality to political and personal ambition
  • Trump’s untruth sells best precisely when feelings and instincts overpower facts, when America becomes a safe space for fabrication.
  • Rand Corp. scholars Jennifer Kavanagh and Michael D. Rich point to the Gilded Age, the Roaring Twenties and the rise of television in the mid-20th century as recent periods of what they call “Truth Decay” — marked by growing disagreement over facts and interpretation of data; a blurring of lines between opinion, fact and personal experience; and diminishing trust in once-respected sources of information.
  • In eras of truth decay, “competing narratives emerge, tribalism within the U.S. electorate increases, and political paralysis and dysfunction grow,”
  • Once you add the silos of social media as well as deeply polarized politics and deteriorating civic education, it becomes “nearly impossible to have the types of meaningful policy debates that form the foundation of democracy.”
  • To interpret our era’s debasement of language, Kakutani reflects perceptively on the World War II-era works of Victor Klemperer, who showed how the Nazis used “words as ‘tiny doses of arsenic’ to poison and subvert the German culture,” and of Stefan Zweig, whose memoir “The World of Yesterday” highlights how ordinary Germans failed to grasp the sudden erosion of their freedoms.
  • Kakutani calls out lefty academics who for decades preached postmodernism and social constructivism, which argued that truth is not universal but a reflection of relative power, structural forces and personal vantage points.
  • postmodernists rejected Enlightenment ideals as “vestiges of old patriarchal and imperialist thinking,” Kakutani writes, paving the way for today’s violence against fact in politics and science.
  • “dumbed-down corollaries” of postmodernist thought have been hijacked by Trump’s defenders, who use them to explain away his lies, inconsistencies and broken promises.
  • intelligent-design proponents and later climate deniers drew from postmodernism to undermine public perceptions of evolution and climate change. “Even if right-wing politicians and other science deniers were not reading Derrida and Foucault, the germ of the idea made its way to them: science does not have a monopoly on the truth,
  • McIntyre quotes at length from mea culpas by postmodernist and social constructivist writers agonizing over what their theories have wrought, shocked that conservatives would use them for nefarious purposes
  • pro-Trump troll and conspiracy theorist Mike Cernovich , who helped popularize the “Pizzagate” lie, has forthrightly cited his unlikely influences. “Look, I read postmodernist theory in college,” Cernovich told the New Yorker in 2016. “If everything is a narrative, then we need alternatives to the dominant narrative. I don’t seem like a guy who reads [Jacques] Lacan, do I?
  • When truth becomes malleable and contestable regardless of evidence, a mere tussle of manufactured narratives, it becomes less about conveying facts than about picking sides, particularly in politics.
  • In “On Truth,” Cambridge University philosopher Simon Blackburn writes that truth is attainable, if at all, “only at the vanishing end points of enquiry,” adding that, “instead of ‘facts first’ we may do better if we think of ‘enquiry first,’ with the notion of fact modestly waiting to be invited to the feast afterward.
  • He is concerned, but not overwhelmingly so, about the survival of truth under Trump. “Outside the fevered world of politics, truth has a secure enough foothold,” Blackburn writes. “Perjury is still a serious crime, and we still hope that our pilots and surgeons know their way about.
  • Kavanagh and Rich offer similar consolation: “Facts and data have become more important in most other fields, with political and civil discourse being striking exceptions. Thus, it is hard to argue that the world is truly ‘post-fact.’ ”
  • McIntyre argues persuasively that our methods of ascertaining truth — not just the facts themselves — are under attack, too, and that this assault is especially dangerous.
  • Ideologues don’t just disregard facts they disagree with, he explains, but willingly embrace any information, however dubious, that fits their agenda. “This is not the abandonment of facts, but a corruption of the process by which facts are credibly gathered and reliably used to shape one’s beliefs about reality. Indeed, the rejection of this undermines the idea that some things are true irrespective of how we feel about them.”
  • “It is hardly a depressing new phenomenon that people’s beliefs are capable of being moved by their hopes, grievances and fears,” Blackburn writes. “In order to move people, objective facts must become personal beliefs.” But it can’t work — or shouldn’t work — in reverse.
  • More than fearing a post-truth world, Blackburn is concerned by a “post-shame environment,” in which politicians easily brush off their open disregard for truth.
  • it is human nature to rationalize away the dissonance. “Why get upset by his lies, when all politicians lie?” Kakutani asks, distilling the mind-set. “Why get upset by his venality, when the law of the jungle rules?”
  • So any opposition is deemed a witch hunt, or fake news, rigged or just so unfair. Trump is not killing the truth. But he is vandalizing it, constantly and indiscriminately, diminishing its prestige and appeal, coaxing us to look away from it.
  • the collateral damage includes the American experiment.
  • “One of the most important ways to fight back against post-truth is to fight it within ourselves,” he writes, whatever our particular politics may be. “It is easy to identify a truth that someone else does not want to see. But how many of us are prepared to do this with our own beliefs? To doubt something that we want to believe, even though a little piece of us whispers that we do not have all the facts?”
Javier E

ROUGH TYPE | Nicholas Carr's blog

  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Social skills and relationships seem to suffer as well.
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience. To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
Javier E

Opinion | How Genetics Is Changing Our Understanding of 'Race' - The New York Times

  • In 1942, the anthropologist Ashley Montagu published “Man’s Most Dangerous Myth: The Fallacy of Race,” an influential book that argued that race is a social concept with no genetic basis.
  • Beginning in 1972, genetic findings began to be incorporated into this argument. That year, the geneticist Richard Lewontin published an important study of variation in protein types in blood. He grouped the human populations he analyzed into seven “races” — West Eurasians, Africans, East Asians, South Asians, Native Americans, Oceanians and Australians — and found that around 85 percent of variation in the protein types could be accounted for by variation within populations and “races,” and only 15 percent by variation across them. To the extent that there was variation among humans, he concluded, most of it was because of “differences between individuals.”
  • In this way, a consensus was established that among human populations there are no differences large enough to support the concept of “biological race.” Instead, it was argued, race is a “social construct,” a way of categorizing people that changes over time and across countries.
  • It is true that race is a social construct. It is also true, as Dr. Lewontin wrote, that human populations “are remarkably similar to each other” from a genetic point of view.
  • this consensus has morphed, seemingly without questioning, into an orthodoxy. The orthodoxy maintains that the average genetic differences among people grouped according to today’s racial terms are so trivial when it comes to any meaningful biological traits that those differences can be ignored.
  • With the help of these tools, we are learning that while race may be a social construct, differences in genetic ancestry that happen to correlate to many of today’s racial constructs are real.
  • I have deep sympathy for the concern that genetic discoveries could be misused to justify racism. But as a geneticist I also know that it is simply no longer possible to ignore average genetic differences among “races.”
  • Groundbreaking advances in DNA sequencing technology have been made over the last two decades
  • The orthodoxy goes further, holding that we should be anxious about any research into genetic differences among populations
  • You will sometimes hear that any biological differences among populations are likely to be small, because humans have diverged too recently from common ancestors for substantial differences to have arisen under the pressure of natural selection. This is not true. The ancestors of East Asians, Europeans, West Africans and Australians were, until recently, almost completely isolated from one another for 40,000 years or longer, which is more than sufficient time for the forces of evolution to work
  • I am worried that well-meaning people who deny the possibility of substantial biological differences among human populations are digging themselves into an indefensible position, one that will not survive the onslaught of science.
  • I am also worried that whatever discoveries are made — and we truly have no idea yet what they will be — will be cited as “scientific proof” that racist prejudices and agendas have been correct all along, and that those well-meaning people will not understand the science well enough to push back against these claims.
  • This is why it is important, even urgent, that we develop a candid and scientifically up-to-date way of discussing any such differences
  • While most people will agree that finding a genetic explanation for an elevated rate of disease is important, they often draw the line there. Finding genetic influences on a propensity for disease is one thing, they argue, but looking for such influences on behavior and cognition is another
  • Is performance on an intelligence test or the number of years of school a person attends shaped by the way a person is brought up? Of course. But does it measure something having to do with some aspect of behavior or cognition? Almost certainly.
  • Recent genetic studies have demonstrated differences across populations not just in the genetic determinants of simple traits such as skin color, but also in more complex traits like bodily dimensions and susceptibility to diseases.
  • in Iceland, there has been measurable genetic selection against the genetic variations that predict more years of education in that population just within the last century.
  • consider what kinds of voices are filling the void that our silence is creating
  • Nicholas Wade, a longtime science journalist for The New York Times, rightly notes in his 2014 book, “A Troublesome Inheritance: Genes, Race and Human History,” that modern research is challenging our thinking about the nature of human population differences. But he goes on to make the unfounded and irresponsible claim that this research is suggesting that genetic factors explain traditional stereotypes.
  • 139 geneticists (including myself) pointed out in a letter to The New York Times about Mr. Wade’s book, there is no genetic evidence to back up any of the racist stereotypes he promotes.
  • Another high-profile example is James Watson, the scientist who in 1953 co-discovered the structure of DNA, and who was forced to retire as head of the Cold Spring Harbor Laboratories in 2007 after he stated in an interview — without any scientific evidence — that research has suggested that genetic factors contribute to lower intelligence in Africans than in Europeans.
  • What makes Dr. Watson’s and Mr. Wade’s statements so insidious is that they start with the accurate observation that many academics are implausibly denying the possibility of average genetic differences among human populations, and then end with a claim — backed by no evidence — that they know what those differences are and that they correspond to racist stereotypes
  • They use the reluctance of the academic community to openly discuss these fraught issues to provide rhetorical cover for hateful ideas and old racist canards.
  • This is why knowledgeable scientists must speak out. If we abstain from laying out a rational framework for discussing differences among populations, we risk losing the trust of the public and we actively contribute to the distrust of expertise that is now so prevalent.
  • If scientists can be confident of anything, it is that whatever we currently believe about the genetic nature of differences among populations is most likely wrong.
  • For example, my laboratory discovered in 2016, based on our sequencing of ancient human genomes, that “whites” are not derived from a population that existed from time immemorial, as some people believe. Instead, “whites” represent a mixture of four ancient populations that lived 10,000 years ago and were each as different from one another as Europeans and East Asians are today.
  • For me, a natural response to the challenge is to learn from the example of the biological differences that exist between males and females
  • The differences between the sexes are far more profound than those that exist among human populations, reflecting more than 100 million years of evolution and adaptation. Males and females differ by huge tracts of genetic material
  • How do we accommodate the biological differences between men and women? I think the answer is obvious: We should both recognize that genetic differences between males and females exist and we should accord each sex the same freedoms and opportunities regardless of those differences
  • fulfilling these aspirations in practice is a challenge. Yet conceptually it is straightforward.
  • Compared with the enormous differences that exist among individuals, differences among populations are on average many times smaller, so it should be only a modest challenge to accommodate a reality in which the average genetic contributions to human traits differ.
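Lewontin's "85 percent within / 15 percent between" figure quoted near the top of this entry is a variance decomposition. A toy single-locus version of that arithmetic, with invented allele frequencies for three hypothetical populations (Lewontin's own estimate pooled many protein loci), shows why within-group variation dominates:

```python
# Toy within- vs. between-population variance partition for one locus.
# Allele frequencies are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

freqs = [0.45, 0.50, 0.60]  # allele frequency in each hypothetical population
pops = [rng.binomial(2, p, size=10_000) for p in freqs]  # diploid genotypes

total_var = np.concatenate(pops).var()
within_var = np.mean([p.var() for p in pops])  # average variance inside groups
between_var = total_var - within_var           # variance of the group means

print(f"within-population share:  {within_var / total_var:.0%}")   # ~97%
print(f"between-population share: {between_var / total_var:.0%}")  # ~3%
```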
pier-paolo

Modern Science Didn't Appear Until the 17th Century. What Took So Long? - The New York ...

  • While modern science is built on the primacy of empirical data — appealing to the objectivity of facts — actual progress requires determined partisans to move it along.
  • “Why wasn’t it the ancient Babylonians putting zero-gravity observatories into orbit around the earth,” Strevens asks, “the ancient Greeks engineering flu vaccines and transplanting hearts?”
  • transforming ordinary thinking humans into modern scientists entails “a morally and intellectually violent process.”
  • So much scientific research takes place under conditions of “intellectual confinement” — painstaking, often tedious work that requires attention to minute details, accounting for fractions of an inch and slivers of a degree.
  • This kind of obsessiveness has made modern science enormously productive, but Strevens says there is something fundamentally irrational and even “inhuman” about it.
  • He points out that focusing so narrowly, for so long, on tedious work that may not come to anything is inherently unappealing for most people. Rich and learned cultures across the world pursued all kinds of erudition and scholarly traditions, but didn’t develop this “knowledge machine”
  • The same goes for brilliant, intellectually curious individuals like Aristotle, who generated his own theory about physics but never proposed anything like the scientific method.
  • but in order to communicate with one another, in scientific journals, they have to abide by this rule. The motto of England’s Royal Society, founded in 1660, is “Nullius in verba”: “Take nobody’s word for it.”
  • purged of all nonscientific curiosity by a “program of moralizing and miseducation.” The great scientists were exceptions because they escaped the “deadening effects” of this inculcation; the rest are just “the standard product of this system”: “an empiricist all the way down.”
lucieperloff

Hours After Pigs' Death, Scientists Restore Brain Cell Activity | Live Science

  • In a radical experiment that has some experts questioning what it means to be "alive," scientists have restored brain circulation and some cell activity in pigs' brains hours after the animals died
  • that in some cases, the cell death processes can be postponed or even reversed, Sestan said.
  • Still, the researchers stressed that they did not observe any kind of activity in the pigs' brains that would be needed for normal brain function or things like awareness or consciousness.
  • During this time, the BrainEx system not only preserved brain cell structure and reduced cell death, but also restored some cellular activity.
  • For example, although scientists are a long way from being able to restore brain function in people with severe brain injuries, if some restoration of brain activity is possible, "then we would have to change our definition of brain death," Singhal told Live Science.
    • lucieperloff
       
      Could hypothetically change the medical community forever
  • The work also could stimulate research on ways to promote brain recovery after loss of blood flow to the brain, such as during a heart attack.
Javier E

How will humanity endure the climate crisis? I asked an acclaimed sci-fi writer | Danie...

  • To really grasp the present, we need to imagine the future – then look back from it to better see the now. The angry climate kids do this naturally. The rest of us need to read good science fiction. A great place to start is Kim Stanley Robinson.
  • read 11 of his books, culminating in his instant classic The Ministry for the Future, which imagines several decades of climate politics starting this decade.
  • The first lesson of his books is obvious: climate is the story.
  • What Ministry and other Robinson books do is make us slow down the apocalyptic highlight reel, letting the story play in human time for years, decades, centuries.
  • he wants leftists to set aside their differences, and put a “time stamp on [their] political view” that recognizes how urgent things are. Looking back from 2050 leaves little room for abstract idealism. Progressives need to form “a united front,” he told me. “It’s an all-hands-on-deck situation; species are going extinct and biomes are dying. The catastrophes are here and now, so we need to make political coalitions.”
  • he does want leftists – and everyone else – to take the climate emergency more seriously. He thinks every big decision, every technological option, every political opportunity, warrants climate-oriented scientific scrutiny. Global justice demands nothing less.
  • He wants to legitimize geoengineering, even in forms as radical as blasting limestone dust into the atmosphere for a few years to temporarily dim the heat of the sun
  • Robinson believes that once progressives internalize the insight that the economy is a social construct just like anything else, they can determine – based on the contemporary balance of political forces, ecological needs, and available tools – the most efficient methods for bringing carbon and capital into closer alignment.
  • We live in a world where capitalist states and giant companies largely control science.
  • Yes, we need to consider technologies with an open mind. That includes a frank assessment of how the interests of the powerful will shape how technologies develop
  • Robinson’s imagined future suggests a short-term solution that fits his dreams of a democratic, scientific politics: planning, of both the economy and planet.
  • it’s borrowed from Robinson’s reading of ecological economics. That field’s premise is that the economy is embedded in nature – that its fundamental rules aren’t supply and demand, but the laws of physics, chemistry, biology.
  • The upshot of Robinson’s science fiction is understanding that grand ecologies and human economies are always interdependent.
  • Robinson seems to be urging all of us to treat every possible technological intervention – from expanding nuclear energy, to pumping meltwater out from under glaciers, to dumping iron filings in the ocean – from a strictly scientific perspective: reject dogma, evaluate the evidence, ignore the profit motive.
  • Robinson’s elegant solution, as rendered in Ministry, is carbon quantitative easing. The idea is that central banks invent a new currency; to earn the carbon coins, institutions must show that they’re sucking excess carbon down from the sky. In his novel, this happens thanks to a series of meetings between United Nations technocrats and central bankers. But the technocrats only win the arguments because there’s enough rage, protest and organizing in the streets to force the bankers’ hand.
  • Seen from Mars, then, the problem of 21st-century climate economics is to sync public and private systems of capital with the ecological system of carbon.
  • Success will snowball; we’ll democratically plan more and more of the eco-economy.
  • Robinson thus gets that climate politics are fundamentally the politics of investment – extremely big investments. As he put it to me, carbon quantitative easing isn’t the “silver bullet solution,” just one of several green investment mechanisms we need to experiment with.
  • Robinson shares the great anarchist dream. “Everybody on the planet has an equal amount of power, and comfort, and wealth,” he said. “It’s an obvious goal” but there’s no shortcut.
  • In his political economy, like his imagined settling of Mars, Robinson tries to think like a bench scientist – an experimentalist, wary of unifying theories, eager for many groups to try many things.
  • there’s something liberating about Robinson’s commitment to the scientific method: reasonable people can shed their prejudices, consider all the options and act strategically.
  • The years ahead will be brutal. In Ministry, tens of millions of people die in disasters – and that’s in a scenario that Robinson portrays as relatively optimistic
  • when things get that bad, people take up arms. In Ministry’s imagined future, the rise of weaponized drones allows shadowy environmentalists to attack and kill fossil capitalists. Many – including myself – have used the phrase “eco-terrorism” to describe that violence. Robinson pushed back when we talked. “What if you call that resistance to capitalism realism?” he asked. “What if you call that, well, ‘Freedom fighters’?”
  • Robinson insists that he doesn’t condone the violence depicted in his book; he simply can’t imagine a realistic account of 21st century climate politics in which it doesn’t occur.
  • Malm writes that it’s shocking how little political violence there has been around climate change so far, given how brutally the harms will be felt in communities of color, especially in the global south, who bear no responsibility for the cataclysm, and where political violence has been historically effective in anticolonial struggles.
  • In Ministry, there’s a lot of violence, but mostly off-stage. We see enough to appreciate Robinson’s consistent vision of most people as basically thoughtful: the armed struggle is vicious, but its leaders are reasonable, strategic.
  • the implications are straightforward: there will be escalating violence, escalating state repression and increasing political instability. We must plan for that too.
  • maybe that’s the tension that is Ministry’s greatest lesson for climate politics today. No document that could win consensus at a UN climate summit will be anywhere near enough to prevent catastrophic warming. We can only keep up with history, and clearly see what needs to be done, by tearing our minds out of the present and imagining more radical future vantage points
  • If millions of people around the world can do that, in an increasingly violent era of climate disasters, those people could generate enough good projects to add up to something like a rational plan – and buy us enough time to stabilize the climate, while wresting power from the 1%.
  • Robinson’s optimistic view is that human nature is fundamentally thoughtful, and that it will save us – that the social process of arguing and politicking, with minds as open as we can manage, is a project older than capitalism, and one that will eventually outlive it
  • It’s a perspective worth thinking about – so long as we’re also organizing.
  • Daniel Aldana Cohen is assistant professor of sociology at the University of California, Berkeley, where he directs the Socio-Spatial Climate Collaborative. He is the co-author of A Planet to Win: Why We Need a Green New Deal
Javier E

Do Scientists Regret Not Sticking to the Science? - WSJ

  • In a preregistered large-sample controlled experiment, I randomly assigned participants to receive information about the endorsement of Joe Biden by the scientific journal Nature during the COVID-19 pandemic. The endorsement message caused large reductions in stated trust in Nature among Trump supporters. This distrust lowered the demand for COVID-related information provided by Nature, as evidenced by substantially reduced requests for Nature articles on vaccine efficacy when offered. The endorsement also reduced Trump supporters’ trust in scientists in general. The estimated effects on Biden supporters’ trust in Nature and scientists were positive, small and mostly statistically insignificant. I found little evidence that the endorsement changed views about Biden and Trump. [A difference-in-means sketch follows this list.]
  • These results suggest that political endorsement by scientific journals can undermine and polarize public confidence in the endorsing journals and the scientific community.
  • ... scientists don’t have any special expertise on questions of values and policy. “Sticking to the science” keeps scientists speaking on issues precisely where they ought to be trusted by the public.
  • In the summer of 2020, “public-health experts” decided that racism is a public-health crisis comparable to the coronavirus pandemic. It was therefore, they claimed, within their purview to express public support for the Black Lives Matter protests following the murder of George Floyd and to argue that the benefits of such protests outweighed the increased risk of spreading the disease. Those supposed experts actually knew nothing about the likely effects of the protests. They made no concrete predictions about whether they would in any way ameliorate racism in America, just as Nature can make no concrete predictions about whether its political endorsements will actually help a preferred candidate without jeopardizing its other important goals. The political action was expressive, not evidence-based...
  • as is often the case, a debate which appears to be about the neutrality of institutions is not really about neutrality at all... Rather, it is about whether there is any room left for soberly weighing our goals and values and thinking in a measured way about the consequences of our actions rather than simply reacting to situations in an impulsive and expressive manner, broadcasting our views to the world so that people know where we stand.
  • Our goals and values might not be “neutral” at all, but they might still be best served by procedures, institutions, and even individuals that follow neutral principles.
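The abstract above describes a standard randomized information experiment: the treatment is the endorsement message, and the headline numbers reduce to differences in mean stated trust between randomly assigned groups, split by prior candidate support. A minimal sketch of that estimator, using simulated data with invented effect sizes rather than the paper's actual dataset or code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data, for illustration only: 1 = shown the Nature
# endorsement, 0 = control; trust is stated trust on a 0-10 scale.
n = 1000
treated = rng.integers(0, 2, size=n)             # random assignment
trump_supporter = rng.integers(0, 2, size=n)     # prior-support subgroup
# Simulate the reported pattern: a large negative effect among Trump
# supporters, a small positive one among Biden supporters.
trust = (6.0
         + treated * np.where(trump_supporter == 1, -1.5, 0.2)
         + rng.normal(0, 2, size=n))

def diff_in_means(y, t):
    """Randomization makes mean(treated) - mean(control) an unbiased
    estimate of the average treatment effect."""
    return y[t == 1].mean() - y[t == 0].mean()

for label, mask in [("Trump supporters", trump_supporter == 1),
                    ("Biden supporters", trump_supporter == 0)]:
    effect = diff_in_means(trust[mask], treated[mask])
    print(f"{label}: estimated effect on stated trust = {effect:+.2f}")
```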
Javier E

At the Existentialist Café: Freedom, Being, and Apricot Cocktails with Jean-P... - 0 views

  • The phenomenologists’ leading thinker, Edmund Husserl, provided a rallying cry, ‘To the things themselves!’ It meant: don’t waste time on the interpretations that accrue upon things, and especially don’t waste time wondering whether the things are real. Just look at this that’s presenting itself to you, whatever this may be, and describe it as precisely as possible.
  • You might think you have defined me by some label, but you are wrong, for I am always a work in progress. I create myself constantly through action, and this is so fundamental to my human condition that, for Sartre, it is the human condition, from the moment of first consciousness to the moment when death wipes it out. I am my own freedom: no more, no less.
  • Sartre wrote like a novelist — not surprisingly, since he was one. In his novels, short stories and plays as well as in his philosophical treatises, he wrote about the physical sensations of the world and the structures and moods of human life. Above all, he wrote about one big subject: what it meant to be free. Freedom, for him, lay at the heart of all human experience, and this set humans apart from all other kinds of object.
  • Sartre listened to his problem and said simply, ‘You are free, therefore choose — that is to say, invent.’ No signs are vouchsafed in this world, he said. None of the old authorities can relieve you of the burden of freedom. You can weigh up moral or practical considerations as carefully as you like, but ultimately you must take the plunge and do something, and it’s up to you what that something is.
  • Even if the situation is unbearable — perhaps you are facing execution, or sitting in a Gestapo prison, or about to fall off a cliff — you are still free to decide what to make of it in mind and deed. Starting from where you are now, you choose. And in choosing, you also choose who you will be.
  • The war had made people realise that they and their fellow humans were capable of departing entirely from civilised norms; no wonder the idea of a fixed human nature seemed questionable.
  • If this sounds difficult and unnerving, it’s because it is. Sartre does not deny that the need to keep making decisions brings constant anxiety. He heightens this anxiety by pointing out that what you do really matters. You should make your choices as though you were choosing on behalf of the whole of humanity, taking the entire burden of responsibility for how the human race behaves. If you avoid this responsibility by fooling yourself that you are the victim of circumstance or of someone else’s bad advice, you are failing to meet the demands of human life and choosing a fake existence, cut off from your own ‘authenticity’.
  • Along with the terrifying side of this comes a great promise: Sartre’s existentialism implies that it is possible to be authentic and free, as long as you keep up the effort.
  • almost all agreed that it was, as an article in Les nouvelles littéraires phrased it, a ‘sickening mixture of philosophic pretentiousness, equivocal dreams, physiological technicalities, morbid tastes and hesitant eroticism … an introspective embryo that one would take distinct pleasure in crushing’.
  • he offered a philosophy designed for a species that had just scared the hell out of itself, but that finally felt ready to grow up and take responsibility.
  • In this rebellious world, just as with the Parisian bohemians and Dadaists in earlier generations, everything that was dangerous and provocative was good, and everything that was nice or bourgeois was bad.
  • Such interweaving of ideas and life had a long pedigree, although the existentialists gave it a new twist. Stoic and Epicurean thinkers in the classical world had practised philosophy as a means of living well, rather than of seeking knowledge or wisdom for their own sake. By reflecting on life’s vagaries in philosophical ways, they believed they could become more resilient, more able to rise above circumstances, and better equipped to manage grief, fear, anger, disappointment or anxiety.
  • In the tradition they passed on, philosophy is neither a pure intellectual pursuit nor a collection of cheap self-help tricks, but a discipline for flourishing and living a fully human, responsible life.
  • For Kierkegaard, Descartes had things back to front. In his own view, human existence comes first: it is the starting point for everything we do, not the result of a logical deduction. My existence is active: I live it and choose it, and this precedes any statement I can make about myself.
  • Studying our own moral genealogy cannot help us to escape or transcend ourselves. But it can enable us to see our illusions more clearly and lead a more vital, assertive existence.
  • What was needed, he felt, was not high moral or theological ideals, but a deeply critical form of cultural history or ‘genealogy’ that would uncover the reasons why we humans are as we are, and how we came to be that way. For him, all philosophy could even be redefined as a form of psychology, or history.
  • For those oppressed on grounds of race or class, or for those fighting against colonialism, existentialism offered a change of perspective — literally, as Sartre proposed that all situations be judged according to how they appeared in the eyes of those most oppressed, or those whose suffering was greatest.
  • She observed that we need not expect moral philosophers to ‘live by’ their ideas in a simplistic way, as if they were following a set of rules. But we can expect them to show how their ideas are lived in. We should be able to look in through the windows of a philosophy, as it were, and see how people occupy it, how they move about and how they conduct themselves.
  • the existentialists inhabited their historical and personal world, as they inhabited their ideas. This notion of ‘inhabited philosophy’ is one I’ve borrowed from the English philosopher and novelist Iris Murdoch, who wrote the first full-length book on Sartre and was an early adopter of existentialism
  • What is existentialism anyway?
  • An existentialist who is also phenomenological provides no easy rules for dealing with this condition, but instead concentrates on describing lived experience as it presents itself. — By describing experience well, he or she hopes to understand this existence and awaken us to ways of living more authentic lives.
  • Existentialists concern themselves with individual, concrete human existence. — They consider human existence different from the kind of being other things have. Other entities are what they are, but as a human I am whatever I choose to make of myself at every moment. I am free — — and therefore I’m responsible for everything I do, a dizzying fact which causes — an anxiety inseparable from human existence itself.
  • On the other hand, I am only free within situations, which can include factors in my own biology and psychology as well as physical, historical and social variables of the world into which I have been thrown. — Despite the limitations, I always want more: I am passionately involved in personal projects of all kinds. — Human existence is thus ambiguous: at once boxed in by borders and yet transcendent and exhilarating. —
  • The first part of this is straightforward: a phenomenologist’s job is to describe. This is the activity that Husserl kept reminding his students to do. It meant stripping away distractions, habits, clichés of thought, presumptions and received ideas, in order to return our attention to what he called the ‘things themselves’. We must fix our beady gaze on them and capture them exactly as they appear, rather than as we think they are supposed to be.
  • Husserl therefore says that, to phenomenologically describe a cup of coffee, I should set aside both the abstract suppositions and any intrusive emotional associations. Then I can concentrate on the dark, fragrant, rich phenomenon in front of me now. This ‘setting aside’ or ‘bracketing out’ of speculative add-ons Husserl called epoché — a term borrowed from the ancient Sceptics,
  • The point about rigour is crucial; it brings us back to the first half of the command to describe phenomena. A phenomenologist cannot get away with listening to a piece of music and saying, ‘How lovely!’ He or she must ask: is it plaintive? is it dignified? is it colossal and sublime? The point is to keep coming back to the ‘things themselves’ — phenomena stripped of their conceptual baggage — so as to bail out weak or extraneous material and get to the heart of the experience.
  • Husserlian ‘bracketing out’ or epoché allows the phenomenologist to temporarily ignore the question ‘But is it real?’, in order to ask how a person experiences his or her world. Phenomenology gives a formal mode of access to human experience. It lets philosophers talk about life more or less as non-philosophers do, while still being able to tell themselves they are being methodical and rigorous.
  • Besides claiming to transform the way we think about reality, phenomenologists promised to change how we think about ourselves. They believed that we should not try to find out what the human mind is, as if it were some kind of substance. Instead, we should consider what it does, and how it grasps its experiences.
  • For Brentano, this reaching towards objects is what our minds do all the time. Our thoughts are invariably of or about something, he wrote: in love, something is loved, in hatred, something is hated, in judgement, something is affirmed or denied. Even when I imagine an object that isn’t there, my mental structure is still one of ‘about-ness’ or ‘of-ness’.
  • Except in deepest sleep, my mind is always engaged in this aboutness: it has ‘intentionality’. Having taken the germ of this from Brentano, Husserl made it central to his whole philosophy.
  • Husserl saw in the idea of intentionality a way to sidestep two great unsolved puzzles of philosophical history: the question of what objects ‘really’ are, and the question of what the mind ‘really’ is. By doing the epoché and bracketing out all consideration of reality from both topics, one is freed to concentrate on the relationship in the middle. One can apply one’s descriptive energies to the endless dance of intentionality that takes place in our lives: the whirl of our minds as they seize their intended phenomena one after the other and whisk them around the floor,
  • Understood in this way, the mind hardly is anything at all: it is its aboutness. This makes the human mind (and possibly some animal minds) different from any other naturally occurring entity. Nothing else can be as thoroughly about or of things as the mind is:
  • Some Eastern meditation techniques aim to still this scurrying creature, but the extreme difficulty of this shows how unnatural it is to be mentally inert. Left to itself, the mind reaches out in all directions as long as it is awake — and even carries on doing it in the dreaming phase of its sleep.
  • a mind that is experiencing nothing, imagining nothing, or speculating about nothing can hardly be said to be a mind at all.
  • Three simple ideas — description, phenomenon, intentionality — provided enough inspiration to keep roomfuls of Husserlian assistants busy in Freiburg for decades. With all of human existence awaiting their attention, how could they ever run out of things to do?
  • For Sartre, this gives the mind an immense freedom. If we are nothing but what we think about, then no predefined ‘inner nature’ can hold us back. We are protean.
  • Real, not real; inside, outside; what difference did it make? Reflecting on this, Husserl began turning his phenomenology into a branch of ‘idealism’ — the philosophical tradition which denied external reality and defined everything as a kind of private hallucination.
  • For Sartre, if we try to shut ourselves up inside our own minds, ‘in a nice warm room with the shutters closed’, we cease to exist. We have no cosy home: being out on the dusty road is the very definition of what we are.
  • One might think that, if Heidegger had anything worth saying, he could have communicated it in ordinary language. The fact is that he does not want to be ordinary, and he may not even want to communicate in the usual sense. He wants to make the familiar obscure, and to vex us. George Steiner thought that Heidegger’s purpose was less to be understood than to be experienced through a ‘felt strangeness’.
  • He takes Dasein in its most ordinary moments, then talks about it in the most innovative way he can. For Heidegger, Dasein’s everyday Being is right here: it is Being-in-the-world, or In-der-Welt-sein. The main feature of Dasein’s everyday Being-in-the-world right here is that it is usually busy doing something.
  • Thus, for Heidegger, all Being-in-the-world is also a ‘Being-with’ or Mitsein. We cohabit with others in a ‘with-world’, or Mitwelt. The old philosophical problem of how we prove the existence of other minds has now vanished. Dasein swims in the with-world long before it wonders about other minds.
  • Sometimes the best-educated people were those least inclined to take the Nazis seriously, dismissing them as too absurd to last. Karl Jaspers was one of those who made this mistake, as he later recalled, and Beauvoir observed similar dismissive attitudes among the French students in Berlin.
  • In any case, most of those who disagreed with Hitler’s ideology soon learned to keep their view to themselves. If a Nazi parade passed on the street, they would either slip out of view or give the obligatory salute like everyone else, telling themselves that the gesture meant nothing if they did not believe in it. As the psychologist Bruno Bettelheim later wrote of this period, few people will risk their life for such a small thing as raising an arm — yet that is how one’s powers of resistance are eroded away, and eventually one’s responsibility and integrity go with them.
  • for Arendt, if you do not respond adequately when the times demand it, you show a lack of imagination and attention that is as dangerous as deliberately committing an abuse. It amounts to disobeying the one command she had absorbed from Heidegger in those Marburg days: Think!
  • ‘Everything takes place under a kind of anaesthesia. Objectively dreadful events produce a thin, puny emotional response. Murders are committed like schoolboy pranks. Humiliation and moral decay are accepted like minor incidents.’ Haffner thought modernity itself was partly to blame: people had become yoked to their habits and to mass media, forgetting to stop and think, or to disrupt their routines long enough to question what was going on.
  • Heidegger’s former lover and student Hannah Arendt would argue, in her 1951 study The Origins of Totalitarianism, that totalitarian movements thrived at least partly because of this fragmentation in modern lives, which made people more vulnerable to being swept away by demagogues. Elsewhere, she coined the phrase ‘the banality of evil’ to describe the most extreme failures of personal moral awareness.
  • His communicative ideal fed into a whole theory of history: he traced all civilisation to an ‘Axial Period’ in the fifth century BC, during which philosophy and culture exploded simultaneously in Europe, the Middle East and Asia, as though a great bubble of minds had erupted from the earth’s surface. ‘True philosophy needs communion to come into existence,’ he wrote, and added, ‘Uncommunicativeness in a philosopher is virtually a criterion of the untruth of his thinking.’
  • The idea of being called to authenticity became a major theme in later existentialism, the call being interpreted as saying something like ‘Be yourself!’, as opposed to being phony. For Heidegger, the call is more fundamental than that. It is a call to take up a self that you didn’t know you had: to wake up to your Being. Moreover, it is a call to action. It requires you to do something: to take a decision of some sort.
  • Being and Time contained at least one big idea that should have been of use in resisting totalitarianism. Dasein, Heidegger wrote there, tends to fall under the sway of something called das Man or ‘the they’ — an impersonal entity that robs us of the freedom to think for ourselves. To live authentically requires resisting or outwitting this influence, but this is not easy because das Man is so nebulous. Man in German does not mean ‘man’ as in English (that’s der Mann), but a neutral abstraction, something like ‘one’ in the English phrase ‘one doesn’t do that’,
  • for Heidegger, das Man is me. It is everywhere and nowhere; it is nothing definite, but each of us is it. As with Being, it is so ubiquitous that it is difficult to see. If I am not careful, however, das Man takes over the important decisions that should be my own. It drains away my responsibility or ‘answerability’. As Arendt might put it, we slip into banality, failing to think.
  • Jaspers focused on what he called Grenzsituationen — border situations, or limit situations. These are the moments when one finds oneself constrained or boxed in by what is happening, but at the same time pushed by these events towards the limits or outer edge of normal experience. For example, you might have to make a life-or-death choice, or something might remind you suddenly of your mortality,
  • Jaspers’ interest in border situations probably had much to do with his own early confrontation with mortality. From childhood, he had suffered from a heart condition so severe that he always expected to die at any moment. He also had emphysema, which forced him to speak slowly, taking long pauses to catch his breath. Both illnesses meant that he had to budget his energies with care in order to get his work done without endangering his life.
  • If I am to resist das Man, I must become answerable to the call of my ‘voice of conscience’. This call does not come from God, as a traditional Christian definition of the voice of conscience might suppose. It comes from a truly existentialist source: my own authentic self. Alas, this voice is one I do not recognise and may not hear, because it is not the voice of my habitual ‘they-self’. It is an alien or uncanny version of my usual voice. I am familiar with my they-self, but not with my unalienated voice — so, in a weird twist, my real voice is the one that sounds strangest to me.
  • Marcel developed a strongly theological branch of existentialism. His faith distanced him from both Sartre and Heidegger, but he shared a sense of how history makes demands on individuals. In his essay ‘On the Ontological Mystery’, written in 1932 and published in the fateful year of 1933, Marcel wrote of the human tendency to become stuck in habits, received ideas, and a narrow-minded attachment to possessions and familiar scenes. Instead, he urged his readers to develop a capacity for remaining ‘available’ to situations as they arise. Similar ideas of disponibilité or availability had been explored by other writers,
  • Marcel made it his central existential imperative. He was aware of how rare and difficult it was. Most people fall into what he calls ‘crispation’: a tensed, encrusted shape in life — ‘as though each one of us secreted a kind of shell which gradually hardened and imprisoned him’.
  • Bettelheim later observed that, under Nazism, only a few people realised at once that life could not continue unaltered: these were the ones who got away quickly. Bettelheim himself was not among them. Caught in Austria when Hitler annexed it, he was sent first to Dachau and then to Buchenwald, but was then released in a mass amnesty to celebrate Hitler’s birthday in 1939 — an extraordinary reprieve, after which he left at once for America.
  • we are used to reading philosophy as offering a universal message for all times and places — or at least as aiming to do so. But Heidegger disliked the notion of universal truths or universal humanity, which he considered a fantasy. For him, Dasein is not defined by shared faculties of reason and understanding, as the Enlightenment philosophers thought. Still less is it defined by any kind of transcendent eternal soul, as in religious tradition. We do not exist on a higher, eternal plane at all. Dasein’s Being is local: it has a historical situation, and is constituted in time and place.
  • For Marcel, learning to stay open to reality in this way is the philosopher’s prime job. Everyone can do it, but the philosopher is the one who is called on above all to stay awake, so as to be the first to sound the alarm if something seems wrong.
  • Second, it also means understanding that we are historical beings, and grasping the demands our particular historical situation is making on us. In what Heidegger calls ‘anticipatory resoluteness’, Dasein discovers ‘that its uttermost possibility lies in giving itself up’. At that moment, through Being-towards-death and resoluteness in facing up to one’s time, one is freed from the they-self and attains one’s true, authentic self.
  • If we are temporal beings by our very nature, then authentic existence means accepting, first, that we are finite and mortal. We will die: this all-important realisation is what Heidegger calls authentic ‘Being-towards-Death’, and it is fundamental to his philosophy.
  • Hannah Arendt, instead, left early on: she had the benefit of a powerful warning. Just after the Nazi takeover, in spring 1933, she had been arrested while researching materials on anti-Semitism for the German Zionist Organisation at Berlin’s Prussian State Library. Her apartment was searched; both she and her mother were locked up briefly, then released. They fled, without stopping to arrange travel documents. They crossed to Czechoslovakia (then still safe) by a method that sounds almost too fabulous to be true: a sympathetic German family on the border had a house with its front door in Germany and its back door in Czechoslovakia. The family would invite people for dinner, then let them leave through the back door at night.
  • As Sartre argued in his 1943 review of The Stranger, basic phenomenological principles show that experience comes to us already charged with significance. A piano sonata is a melancholy evocation of longing. If I watch a soccer match, I see it as a soccer match, not as a meaningless scene in which a number of people run around taking turns to apply their lower limbs to a spherical object. If the latter is what I’m seeing, then I am not watching some more essential, truer version of soccer; I am failing to watch it properly as soccer at all.
  • Much as they liked Camus personally, neither Sartre nor Beauvoir accepted his vision of absurdity. For them, life is not absurd, even when viewed on a cosmic scale, and nothing can be gained by saying it is. Life for them is full of real meaning, although that meaning emerges differently for each of us.
  • For Sartre, we show bad faith whenever we portray ourselves as passive creations of our race, class, job, history, nation, family, heredity, childhood influences, events, or even hidden drives in our subconscious which we claim are out of our control. It is not that such factors are unimportant: class and race, in particular, he acknowledged as powerful forces in people’s lives, and Simone de Beauvoir would soon add gender to that list.
  • Sartre takes his argument to an extreme point by asserting that even war, imprisonment or the prospect of imminent death cannot take away my existential freedom. They form part of my ‘situation’, and this may be an extreme and intolerable situation, but it still provides only a context for whatever I choose to do next. If I am about to die, I can decide how to face that death. Sartre here resurrects the ancient Stoic idea that I may not choose what happens to me, but I can choose what to make of it, spiritually speaking.
  • But the Stoics cultivated indifference in the face of terrible events, whereas Sartre thought we should remain passionately, even furiously engaged with what happens to us and with what we can achieve. We should not expect freedom to be anything less than fiendishly difficult.
  • Freedom does not mean entirely unconstrained movement, and it certainly does not mean acting randomly. We often mistake the very things that enable us to be free — context, meaning, facticity, situation, a general direction in our lives — for things that define us and take away our freedom. It is only with all of these that we can be free in a real sense.
  • Nor did he mean that privileged groups have the right to pontificate to the poor and downtrodden about the need to ‘take responsibility’ for themselves. That would be a grotesque misreading of Sartre’s point, since his sympathy in any encounter always lay with the more oppressed side. But for each of us — for me — to be in good faith means not making excuses for myself.
  • Camus’ novel gives us a deliberately understated vision of heroism and decisive action compared to those of Sartre and Beauvoir. One can only do so much. It can look like defeatism, but it shows a more realistic perception of what it takes to actually accomplish difficult tasks like liberating one’s country.
  • Camus just kept returning to his core principle: no torture, no killing — at least not with state approval. Beauvoir and Sartre believed they were taking a more subtle and more realistic view. If asked why a couple of innocuous philosophers had suddenly become so harsh, they would have said it was because the war had changed them in profound ways. It had shown them that one’s duties to humanity could be more complicated than they seemed. ‘The war really divided my life in two,’ Sartre said later.
  • Poets and artists ‘let things be’, but they also let things come out and show themselves. They help to ease things into ‘unconcealment’ (Unverborgenheit), which is Heidegger’s rendition of the Greek term alētheia, usually translated as ‘truth’. This is a deeper kind of truth than the mere correspondence of a statement to reality, as when we say ‘The cat is on the mat’ and point to a mat with a cat on it. Long before we can do this, both cat and mat must ‘stand forth out of concealedness’. They must un-hide themselves.
  • Heidegger does not use the word ‘consciousness’ here because — as with his earlier work — he is trying to make us think in a radically different way about ourselves. We are not to think of the mind as an empty cavern, or as a container filled with representations of things. We are not even supposed to think of it as firing off arrows of intentional ‘aboutness’, as in the earlier phenomenology of Brentano. Instead, Heidegger draws us into the depths of his Schwarzwald, and asks us to imagine a gap with sunlight filtering in. We remain in the forest, but we provide a relatively open spot where other beings can bask for a moment. If we did not do this, everything would remain in the thickets, hidden even to itself.
  • The astronomer Carl Sagan began his 1980 television series Cosmos by saying that human beings, though made of the same stuff as the stars, are conscious and are therefore ‘a way for the cosmos to know itself’. Merleau-Ponty similarly quoted his favourite painter Cézanne as saying, ‘The landscape thinks itself in me, and I am its consciousness.’ This is something like what Heidegger thinks humanity contributes to the earth. We are not made of spiritual nothingness; we are part of Being, but we also bring something unique with us. It is not much: a little open space, perhaps with a path and a bench like the one the young Heidegger used to sit on to do his homework. But through us, the miracle occurs.
  • Beauty aside, Heidegger’s late writing can also be troubling, with its increasingly mystical notion of what it is to be human. If one speaks of a human being mainly as an open space or a clearing, or a means of ‘letting beings be’ and dwelling poetically on the earth, then one doesn’t seem to be talking about any recognisable person. The old Dasein has become less human than ever. It is now a forestry feature.
  • Even today, Jaspers, the dedicated communicator, is far less widely read than Heidegger, who has influenced architects, social theorists, critics, psychologists, artists, film-makers, environmental activists, and innumerable students and enthusiasts — including the later deconstructionist and post-structuralist schools, which took their starting point from his late thinking. Having spent the late 1940s as an outsider and then been rehabilitated, Heidegger became the overwhelming presence in university philosophy all over the European continent from then on.
  • As Levinas reflected on this experience, it helped to lead him to a philosophy that was essentially ethical, rather than ontological like Heidegger’s. He developed his ideas from the work of Jewish theologian Martin Buber, whose I and Thou in 1923 had distinguished between my relationship with an impersonal ‘it’ or ‘them’, and the direct personal encounter I have with a ‘you’. Levinas took it further: when I encounter you, we normally meet face-to-face, and it is through your face that you, as another person, can make ethical demands on me. This is very different from Heidegger’s Mitsein or Being-with, which suggests a group of people standing alongside one another, shoulder to shoulder as if in solidarity — perhaps as a unified nation or Volk.
  • For Levinas, we literally face each other, one individual at a time, and that relationship becomes one of communication and moral expectation. We do not merge; we respond to one another. Instead of being co-opted into playing some role in my personal drama of authenticity, you look me in the eyes — and you remain Other. You remain you.
  • This relationship is more fundamental than the self, more fundamental than consciousness, more fundamental even than Being — and it brings an unavoidable ethical obligation. Ever since Husserl, phenomenologists and existentialists had been trying to stretch the definition of existence to incorporate our social lives and relationships. Levinas did more: he turned philosophy around entirely so that these relationships were the foundation of our existence, not an extension of it.
  • Her last work, The Need for Roots, argues, among other things, that none of us has rights, but each one of us has a near-infinite degree of duty and obligation to the other. Whatever the underlying cause of her death — and anorexia nervosa seems to have been involved — no one could deny that she lived out her philosophy with total commitment. Of all the lives touched on in this book, hers is surely the most profound and challenging application of Iris Murdoch’s notion that a philosophy can be ‘inhabited’.
  • Other thinkers took radical ethical turns during the war years. The most extreme was Simone Weil, who actually tried to live by the principle of putting other people’s ethical demands first. Having returned to France after her travels through Germany in 1932, she had worked in a factory so as to experience the degrading nature of such work for herself. When France fell in 1940, her family fled to Marseilles (against her protests), and later to the US and to Britain. Even in exile, Weil made extraordinary sacrifices. If there were people in the world who could not sleep in a bed, she would not do so either, so she slept on the floor.
  • The mystery tradition had roots in Kierkegaard’s ‘leap of faith’. It owed much to the other great nineteenth-century mystic of the impossible, Dostoevsky, and to older theological notions. But it also grew from the protracted trauma that was the first half of the twentieth century. Since 1914, and especially since 1939, people in Europe and elsewhere had come to the realisation that we cannot fully know or trust ourselves; that we have no excuses or explanations for what we do — and yet that we must ground our existence and relationships on something firm, because otherwise we cannot survive.
  • One striking link between these radical ethical thinkers, all on the fringes of our main story, is that they had religious faith. They also granted a special role to the notion of ‘mystery’ — that which cannot be known, calculated or understood, especially when it concerns our relationships with each other. Heidegger was different from them, since he rejected the religion he grew up with and had no real interest in ethics — probably as a consequence of his having no real interest in the human.
  • Meanwhile, the Christian existentialist Gabriel Marcel was also still arguing, as he had since the 1930s, that ethics trumps everything else in philosophy and that our duty to each other is so great as to play the role of a transcendent ‘mystery’. He too had been led to this position partly by a wartime experience: during the First World War he had worked for the Red Cross’ Information Service, with the unenviable job of answering relatives’ inquiries about missing soldiers. Whenever news came, he passed it on, and usually it was not good. As Marcel later said, this task permanently inoculated him against warmongering rhetoric of any kind, and it made him aware of the power of what is unknown in our lives.
  • As the play’s much-quoted and frequently misunderstood final line has it: ‘Hell is other people.’ Sartre later explained that he did not mean to say that other people were hellish in general. He meant that after death we become frozen in their view, unable any longer to fend off their interpretation. In life, we can still do something to manage the impression we make; in death, this freedom goes and we are left entombed in other people’s memories and perceptions.
  • We have to do two near-impossible things at once: understand ourselves as limited by circumstances, and yet continue to pursue our projects as though we are truly in control. In Beauvoir’s view, existentialism is the philosophy that best enables us to do this, because it concerns itself so deeply with both freedom and contingency. It acknowledges the radical and terrifying scope of our freedom in life, but also the concrete influences that other philosophies tend to ignore: history, the body, social relationships and the environment.
  • The aspects of our existence that limit us, Merleau-Ponty says, are the very same ones that bind us to the world and give us scope for action and perception. They make us what we are. Sartre acknowledged the need for this trade-off, but he found it more painful to accept. Everything in him longed to be free of bonds, of impediments and limitations
  • Of course we have to learn this skill of interpreting and anticipating the world, and this happens in early childhood, which is why Merleau-Ponty thought child psychology was essential to philosophy. This is an extraordinary insight. Apart from Rousseau, very few philosophers before him had taken childhood seriously; most wrote as though all human experience were that of a fully conscious, rational, verbal adult who has been dropped into this world from the sky — perhaps by a stork.
  • For Merleau-Ponty, we cannot understand our experience if we don’t think of ourselves in part as overgrown babies. We fall for optical illusions because we once learned to see the world in terms of shapes, objects and things relevant to our own interests. Our first perceptions came to us in tandem with our first active experiments in observing the world and reaching out to explore it, and are still linked with those experiences.
  • Another factor in all of this, for Merleau-Ponty, is our social existence: we cannot thrive without others, or not for long, and we need this especially in early life. This makes solipsistic speculation about the reality of others ridiculous; we could never engage in such speculation if we hadn’t already been formed by them.
  • As Descartes could have said (but didn’t), ‘I think, therefore other people exist.’ We grow up with people playing with us, pointing things out, talking, listening, and getting us used to reading emotions and movements; this is how we become capable, reflective, smoothly integrated beings.
  • In general, Merleau-Ponty thinks human experience only makes sense if we abandon philosophy’s time-honoured habit of starting with a solitary, capsule-like, immobile adult self, isolated from its body and world, which must then be connected up again — adding each element around it as though adding clothing to a doll. Instead, for him, we slide from the womb to the birth canal to an equally close and total immersion in the world. That immersion continues as long as we live, although we may also cultivate the art of partially withdrawing from time to time when we want to think or daydream.
  • When he looks for his own metaphor to describe how he sees consciousness, he comes up with a beautiful one: consciousness, he suggests, is like a ‘fold’ in the world, as though someone had crumpled a piece of cloth to make a little nest or hollow. It stays for a while, before eventually being unfolded and smoothed away. There is something seductive, even erotic, in this idea of my conscious self as an improvised pouch in the cloth of the world. I still have my privacy — my withdrawing room. But I am part of the world’s fabric, and I remain formed out of it for as long as I am here.
  • By the time of these works, Merleau-Ponty is taking his desire to describe experience to the outer limits of what language can convey. Just as with the late Husserl or Heidegger, or Sartre in his Flaubert book, we see a philosopher venturing so far from shore that we can barely follow. Emmanuel Levinas would head out to the fringes too, eventually becoming incomprehensible to all but his most patient initiates.
  • Sartre once remarked — speaking of a disagreement they had about Husserl in 1941 — that ‘we discovered, astounded, that our conflicts had, at times, stemmed from our childhood, or went back to the elementary differences of our two organisms’. Merleau-Ponty also said in an interview that Sartre’s work seemed strange to him, not because of philosophical differences, but because of a certain ‘register of feeling’, especially in Nausea, that he could not share. Their difference was one of temperament and of the whole way the world presented itself to them.
  • The two also differed in their purpose. When Sartre writes about the body or other aspects of experience, he generally does it in order to make a different point. He expertly evokes the grace of his café waiter, gliding between the tables, bending at an angle just so, steering the drink-laden tray through the air on the tips of his fingers — but he does it all in order to illustrate his ideas about bad faith. When Merleau-Ponty writes about skilled and graceful movement, the movement itself is his point. This is the thing he wants to understand.
  • We can never move definitively from ignorance to certainty, for the thread of the inquiry will constantly lead us back to ignorance again. This is the most attractive description of philosophy I’ve ever read, and the best argument for why it is worth doing, even (or especially) when it takes us no distance at all from our starting point.
  • By prioritising perception, the body, social life and childhood development, Merleau-Ponty gathered up philosophy’s far-flung outsider subjects and brought them in to occupy the centre of his thought.
  • In his inaugural lecture at the Collège de France on 15 January 1953, published as In Praise of Philosophy, he said that philosophers should concern themselves above all with whatever is ambiguous in our experience. At the same time, they should think clearly about these ambiguities, using reason and science. Thus, he said, ‘The philosopher is marked by the distinguishing trait that he possesses inseparably the taste for evidence and the feeling for ambiguity.’ A constant movement is required between these two
  • As Sartre wrote in response to Hiroshima, humanity had now gained the power to wipe itself out, and must decide every single day that it wanted to live. Camus also wrote that humanity faced the task of choosing between collective suicide and a more intelligent use of its technology — ‘between hell and reason’. After 1945, there seemed little reason to trust in humanity’s ability to choose well.
  • Merleau-Ponty observed in a lecture of 1951 that, more than any previous century, the twentieth century had reminded people how ‘contingent’ their lives were — how at the mercy of historical events and other changes that they could not control. This feeling went on long after the war ended. After the A-bombs were dropped on Hiroshima and Nagasaki, many feared that a Third World War would not be long in coming, this time between the Soviet Union and the United States.
Javier E

The advantage of ambiguity | MIT News - 1 views

  • Why did language evolve? While the answer might seem obvious — as a way for individuals to exchange information — linguists and other students of communication have debated this question for years. Many prominent linguists, including MIT’s Noam Chomsky, have argued that language is, in fact, poorly designed for communication. Such a use, they say, is merely a byproduct of a system that probably evolved for other reasons — perhaps for structuring our own private thoughts.
  • In a new theory, they claim that ambiguity actually makes language more efficient, by allowing for the reuse of short, efficient sounds that listeners can easily disambiguate with the help of context.
  • “Various people have said that ambiguity is a problem for communication,” says Ted Gibson, an MIT professor of cognitive science and senior author of a paper describing the research to appear in the journal Cognition. “But the fact that context disambiguates has important ramifications for the re-use of potentially ambiguous forms. Ambiguity is no longer a problem — it’s something that you can take advantage of, because you can reuse easy [words] in different contexts over and over again.”
  • virtually no speaker of English gets confused when he or she hears the word “mean.” That’s because the different senses of the word occur in such different contexts as to allow listeners to infer its meaning nearly automatically.
  • To understand why ambiguity makes a language more efficient rather than less so, think about the competing desires of the speaker and the listener. The speaker is interested in conveying as much as possible with the fewest possible words, while the listener is aiming to get a complete and specific understanding of what the speaker is trying to say.
  • it is “cognitively cheaper” to have the listener infer certain things from the context than to have the speaker spend time on longer and more complicated utterances. The result is a system that skews toward ambiguity, reusing the “easiest” words. Once context is considered, it’s clear that “ambiguity is actually something you would want in the communication system,” Piantadosi says. [A toy cost model follows this list.]
  • “You would expect that since languages are constantly changing, they would evolve to get rid of ambiguity,” Wasow says. “But if you look at natural languages, they are massively ambiguous: Words have multiple meanings, there are multiple ways to parse strings of words. … This paper presents a really rigorous argument as to why that kind of ambiguity is actually functional for communicative purposes, rather than dysfunctional.”
  • “Ambiguity is only good for us [as humans] because we have these really sophisticated cognitive mechanisms for disambiguating,” he says. “It’s really difficult to work out the details of what those are, or even some sort of approximation that you could get a computer to use.”
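The trade-off described above can be made concrete with a toy cost model; the sketch below illustrates the general idea and is not the Cognition paper's actual model. Word lengths and usage frequencies are invented, and context is assumed to resolve the intended sense at no cost to the listener:

```python
# Senses of the English word "mean"; lengths are counted in phonemes.
meanings = ["average", "unkind", "intend"]

# Option 1: reuse one short word for all three senses (ambiguous).
ambiguous_lexicon = {m: 3 for m in meanings}          # "mean" ~ 3 phonemes

# Option 2: hypothetical distinct, longer word for each sense.
unambiguous_lexicon = {"average": 7, "unkind": 6, "intend": 6}

# Assumed relative frequencies with which each sense is needed.
usage = {"average": 0.2, "unkind": 0.3, "intend": 0.5}

def expected_cost(lexicon):
    """Expected speaker effort per utterance: sum of length * frequency."""
    return sum(usage[m] * lexicon[m] for m in lexicon)

print("ambiguous lexicon:  ", expected_cost(ambiguous_lexicon))    # 3.0
print("unambiguous lexicon:", expected_cost(unambiguous_lexicon))  # 6.2
# If context disambiguates for free, the reused short word is strictly
# cheaper: the "cognitively cheaper" margin Piantadosi describes.
```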
krystalxu

Basics of Communication | Psychology Today - 0 views

  • The most common mistake we can make as message senders is coding our thought, feeling or need in a way that has a low chance of being understood by the receiver.
  • if you don't know certain words or the message is too complex, then there is a low chance of really understanding it.
  • no single person is 100% at fault for any communication problem.
  • Be Aware of your own communication errors. We are all susceptible to sending confusing messages and to missing the boat in terms of what someone else was trying to tell us
  • Check in with the sender when you are decoding messages to make sure you have the right understanding.
  • "Wow, I'm getting huuuuuungry." Person B sees and hears this, and interprets it to mean that Person A is hungry. Simple right?
Javier E

Living Another Day, Thanks to Grandparents Who Couldn't Sleep - The New York Times - 1 views

  • A new study, published Tuesday in Proceedings of the Royal Society B, suggests that the way sleep patterns change with age may be an evolutionary adaptation that helped our ancestors survive the night by ensuring one person in a community was awake at all times. The researchers called this phenomenon the “poorly sleeping grandparent hypothesis,” suggesting that an older member of a community who woke before dawn might have been crucial to spotting the threat of a hungry predator while younger people were still asleep. It may explain why people slept in mixed-age groups through much of human history.
  • The Hadza sleeping environment may have similarities to that of earlier humans, researchers said. They sleep outdoors or in grass huts in groups of 20 to 30 people without artificially regulating temperature or light. These conditions provide a suitable window to study the evolutionary aspects of sleep.
  • Across more than 220 total hours of sleep observation, researchers found only 18 minutes when all adults were sound asleep simultaneously. Typically, older participants in their 50s and 60s went to bed earlier and woke up earlier than those in their 20s and 30s. On average, more than a third of the group was alert, or lightly dozing, at any given time. [A toy simulation follows this list.]
  • “We have a propensity to overcategorize things as disorders in the West,” said David Samson, an author of the study and an assistant professor of anthropology at the University of Toronto. “It might help elderly individuals to know changes they’re experiencing have an evolutionary reason.”
  • “The variation may be partially explained by genetics,” she said, “but there are environmental conditions too.” As people age, their social needs and level of activity change, potentially affecting their sleep patterns.
  • there is evidence of a genetic link, she added, pointing out that sleep quality declined among the older Hadza even while they remained active hunters and gatherers.
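The sentinel arithmetic is easy to simulate. The toy model below uses invented bedtime offsets, durations, and waking rates rather than the study's actigraphy data; it staggers sleep windows by age, fragments them with brief awakenings, and counts how rarely the whole group is asleep at once:

```python
import numpy as np

rng = np.random.default_rng(1)

# 25 adults observed over a 600-minute night (all parameters invented).
n_adults, night = 25, 600
ages = rng.integers(20, 70, size=n_adults)
minutes = np.arange(night)

asleep = np.zeros((n_adults, night), dtype=bool)
for i, age in enumerate(ages):
    start = rng.normal(120 - (age - 45), 30)         # older: earlier to bed
    duration = rng.normal(420 - 2 * (age - 45), 40)  # older: shorter sleep
    in_window = (minutes >= start) & (minutes < start + duration)
    # Sleep is fragmented: assume a 10% chance of being awake or lightly
    # dozing in any given minute of the sleep window.
    asleep[i] = in_window & (rng.random(night) > 0.10)

all_asleep_minutes = int(asleep.all(axis=0).sum())
share_awake = 1 - asleep.mean()
print(f"Minutes when every adult was asleep: {all_asleep_minutes} of {night}")
print(f"Average share of the group awake or dozing: {share_awake:.0%}")
```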
anonymous

How the World's Oldest Wooden Sculpture Is Reshaping Prehistory - The New York Times - 0 views

  • At 12,500 years old, the Shigir Idol is by far the earliest known work of ritual art. Only decay has kept others from being found.
  • The world’s oldest known wooden sculpture — a nine-foot-tall totem pole thousands of years old — looms over a hushed chamber of an obscure Russian museum in the Ural Mountains, not far from the Siberian border
  • Dug out of a peat bog by gold miners in 1890, the relic, or what’s left of it, is carved from a great slab of freshly cut larch.
  • Scattered among the geometric patterns (zigzags, chevrons, herringbones) are eight human faces, each with slashes for eyes that peer not so benignly from the front and back planes.
  • “Whether it screams or shouts or sings, it projects authority, possibly malevolent authority. It’s not immediately a friend of yours, much less an ancient friend of yours.”
  • In archaeology, portable prehistoric sculpture is called “mobiliary art.”
  • The statue’s age was a matter of conjecture until 1997, when it was carbon-dated by Russian scientists to about 9,500 years old, an age that struck most scholars as fanciful. [The decay arithmetic behind such dates is sketched at the end of this entry.]
  • The statue was more than twice as old as the Egyptian pyramids and Stonehenge, as well as, by many millenniums, the first known work of ritual art.
  • A new study that Dr. Terberger wrote with some of the same colleagues in Quaternary International, further skews our understanding of prehistory by pushing back the original date of the Shigir Idol by another 900 years, placing it in the context of the early art in Eurasia.
  • (“During the period of rapid cooling from about 10,700 B.C. to 9,600 B.C. that we call the Younger Dryas, no beavers should have been around in the Transurals,” he said.)
  • Written with an eye toward disentangling Western science from colonialism, Dr. Terberger’s latest paper challenges the ethnocentric notion that pretty much everything, including symbolic expression and philosophical perceptions of the world, came to Europe by way of the sedentary farming communities in the Fertile Crescent 8,000 years ago.
  • “It’s similar to the ‘Neanderthals did not make art’ fable, which was entirely based on absence of evidence,
  • Likewise, the overwhelming scientific consensus used to hold that modern humans were superior in key ways, including their ability to innovate, communicate and adapt to different environments.
  • Nonsense, all of it.”
  • makes it clear that arguments about the wealth of mobiliary art in, say, the Upper Paleolithic of Germany or France by comparison to southern Europe, are largely nonsensical and an artifact of tundra (where there are no trees and you use ivory, which is archaeologically visible) versus open forest environments.
  • The Shigir Idol, named for the bog near Kirovgrad in which it was found, is presumed to have rested on a rock base for perhaps two or three decades before toppling into a long-gone paleo-lake, where the peat’s antimicrobial properties protected it like a time capsule.
  • “It was not a scientific construction,”
  • “The rings tell us that trees were growing very slowly, as the temperature was still quite cold,”
  • Dr. Terberger respectfully disagrees.
  • “The landscape changed, and the art — figurative designs and naturalistic animals painted in caves and carved in rock — did, too, perhaps as a way to help people come to grips with the challenging environments they encountered.”
  • And what do the engravings mean? Svetlana Savchenko, the artifact’s curator and an author on the study, speculates that the eight faces may well contain encrypted information about ancestor spirits, the boundary between earth and sky, or a creation myth.
  • The temple’s stones were carved around 11,000 years ago, which makes them 1,500 years younger than the Shigir Idol.
  • One could wonder how many similar pieces have been lost over time due to poor preservation conditions.”
  • The similarity of the geometric motifs to others across Europe in that era, he added, “is evidence of long-distance contacts and a shared sign language over vast areas. The sheer size of the idol also seems to indicate it was meant as a marker in the landscape that was supposed to be seen by other hunter-gatherer groups — perhaps marking the border of a territory, a warning or welcoming sign.”
  • “What do you think is the hardest thing to find in the Stone Age archaeology of the Urals?” A pause: Sites? “No,” he said, sighing softly. “Funding.”
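As a footnote on method: the radiocarbon dates quoted above come from exponential decay run backwards. A worked example with the standard Libby half-life and an illustrative surviving fraction of 14C (published dates, including the idol's, additionally involve calibration to calendar years, omitted here):

```latex
% Conventional radiocarbon age from the measured 14C fraction R/R0
% relative to the modern standard (calibration omitted):
\[
  t \;=\; -\frac{T_{1/2}}{\ln 2}\,\ln\frac{R}{R_0}
    \;=\; -8033\,\ln\frac{R}{R_0}\ \text{years},
  \qquad T_{1/2} = 5568\ \text{yr (Libby half-life)}.
\]
\[
  \text{Example: }\ \frac{R}{R_0}=0.21
  \;\Rightarrow\;
  t \approx -8033\,\ln(0.21) \approx 12{,}500\ \text{yr BP}.
\]
```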