
TOK Friends: Group items tagged “state of nature”


Javier E

How Tech Can Turn Doctors Into Clerical Workers - The New York Times - 0 views

  • what I see in my colleague is disillusionment, and it has come too early, and I am seeing too much of it.
  • In America today, the patient in the hospital bed is just the icon, a place holder for the real patient who is not in the bed but in the computer. That virtual entity gets all our attention. Old-fashioned “bedside” rounds conducted by the attending physician too often take place nowhere near the bed but have become “card flip” rounds
  • My young colleague slumping in the chair in my office survived the student years, then three years of internship and residency and is now a full-time practitioner and teacher. The despair I hear comes from being the highest-paid clerical worker in the hospital: For every one hour we spend cumulatively with patients, studies have shown, we spend nearly two hours on our primitive Electronic Health Records, or “E.H.R.s,” and another hour or two during sacred personal time.
  • The living, breathing source of the data and images we juggle, meanwhile, is in the bed and left wondering: Where is everyone? What are they doing? Hello! It’s my body, you know
  • Our $3.4 trillion health care system is responsible for more than a quarter of a million deaths per year because of medical error, the rough equivalent of, say, a jumbo jet’s crashing every day.
  • I can get cash and account details all over America and beyond. Yet I can’t reliably get a patient record from across town, let alone from a hospital in the same state, even if both places use the same brand of E.H.R
  • the leading E.H.R.s were never built with any understanding of the rituals of care or the user experience of physicians or nurses. A clinician will make roughly 4,000 keyboard clicks during a busy 10-hour emergency-room shift
  • In the process, our daily progress notes have become bloated cut-and-paste monsters that are inaccurate and hard to wade through. A half-page, handwritten progress note of the paper era might in a few lines tell you what a physician really thought
  • so much of the E.H.R., but particularly the physical exam it encodes, is a marvel of fiction, because we humans don’t want to leave a check box empty or leave gaps in a template.
  • For a study, my colleagues and I at Stanford solicited anecdotes from physicians nationwide about patients for whom an oversight in the exam (a “miss”) had resulted in real consequences, like diagnostic delay, radiation exposure, therapeutic or surgical misadventure, even death. They were the sorts of things that would leave no trace in the E.H.R. because the recorded exam always seems complete — and yet the omission would be glaring and memorable to other physicians involved in the subsequent care. We got more than 200 such anecdotes.
  • The reason for these errors? Most of them resulted from exams that simply weren’t done as claimed. “Food poisoning” was diagnosed because the strangulated hernia in the groin was overlooked, or patients were sent to the catheterization lab for chest pain because no one saw the shingles rash on the left chest.
  • I worry that such mistakes come because we’ve gotten trapped in the bunker of machine medicine. It is a preventable kind of failure
  • How we salivated at the idea of searchable records, of being able to graph fever trends, or white blood counts, or share records at a keystroke with another institution — “interoperability”
  • The seriously ill patient has entered another kingdom, an alternate universe, a place and a process that is frightening, infantilizing; that patient’s greatest need is both scientific state-of-the-art knowledge and genuine caring from another human being. Caring is expressed in listening, in the time-honored ritual of the skilled bedside exam — reading the body — in touching and looking at where it hurts and ultimately in localizing the disease for patients not on a screen, not on an image, not on a biopsy report, but on their bodies.
  • What if the computer gave the nurse the big picture of who he was both medically and as a person?
  • a professor at M.I.T. whose current interest in biomedical engineering is “bedside informatics,” marvels at the fact that in an I.C.U., a blizzard of monitors from disparate manufacturers display EKG, heart rate, respiratory rate, oxygen saturation, blood pressure, temperature and more, and yet none of this is pulled together, summarized and synthesized anywhere for the clinical staff to use
  • What these monitors do exceedingly well is sound alarms, an average of one alarm every eight minutes, or more than 180 per patient per day. What is our most common response to an alarm? We look for the button to silence the nuisance because, unlike those in a Boeing cockpit, say, our alarms are rarely diagnosing genuine danger.
  • By some estimates, more than 50 percent of physicians in the United States have at least one symptom of burnout, defined as a syndrome of emotional exhaustion, cynicism and decreased efficacy at work
  • It is on the increase, up by 9 percent from 2011 to 2014 in one national study. This is clearly not an individual problem but a systemic one, a 4,000-key-clicks-a-day problem.
  • The E.H.R. is only part of the issue: Other factors include rapid patient turnover, decreased autonomy, merging hospital systems, an aging population, the increasing medical complexity of patients. Even if the E.H.R. is not the sole cause of what ails us, believe me, it has become the symbol of burnout.
  • burnout is one of the largest predictors of physician attrition from the work force. The total cost of recruiting a physician can be nearly $90,000, but the lost revenue per physician who leaves is between $500,000 and $1 million, even more in high-paying specialties.
  • I hold out hope that artificial intelligence and machine-learning algorithms will transform our experience, particularly if natural-language processing and video technology allow us to capture what is actually said and done in the exam room.
  • as with any lab test, what A.I. will provide is at best a recommendation that a physician using clinical judgment must decide how to apply.
  • True clinical judgment is more than addressing the avalanche of blood work, imaging and lab tests; it is about using human skills to understand where the patient is in the trajectory of a life and the disease, what the nature of the patient’s family and social circumstances is and how much they want done.
  • Much of that is a result of poorly coordinated care, poor communication, patients falling through the cracks, knowledge not being transferred and so on, but some part of it is surely from failing to listen to the story and diminishing skill in reading the body as a text.
  • As he was nearing death, Avedis Donabedian, a guru of health care metrics, was asked by an interviewer about the commercialization of health care. “The secret of quality,” he replied, “is love.”
Javier E

Opinion | How Genetics Is Changing Our Understanding of 'Race' - The New York Times - 0 views

  • In 1942, the anthropologist Ashley Montagu published “Man’s Most Dangerous Myth: The Fallacy of Race,” an influential book that argued that race is a social concept with no genetic basis.
  • Beginning in 1972, genetic findings began to be incorporated into this argument. That year, the geneticist Richard Lewontin published an important study of variation in protein types in blood. He grouped the human populations he analyzed into seven “races” — West Eurasians, Africans, East Asians, South Asians, Native Americans, Oceanians and Australians — and found that around 85 percent of variation in the protein types could be accounted for by variation within populations and “races,” and only 15 percent by variation across them. To the extent that there was variation among humans, he concluded, most of it was because of “differences between individuals.” (A within-group versus between-group variance split of this kind is sketched in the code after this list.)
  • In this way, a consensus was established that among human populations there are no differences large enough to support the concept of “biological race.” Instead, it was argued, race is a “social construct,” a way of categorizing people that changes over time and across countries.
  • It is true that race is a social construct. It is also true, as Dr. Lewontin wrote, that human populations “are remarkably similar to each other” from a genetic point of view.
  • this consensus has morphed, seemingly without questioning, into an orthodoxy. The orthodoxy maintains that the average genetic differences among people grouped according to today’s racial terms are so trivial when it comes to any meaningful biological traits that those differences can be ignored.
  • With the help of these tools, we are learning that while race may be a social construct, differences in genetic ancestry that happen to correlate to many of today’s racial constructs are real.
  • I have deep sympathy for the concern that genetic discoveries could be misused to justify racism. But as a geneticist I also know that it is simply no longer possible to ignore average genetic differences among “races.”
  • Groundbreaking advances in DNA sequencing technology have been made over the last two decades
  • The orthodoxy goes further, holding that we should be anxious about any research into genetic differences among populations
  • You will sometimes hear that any biological differences among populations are likely to be small, because humans have diverged too recently from common ancestors for substantial differences to have arisen under the pressure of natural selection. This is not true. The ancestors of East Asians, Europeans, West Africans and Australians were, until recently, almost completely isolated from one another for 40,000 years or longer, which is more than sufficient time for the forces of evolution to work
  • I am worried that well-meaning people who deny the possibility of substantial biological differences among human populations are digging themselves into an indefensible position, one that will not survive the onslaught of science.
  • I am also worried that whatever discoveries are made — and we truly have no idea yet what they will be — will be cited as “scientific proof” that racist prejudices and agendas have been correct all along, and that those well-meaning people will not understand the science well enough to push back against these claims.
  • This is why it is important, even urgent, that we develop a candid and scientifically up-to-date way of discussing any such difference
  • While most people will agree that finding a genetic explanation for an elevated rate of disease is important, they often draw the line there. Finding genetic influences on a propensity for disease is one thing, they argue, but looking for such influences on behavior and cognition is another
  • Is performance on an intelligence test or the number of years of school a person attends shaped by the way a person is brought up? Of course. But does it measure something having to do with some aspect of behavior or cognition? Almost certainly.
  • Recent genetic studies have demonstrated differences across populations not just in the genetic determinants of simple traits such as skin color, but also in more complex traits like bodily dimensions and susceptibility to diseases.
  • in Iceland, there has been measurable genetic selection against the genetic variations that predict more years of education in that population just within the last century.
  • consider what kinds of voices are filling the void that our silence is creating
  • Nicholas Wade, a longtime science journalist for The New York Times, rightly notes in his 2014 book, “A Troublesome Inheritance: Genes, Race and Human History,” that modern research is challenging our thinking about the nature of human population differences. But he goes on to make the unfounded and irresponsible claim that this research is suggesting that genetic factors explain traditional stereotypes.
  • As 139 geneticists (including myself) pointed out in a letter to The New York Times about Mr. Wade’s book, there is no genetic evidence to back up any of the racist stereotypes he promotes.
  • Another high-profile example is James Watson, the scientist who in 1953 co-discovered the structure of DNA, and who was forced to retire as head of the Cold Spring Harbor Laboratories in 2007 after he stated in an interview — without any scientific evidence — that research has suggested that genetic factors contribute to lower intelligence in Africans than in Europeans.
  • What makes Dr. Watson’s and Mr. Wade’s statements so insidious is that they start with the accurate observation that many academics are implausibly denying the possibility of average genetic differences among human populations, and then end with a claim — backed by no evidence — that they know what those differences are and that they correspond to racist stereotypes
  • They use the reluctance of the academic community to openly discuss these fraught issues to provide rhetorical cover for hateful ideas and old racist canards.
  • This is why knowledgeable scientists must speak out. If we abstain from laying out a rational framework for discussing differences among populations, we risk losing the trust of the public and we actively contribute to the distrust of expertise that is now so prevalent.
  • If scientists can be confident of anything, it is that whatever we currently believe about the genetic nature of differences among populations is most likely wrong.
  • For example, my laboratory discovered in 2016, based on our sequencing of ancient human genomes, that “whites” are not derived from a population that existed from time immemorial, as some people believe. Instead, “whites” represent a mixture of four ancient populations that lived 10,000 years ago and were each as different from one another as Europeans and East Asians are today.
  • For me, a natural response to the challenge is to learn from the example of the biological differences that exist between males and females
  • The differences between the sexes are far more profound than those that exist among human populations, reflecting more than 100 million years of evolution and adaptation. Males and females differ by huge tracts of genetic material
  • How do we accommodate the biological differences between men and women? I think the answer is obvious: We should both recognize that genetic differences between males and females exist and we should accord each sex the same freedoms and opportunities regardless of those differences
  • fulfilling these aspirations in practice is a challenge. Yet conceptually it is straightforward.
  • Compared with the enormous differences that exist among individuals, differences among populations are on average many times smaller, so it should be only a modest challenge to accommodate a reality in which the average genetic contributions to human traits differ.
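A minimal sketch (mine, not from the op-ed) of the variance decomposition behind the Lewontin 85/15 figure cited above: by the law of total variance, total variation splits into a within-group piece and a between-group piece. The populations, means, and sample sizes below are invented, chosen only so the between-group share lands near 15 percent.

```python
# Illustrative only: invented normal draws stand in for Lewontin's protein
# data; real analyses use allele frequencies and F-statistics.
import numpy as np

rng = np.random.default_rng(0)

# Three hypothetical populations whose means differ modestly relative to
# the spread among individuals within each population.
groups = [rng.normal(loc=m, scale=1.0, size=5000) for m in (0.0, 0.5, 1.0)]

grand_mean = np.concatenate(groups).mean()

# Law of total variance with equal group sizes: total = within + between.
within = np.mean([g.var() for g in groups])                        # ~1.0
between = np.mean([(g.mean() - grand_mean) ** 2 for g in groups])  # ~0.17
total = within + between

print(f"within-group share:  {within / total:.0%}")   # ~86%
print(f"between-group share: {between / total:.0%}")  # ~14%
```

Even with visibly different group means, most of the variation sits within groups, which is the shape of the result the excerpt describes.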
Javier E

In Defense of Facts - The Atlantic - 1 views

  • over 13 years, he has published a series of anthologies—of the contemporary American essay, of the world essay, and now of the historical American essay—that misrepresents what the essay is and does, that falsifies its history, and that contains, among its numerous selections, very little one would reasonably classify within the genre. And all of this to wide attention and substantial acclaim
  • D’Agata’s rationale for his “new history,” to the extent that one can piece it together from the headnotes that preface each selection, goes something like this. The conventional essay, nonfiction as it is, is nothing more than a delivery system for facts. The genre, as a consequence, has suffered from a chronic lack of critical esteem, and thus of popular attention. The true essay, however, deals not in knowing but in “unknowing”: in uncertainty, imagination, rumination; in wandering and wondering; in openness and inconclusion
  • Every piece of this is false in one way or another.
  • ...31 more annotations...
  • There are genres whose principal business is fact—journalism, history, popular science—but the essay has never been one of them. If the form possesses a defining characteristic, it is that the essay makes an argument
  • That argument can rest on fact, but it can also rest on anecdote, or introspection, or cultural interpretation, or some combination of all these and more
  • what makes a personal essay an essay and not just an autobiographical narrative is precisely that it uses personal material to develop, however speculatively or intuitively, a larger conclusion.
  • Nonfiction is the source of the narcissistic injury that seems to drive him. “Nonfiction,” he suggests, is like saying “not art,” and if D’Agata, who has himself published several volumes of what he refers to as essays, desires a single thing above all, it is to be known as a maker of art.
  • D’Agata tells us that the term has been in use since about 1950. In fact, it was coined in 1867 by the staff of the Boston Public Library and entered widespread circulation after the turn of the 20th century. The concept’s birth and growth, in other words, did coincide with the rise of the novel to literary preeminence, and nonfiction did long carry an odor of disesteem. But that began to change at least as long ago as the 1960s, with the New Journalism and the “nonfiction novel.”
  • What we really seem to get in D’Agata’s trilogy, in other words, is a compendium of writing that the man himself just happens to like, or that he wants to appropriate as a lineage for his own work.
  • What it’s like is abysmal: partial to trivial formal experimentation, hackneyed artistic rebellion, opaque expressions of private meaning, and modish political posturing
  • If I bought a bag of chickpeas and opened it to find that it contained some chickpeas, some green peas, some pebbles, and some bits of goat poop, I would take it back to the store. And if the shopkeeper said, “Well, they’re ‘lyric’ chickpeas,” I would be entitled to say, “You should’ve told me that before I bought them.”
  • when he isn’t cooking quotes or otherwise fudging the record, he is simply indifferent to issues of factual accuracy, content to rely on a mixture of guesswork, hearsay, and his own rather faulty memory.
  • His rejoinders are more commonly a lot more hostile—not to mention juvenile (“Wow, Jim, your penis must be so much bigger than mine”), defensive, and in their overarching logic, deeply specious. He’s not a journalist, he insists; he’s an essayist. He isn’t dealing in anything as mundane as the facts; he’s dealing in “art, dickhead,” in “poetry,” and there are no rules in art.
  • D’Agata replies that there is something between history and fiction. “We all believe in emotional truths that could never hold water, but we still cling to them and insist on their relevance.” The “emotional truths” here, of course, are D’Agata’s, not Presley’s. If it feels right to say that tae kwon do was invented in ancient India (not modern Korea, as Fingal discovers it was), then that is when it was invented. The term for this is truthiness.
  • D’Agata clearly wants to have it both ways. He wants the imaginative freedom of fiction without relinquishing the credibility (and for some readers, the significance) of nonfiction. He has his fingers crossed, and he’s holding them behind his back. “John’s a different kind of writer,” an editor explains to Fingal early in the book. Indeed he is. But the word for such a writer isn’t essayist. It’s liar.
  • The point of all this nonsense, and a great deal more just like it, is to advance an argument about the essay and its history. The form, D’Agata’s story seems to go, was neglected during the long ages that worshiped “information” but slowly emerged during the 19th and 20th centuries as artists learned to defy convention and untrammel their imaginations, coming fully into its own over the past several decades with the dawning recognition of the illusory nature of knowledge.
  • Most delectable is when he speaks about “the essay’s traditional ‘five-paragraph’ form.” I almost fell off my chair when I got to that one. The five-paragraph essay—introduction, three body paragraphs, conclusion; stultifying, formulaic, repetitive—is the province of high-school English teachers. I have never met one outside of a classroom, and like any decent college writing instructor, I never failed to try to wean my students away from them. The five-paragraph essay isn’t an essay; it’s a paper.
  • What he fails to understand is that facts and the essay are not antagonists but siblings, offspring of the same historical moment
  • —by ignoring the actual contexts of his selections, and thus their actual intentions—D’Agata makes the familiar contemporary move of imposing his own conceits and concerns upon the past. That is how ethnography turns into “song,” Socrates into an essayist, and the whole of literary history into a single man’s “emotional truth.”
  • The history of the essay is indeed intertwined with “facts,” but in a very different way than D’Agata imagines. D’Agata’s mind is Manichaean. Facts bad, imagination good
  • When he refers to his selections as essays, he does more than falsify the essay as a genre. He also effaces all the genres that they do belong to: not only poetry, fiction, journalism, and travel, but, among his older choices, history, parable, satire, the sermon, and more — genres that possess their own particular traditions, conventions, and expectations
  • one needs to recognize that facts themselves have a history.
  • Facts are not just any sort of knowledge, such as also existed in the ancient and medieval worlds. A fact is a unit of information that has been established through uniquely modern methods
  • Fact, etymologically, means “something done”—that is, an act or deed
  • It was only in the 16th century—an age that saw the dawning of a new empirical spirit, one that would issue not only in modern science, but also in modern historiography, journalism, and scholarship—that the word began to signify our current sense of “real state of things.”
  • It was at this exact time, and in this exact spirit, that the essay was born. What distinguished Montaigne’s new form—his “essays” or attempts to discover and publish the truth about himself—was not that it was personal (precursors like Seneca also wrote personally), but that it was scrupulously investigative. Montaigne was conducting research into his soul, and he was determined to get it right.
  • His famous motto, Que sais-je?—“What do I know?”—was an expression not of radical doubt but of the kind of skepticism that fueled the modern revolution in knowledge.
  • It is no coincidence that the first English essayist, Galileo’s contemporary Francis Bacon, was also the first great theorist of science.
  • That knowledge is problematic—difficult to establish, labile once created, often imprecise and always subject to the limitations of the human mind—is not the discovery of postmodernism. It is a foundational insight of the age of science, of fact and information, itself.
  • The point is not that facts do not exist, but that they are unstable (and are becoming more so as the pace of science quickens). Knowledge is always an attempt. Every fact was established by an argument—by observation and interpretation—and is susceptible to being overturned by a different one
  • A fact, you might say, is nothing more than a frozen argument, the place where a given line of investigation has come temporarily to rest.
  • Sometimes those arguments are scientific papers. Sometimes they are news reports, which are arguments with everything except the conclusions left out (the legwork, the notes, the triangulation of sources—the research and the reasoning).
  • When it comes to essays, though, we don’t refer to those conclusions as facts. We refer to them as wisdom, or ideas
  • the essay draws its strength not from separating reason and imagination but from putting them in conversation. A good essay moves fluidly between thought and feeling. It subjects the personal to the rigors of the intellect and the discipline of external reality. The truths it finds are more than just emotional.
Javier E

How will humanity endure the climate crisis? I asked an acclaimed sci-fi writer | Danie... - 0 views

  • To really grasp the present, we need to imagine the future – then look back from it to better see the now. The angry climate kids do this naturally. The rest of us need to read good science fiction. A great place to start is Kim Stanley Robinson.
  • I read 11 of his books, culminating in his instant classic The Ministry for the Future, which imagines several decades of climate politics starting this decade.
  • The first lesson of his books is obvious: climate is the story.
  • What Ministry and other Robinson books do is make us slow down the apocalyptic highlight reel, letting the story play in human time for years, decades, centuries.
  • he wants leftists to set aside their differences, and put a “time stamp on [their] political view” that recognizes how urgent things are. Looking back from 2050 leaves little room for abstract idealism. Progressives need to form “a united front,” he told me. “It’s an all-hands-on-deck situation; species are going extinct and biomes are dying. The catastrophes are here and now, so we need to make political coalitions.”
  • he does want leftists – and everyone else – to take the climate emergency more seriously. He thinks every big decision, every technological option, every political opportunity, warrants climate-oriented scientific scrutiny. Global justice demands nothing less.
  • He wants to legitimize geoengineering, even in forms as radical as blasting limestone dust into the atmosphere for a few years to temporarily dim the heat of the sun
  • Robinson believes that once progressives internalize the insight that the economy is a social construct just like anything else, they can determine – based on the contemporary balance of political forces, ecological needs, and available tools – the most efficient methods for bringing carbon and capital into closer alignment.
  • We live in a world where capitalist states and giant companies largely control science.
  • Yes, we need to consider technologies with an open mind. That includes a frank assessment of how the interests of the powerful will shape how technologies develop
  • Robinson’s imagined future suggests a short-term solution that fits his dreams of a democratic, scientific politics: planning, of both the economy and planet.
  • it’s borrowed from Robinson’s reading of ecological economics. That field’s premise is that the economy is embedded in nature – that its fundamental rules aren’t supply and demand, but the laws of physics, chemistry, biology.
  • The upshot of Robinson’s science fiction is understanding that grand ecologies and human economies are always interdependent.
  • Robinson seems to be urging all of us to treat every possible technological intervention – from expanding nuclear energy, to pumping meltwater out from under glaciers, to dumping iron filings in the ocean – from a strictly scientific perspective: reject dogma, evaluate the evidence, ignore the profit motive.
  • Robinson’s elegant solution, as rendered in Ministry, is carbon quantitative easing. The idea is that central banks invent a new currency; to earn the carbon coins, institutions must show that they’re sucking excess carbon down from the sky. In his novel, this happens thanks to a series of meetings between United Nations technocrats and central bankers. But the technocrats only win the arguments because there’s enough rage, protest and organizing in the streets to force the bankers’ hand.
  • Seen from Mars, then, the problem of 21st-century climate economics is to sync public and private systems of capital with the ecological system of carbon.
  • Success will snowball; we’ll democratically plan more and more of the eco-economy.
  • Robinson thus gets that climate politics are fundamentally the politics of investment – extremely big investments. As he put it to me, carbon quantitative easing isn’t the “silver bullet solution,” just one of several green investment mechanisms we need to experiment with.
  • Robinson shares the great anarchist dream. “Everybody on the planet has an equal amount of power, and comfort, and wealth,” he said. “It’s an obvious goal” but there’s no shortcut.
  • In his political economy, like his imagined settling of Mars, Robinson tries to think like a bench scientist – an experimentalist, wary of unifying theories, eager for many groups to try many things.
  • there’s something liberating about Robinson’s commitment to the scientific method: reasonable people can shed their prejudices, consider all the options and act strategically.
  • The years ahead will be brutal. In Ministry, tens of millions of people die in disasters – and that’s in a scenario that Robinson portrays as relatively optimistic
  • when things get that bad, people take up arms. In Ministry’s imagined future, the rise of weaponized drones allows shadowy environmentalists to attack and kill fossil capitalists. Many – including myself – have used the phrase “eco-terrorism” to describe that violence. Robinson pushed back when we talked. “What if you call that resistance to capitalism realism?” he asked. “What if you call that, well, ‘Freedom fighters’?”
  • Robinson insists that he doesn’t condone the violence depicted in his book; he simply can’t imagine a realistic account of 21st century climate politics in which it doesn’t occur.
  • Malm writes that it’s shocking how little political violence there has been around climate change so far, given how brutally the harms will be felt in communities of color, especially in the global south, who bear no responsibility for the cataclysm, and where political violence has been historically effective in anticolonial struggles.
  • In Ministry, there’s a lot of violence, but mostly off-stage. We see enough to appreciate Robinson’s consistent vision of most people as basically thoughtful: the armed struggle is vicious, but its leaders are reasonable, strategic.
  • the implications are straightforward: there will be escalating violence, escalating state repression and increasing political instability. We must plan for that too.
  • maybe that’s the tension that is Ministry’s greatest lesson for climate politics today. No document that could win consensus at a UN climate summit will be anywhere near enough to prevent catastrophic warming. We can only keep up with history, and clearly see what needs to be done, by tearing our minds out of the present and imagining more radical future vantage points
  • If millions of people around the world can do that, in an increasingly violent era of climate disasters, those people could generate enough good projects to add up to something like a rational plan – and buy us enough time to stabilize the climate, while wresting power from the 1%.
  • Robinson’s optimistic view is that human nature is fundamentally thoughtful, and that it will save us – that the social process of arguing and politicking, with minds as open as we can manage, is a project older than capitalism, and one that will eventually outlive it
  • It’s a perspective worth thinking about – so long as we’re also organizing.
  • Daniel Aldana Cohen is assistant professor of sociology at the University of California, Berkeley, where he directs the Socio-Spatial Climate Collaborative. He is the co-author of A Planet to Win: Why We Need a Green New Deal
Javier E

Is Science Kind of a Scam? - The New Yorker - 1 views

  • No well-tested scientific concept is more astonishing than the one that gives its name to a new book by the Scientific American contributing editor George Musser, “Spooky Action at a Distance.”
  • The ostensible subject is the mechanics of quantum entanglement; the actual subject is the entanglement of its observers.
  • His question isn’t so much how this weird thing can be true as why, given that this weird thing had been known about for so long, so many scientists were so reluctant to confront it. What keeps a scientific truth from spreading?
  • it is as if two magic coins, flipped at different corners of the cosmos, always came up heads or tails together. (The spooky action takes place only in the context of simultaneous measurement. The particles share states, but they don’t send signals.)
  • fashion, temperament, zeitgeist, and sheer tenacity affected the debate, along with evidence and argument.
  • The certainty that spooky action at a distance takes place, Musser says, challenges the very notion of “locality,” our intuitive sense that some stuff happens only here, and some stuff over there. What’s happening isn’t really spooky action at a distance; it’s spooky distance, revealed through an action.
  • Why, then, did Einstein’s question get excluded for so long from reputable theoretical physics? The reasons, unfolding through generations of physicists, have several notable social aspects,
  • What started out as a reductio ad absurdum became proof that the cosmos is in certain ways absurd. What began as a bug became a feature and is now a fact.
  • “If poetry is emotion recollected in tranquility, then science is tranquility recollected in emotion.” The seemingly neutral order of the natural world becomes the sounding board for every passionate feeling the physicist possesses.
  • Musser explains that the big issue was settled mainly by being pushed aside. Generational imperatives trumped evidentiary ones. The things that made Einstein the lovable genius of popular imagination were also the things that made him an easy object of condescension. The hot younger theorists patronized him,
  • There was never a decisive debate, never a hallowed crucial experiment, never even a winning argument to settle the case, with one physicist admitting, “Most physicists (including me) accept that Bohr won the debate, although like most physicists I am hard pressed to put into words just how it was done.”
  • Arguing about non-locality went out of fashion, in this account, almost the way “Rock Around the Clock” displaced Sinatra from the top of the charts.
  • The same pattern of avoidance and talking-past and taking on the temper of the times turns up in the contemporary science that has returned to the possibility of non-locality.
  • the revival of “non-locality” as a topic in physics may be due to our finding the metaphor of non-locality ever more palatable: “Modern communications technology may not technically be non-local but it sure feels that it is.”
  • Living among distant connections, where what happens in Bangalore happens in Boston, we are more receptive to the idea of such a strange order in the universe.
  • The “indeterminacy” of the atom was, for younger European physicists, “a lesson of modernity, an antidote to a misplaced Enlightenment trust in reason, which German intellectuals in the 1920’s widely held responsible for their country’s defeat in the First World War.” The tonal and temperamental difference between the scientists was as great as the evidence they called on.
  • Science isn’t a slot machine, where you drop in facts and get out truths. But it is a special kind of social activity, one where lots of different human traits—obstinacy, curiosity, resentment of authority, sheer cussedness, and a grudging readiness to submit pet notions to popular scrutiny—end by producing reliable knowledge
  • What was magic became mathematical and then mundane. “Magical” explanations, like spooky action, are constantly being revived and rebuffed, until, at last, they are reinterpreted and accepted. Instead of a neat line between science and magic, then, we see a jumpy, shifting boundary that keeps getting redrawn
  • Real-world demarcations between science and magic, Musser’s story suggests, are like Bugs’s: made on the move and as much a trap as a teaching aid.
  • In the past several decades, certainly, the old lines between the history of astrology and astronomy, and between alchemy and chemistry, have been blurred; historians of the scientific revolution no longer insist on a clean break between science and earlier forms of magic.
  • Where once logical criteria between science and non-science (or pseudo-science) were sought and taken seriously—Karl Popper’s criterion of “falsifiability” was perhaps the most famous, insisting that a sound theory could, in principle, be proved wrong by one test or another—many historians and philosophers of science have come to think that this is a naïve view of how the scientific enterprise actually works.
  • They see a muddle of coercion, old magical ideas, occasional experiment, hushed-up failures—all coming together in a social practice that gets results but rarely follows a definable logic.
  • Yet the old notion of a scientific revolution that was really a revolution is regaining some credibility.
  • David Wootton, in his new, encyclopedic history, “The Invention of Science” (Harper), recognizes the blurred lines between magic and science but insists that the revolution lay in the public nature of the new approach.
  • What killed alchemy was the insistence that experiments must be openly reported in publications which presented a clear account of what had happened, and they must then be replicated, preferably before independent witnesses.
  • Wootton, while making little of Popper’s criterion of falsifiability, makes it up to him by borrowing a criterion from his political philosophy. Scientific societies are open societies. One day the lunar tides are occult, the next day they are science, and what changes is the way in which we choose to talk about them.
  • Wootton also insists, against the grain of contemporary academia, that single observed facts, what he calls “killer facts,” really did polish off antique authorities
  • once we agree that the facts are facts, they can do amazing work. Traditional Ptolemaic astronomy, in place for more than a millennium, was destroyed by what Galileo discovered about the phases of Venus. That killer fact “serves as a single, solid, and strong argument to establish its revolution around the Sun, such that no room whatsoever remains for doubt,” Galileo wrote, and Wootton adds, “No one was so foolish as to dispute these claims.”
  • Several things flow from Wootton’s view. One is that “group think” in the sciences is often true think. Science has always been made in a cloud of social networks.
  • There has been much talk in the pop-sci world of “memes”—ideas that somehow manage to replicate themselves in our heads. But perhaps the real memes are not ideas or tunes or artifacts but ways of making them—habits of mind rather than products of mind
  • science, then, a club like any other, with fetishes and fashions, with schemers, dreamers, and blackballed applicants? Is there a real demarcation to be made between science and every other kind of social activity
  • The claim that basic research is valuable because it leads to applied technology may be true but perhaps is not at the heart of the social use of the enterprise. The way scientists do think makes us aware of how we can think
Javier E

Can Political Theology Save Secularism? | Religion & Politics - 0 views

  • Osama bin Laden had forced us to admit that, while the U.S. may legally separate church and state, it cannot do so intellectually. Beneath even the most ostensibly faithless of our institutions and our polemicists lie crouching religious lions, ready to devour the infidels who set themselves in opposition to the theology of the free market and the messianic march of democracy
  • As our political system depends on a shaky separation between religion and politics that has become increasingly unstable, scholars are sensing the deep disillusionment afoot and trying to chart a way out.
  • At its best, Religion for Atheists is a chronicle of the smoldering heap that liberal capitalism has made of the social rhythms that used to serve as a buffer between humans and the random cruelty of the universe. Christian and Jewish traditions, Botton argues, reinforced the ideas that people are morally deficient, that disappointment and suffering are normative, and that death is inevitable. The abandonment of those realities for the delusions of the self-made individual, the fantasy superman who can bend reality to his will if he works hard enough and is positive enough, leaves little mystery to why we are perpetually stressed out, overworked, and unsatisfied.
  • Botton’s central obsession is the insane ways bourgeois postmoderns try to live, namely in a perpetual upward swing of ambition and achievement, where failure indicates character deficiency despite an almost total lack of social infrastructure to help us navigate careers, relationships, parenting, and death. But he seems uninterested in how those structures were destroyed or what it might take to rebuild them
  • Botton wants to keep bourgeois secularism and add a few new quasi-religious social routines. Quasi-religious social routines may indeed be a part of the solution, as we shall see, but they cannot be simply flung atop a regime as indifferent to human values as liberal capitalism.
  • Citizens see the structure behind the façade and lose faith in the myth of the state as a dispassionate, egalitarian arbiter of conflict. Once theological passions can no longer be sublimated in material affluence and the fiction of representative democracy, it is little surprise to see them break out in movements that are, on both the left and the right, explicitly hostile to the liberal state.
  • Western politics have an auto-immune disorder: they are structured to pretend that their notions of reason, right, and sovereignty are detached from a deeply theological heritage. When pressed by war and economic dysfunction, liberal ideas prove as compatible with zealotry and domination as any others.
  • Secularism is not strictly speaking a religion, but it represents an orientation toward religion that serves the theological purpose of establishing a hierarchy of legitimate social values. Religion must be “privatized” in liberal societies to keep it out of the way of economic functioning. In this view, legitimate politics is about making the trains run on time and reducing the federal deficit; everything else is radicalism. A surprising number of American intellectuals are able to persuade themselves that this vision of politics is sufficient, even though the train tracks are crumbling, the deficit continues to gain on the GDP, and millions of citizens are sinking into the dark mire of debt and permanent unemployment.
  • Critchley has made a career forging a philosophical account of human ethical responsibility and political motivation. His question is: after the rational hopes of the Enlightenment corroded into nihilism, how do humans write a believable story about what their existence means in the world? After the death of God, how do we account for our feelings of moral responsibility, and how might that account motivate us to resist the deadening political system we face?
  • The question is what to do in the face of the unmistakable religious and political nihilism currently besetting Western democracies.
  • both Botton and Critchley believe the solution involves what Derrida called a “religion without religion”—for Critchley a “faith of the faithless,” for Botton a “religion for atheists.”
  • a new political becoming will require a complete break with the status quo, a new political sphere that we understand as our own deliberate creation, uncoupled from the theological fictions of natural law or God-given rights
  • Critchley proposes as the foundation of politics “the poetic construction of a supreme fiction … a fiction that we know to be a fiction and yet in which we believe nonetheless.” Following the French philosopher Alain Badiou and the Apostle Paul, Critchley conceives political “truth” as something like fidelity: a radical loyalty to the historical moment where true politics came to life.
  • But unlike an evangelist, Critchley understands that attempting to fill the void with traditional religion is to slip back into a slumber that reinforces institutions desperate to maintain the political and economic status quo. Only in our condition of brokenness and finitude, uncomforted by promises of divine salvation, can we be open to a connection with others that might mark the birth of political resistance
  • This is the crux of the difference between Critchley’s radical faithless faith and Botton’s bourgeois secularism. Botton has imagined religion as little more than a coping mechanism for the “terrifying degrees of pain which arise from our vulnerability,” seemingly unaware that the pain and vulnerability may intensify many times over. It won’t be enough simply to sublimate our terror in confessional restaurants and atheist temples. The recognition of finitude, the weight of our nothingness, can hollow us into a different kind of self: one without illusions or reputations or private property, one with nothing but radical openness to others. Only then can there be the possibility of meaning, of politics, of hope.
Javier E

If We Knew Then What We Know Now About Covid, What Would We Have Done Differently? - WSJ - 0 views

  • For much of 2020, doctors and public-health officials thought the virus was transmitted through droplets emitted from one person’s mouth and touched or inhaled by another person nearby. We were advised to stay at least 6 feet away from each other to avoid the droplets
  • A small cadre of aerosol scientists had a different theory. They suspected that Covid-19 was transmitted not so much by droplets but by smaller infectious aerosol particles that could travel on air currents way farther than 6 feet and linger in the air for hours. Some of the aerosol particles, they believed, were small enough to penetrate the cloth masks widely used at the time.
  • The group had a hard time getting public-health officials to embrace their theory. For one thing, many of them were engineers, not doctors.
  • “My first and biggest wish is that we had known early that Covid-19 was airborne,”
  • “Once you’ve realized that, it informs an entirely different strategy for protection.” Masking, ventilation and air cleaning become key, as well as avoiding high-risk encounters with strangers, he says.
  • Instead of washing our produce and wearing hand-sewn cloth masks, we could have made sure to avoid superspreader events and worn more-effective N95 masks or their equivalent. “We could have made more of an effort to develop and distribute N95s to everyone,” says Dr. Volckens. “We could have had an Operation Warp Speed for masks.”
  • We didn’t realize how important clear, straight talk would be to maintaining public trust. If we had, we could have explained the biological nature of a virus and warned that Covid-19 would change in unpredictable ways.  
  • We didn’t know how difficult it would be to get the basic data needed to make good public-health and medical decisions. If we’d had the data, we could have more effectively allocated scarce resources
  • In the face of a pandemic, he says, the public needs an early basic and blunt lesson in virology
  • The virus mutates, and since we’ve never seen this particular virus before, we will need to take unprecedented actions and we will make mistakes, he says.
  • Since the public wasn’t prepared, “people weren’t able to pivot when the knowledge changed,”
  • By the time the vaccines became available, public trust had been eroded by myriad contradictory messages—about the usefulness of masks, the ways in which the virus could be spread, and whether the virus would have an end date.
  • The absence of a single, trusted source of clear information meant that many people gave up on trying to stay current or dismissed the different points of advice as partisan and untrustworthy.
  • “The science is really important, but if you don’t get the trust and communication right, it can only take you so far,”
  • people didn’t know whether it was OK to visit elderly relatives or go to a dinner party.
  • Doctors didn’t know what medicines worked. Governors and mayors didn’t have the information they needed to know whether to require masks. School officials lacked the information needed to know whether it was safe to open schools.
  • Had we known that even a mild case of Covid-19 could result in long Covid and other serious chronic health problems, we might have calculated our own personal risk differently and taken more care.
  • just months before the outbreak of the pandemic, the Council of State and Territorial Epidemiologists released a white paper detailing the urgent need to modernize the nation’s public-health system still reliant on manual data collection methods—paper records, phone calls, spreadsheets and faxes.
  • While the U.K. and Israel were collecting and disseminating Covid case data promptly, in the U.S. the CDC couldn’t. It didn’t have a centralized health-data collection system like those countries did, but rather relied on voluntary reporting by underfunded state and local public-health systems and hospitals.
  • doctors and scientists say they had to depend on information from Israel, the U.K. and South Africa to understand the nature of new variants and the effectiveness of treatments and vaccines. They relied heavily on private data collection efforts such as a dashboard at Johns Hopkins University’s Coronavirus Resource Center that tallied cases, deaths and vaccine rates globally.
  • For much of the pandemic, doctors, epidemiologists, and state and local governments had no way to find out in real time how many people were contracting Covid-19, getting hospitalized and dying
  • To solve the data problem, Dr. Ranney says, we need to build a public-health system that can collect and disseminate data and acts like an electrical grid. The power company sees a storm coming and lines up repair crews.
  • If we’d known how damaging lockdowns would be to mental health, physical health and the economy, we could have taken a more strategic approach to closing businesses and keeping people at home.
  • But many doctors say they were crucial at the start of the pandemic to give doctors and hospitals a chance to figure out how to accommodate and treat the avalanche of very sick patients.
  • The measures reduced deaths, according to many studies—but at a steep cost.
  • The lockdowns didn’t have to be so harmful, some scientists say. They could have been more carefully tailored to protect the most vulnerable, such as those in nursing homes and retirement communities, and to minimize widespread disruption.
  • Lockdowns could, during Covid-19 surges, close places such as bars and restaurants where the virus is most likely to spread, while allowing other businesses to stay open with safety precautions like masking and ventilation in place.  
  • The key isn’t to have the lockdowns last a long time, but that they are deployed earlier,
  • If England’s March 23, 2020, lockdown had begun one week earlier, the measure would have nearly halved the estimated 48,600 deaths in the first wave of England’s pandemic
  • If the lockdown had begun a week later, deaths in the same period would have more than doubled. (A toy growth-rate calculation after this list shows why a one-week shift changes the toll by about a factor of two.)
  • It is possible to avoid lockdowns altogether. Taiwan, South Korea and Hong Kong—all countries experienced at handling disease outbreaks such as SARS in 2003 and MERS—avoided lockdowns by widespread masking, tracking the spread of the virus through testing and contact tracing and quarantining infected individuals.
  • With good data, Dr. Ranney says, she could have better managed staffing and taken steps to alleviate the strain on doctors and nurses by arranging child care for them.
  • Early in the pandemic, public-health officials were clear: The people at increased risk for severe Covid-19 illness were older, immunocompromised, had chronic kidney disease, Type 2 diabetes or serious heart conditions
  • It had the unfortunate effect of giving a false sense of security to people who weren’t in those high-risk categories. Once case rates dropped, vaccines became available and fear of the virus wore off, many people let their guard down, ditching masks, spending time in crowded indoor places.
  • it has become clear that even people with mild cases of Covid-19 can develop long-term serious and debilitating diseases. Long Covid, whose symptoms include months of persistent fatigue, shortness of breath, muscle aches and brain fog, hasn’t been the virus’s only nasty surprise
  • In February 2022, a study found that, for at least a year, people who had Covid-19 had a substantially increased risk of heart disease—even people who were younger and had not been hospitalized, as well as of respiratory conditions.
  • Some scientists now suspect that Covid-19 might be capable of affecting nearly every organ system in the body. It may play a role in the activation of dormant viruses and latent autoimmune conditions people didn’t know they had
  •  A blood test, he says, would tell people if they are at higher risk of long Covid and whether they should have antivirals on hand to take right away should they contract Covid-19.
  • If the risks of long Covid had been known, would people have reacted differently, especially given the confusion over masks and lockdowns and variants? Perhaps. At the least, many people might not have assumed they were out of the woods just because they didn’t have any of the risk factors.
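A toy calculation (my assumptions, not the study’s epidemiological model) for the lockdown-timing claim above: if infections are doubling about once a week before a lockdown, first-wave deaths scale with the number of people already infected when the lockdown starts, so shifting the lockdown by one week multiplies or divides the toll by roughly two.

```python
# Toy exponential-growth model. The 7-day doubling time is an assumption;
# 48,600 is the first-wave England death estimate quoted in the excerpt.
DOUBLING_TIME_DAYS = 7
BASELINE_DEATHS = 48_600

def deaths_if_lockdown_shifted(shift_days: int) -> float:
    """Deaths scale with infections at lockdown, which grow as 2**(t/Td)."""
    return BASELINE_DEATHS * 2 ** (shift_days / DOUBLING_TIME_DAYS)

for shift in (-7, 0, 7):
    print(f"lockdown shifted {shift:+d} days -> "
          f"~{deaths_if_lockdown_shifted(shift):,.0f} deaths")
# -7 days -> ~24,300 (roughly half); +7 days -> ~97,200 (roughly double)
```

The model is deliberately crude (constant growth, no saturation), but it shows why the article’s point about deploying lockdowns earlier dominates arguments about how long they last.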
Javier E

Grayson Perry's Reith Lectures: Who decides what makes art good? - FT.com - 0 views

  • I think this is one of the most burning issues around art – how do we tell if something is good? And who tells us that it’s good?
  • many of the methods of judging are very problematic and many of the criteria used to assess art are conflicting. We have financial value, popularity, art historical significance, or aesthetic sophistication. All these things could be at odds with each other.
  • A visitor to an exhibition like the Hockney one, if they were judging the quality of the art, might use a word like “beauty”. Now, if you use that kind of word in the art world, be very careful. There will be sucking of teeth and mournful shaking of heads because their hero, the artist Marcel Duchamp, of “urinal” fame, he said, “Aesthetic delectation is the danger to be avoided.” In the art world sometimes it can feel as if to judge something on its beauty, on its aesthetic merits, is as if you’re buying into something politically incorrect, into sexism, into racism, colonialism, class privilege. It almost feels it’s loaded, because where does our idea of beauty come from?
  • beauty is very much about familiarity and it’s reinforcing an idea we have already. It’s like when we go on holiday, all we really want to do is take the photograph that we’ve seen in the brochure. Because our idea of beauty is constructed, by family, friends, education, nationality, race, religion, politics, all these things
  • I have found the 21st-century version of the Venetian secret and it is a mathematical formula. What you do, you get a half-decent, non-offensive kind of idea, then you times it by the number of studio assistants, and then you divide it with an ambitious art dealer, and that equals number of oligarchs and hedge fund managers in the world.
  • the nearest we have to an empirical measure of art that actually does exist is the market. By that reckoning, Cézanne’s “Card Players” is the most beautiful lovely painting in the world. I find it a little bit clunky-kitsch but that’s me. It’s worth $260m.
  • The opposite arguments are that it’s art for art’s sake and that’s a very idealistic position to take. Clement Greenberg, a famous art critic in the 1950s, said that art will always be tied to money by an umbilical cord of gold, either state money or market money. I’m pragmatic about it: one of my favourite quotes is you’ll never have a good art career unless your work fits into the elevator of a New York apartment block.
  • there’s one thing about that red painting that ends up in Sotheby’s. It’s not just any old red painting. It is a painting that has been validated. This is an important word in the art world and the big question is: who validates? There is quite a cast of characters in this validation chorus that will kind of decide what is good art. They are a kind of panel, if you like, that decides on what is good quality, what are we going to end up looking at?
  • They include artists, teachers, dealers, collectors, critics, curators, the media, even the public maybe. And they form this lovely consensus around what is good art.
  • there were four stages to the rise of an artist. Peers, serious critics and collectors, dealers, then the public.
  • Another member of that cast of validating characters is the collectors. In the 1990s, if Charles Saatchi just put his foot over the threshold of your exhibition, that was it. The media was agog and he would come in and Hoover it up. You do want the heavyweight collector to buy your work because that gives it kudos. You don’t want a tacky one who is just buying it to glitz up their hallway.
  • The next part of this chorus of validation are the dealers. A good dealer brand has a very powerful effect on the reputation of the artist; they form a part of placing the work. This is a slightly mysterious process that many people don’t quite understand but a dealer will choose where your work goes so it gains the brownie points, so the buzz around it goes up.
  • now, of course, galleries like the Tate Modern want a big name because visitor numbers, in a way, are another empirical measure of quality. So perhaps at the top of the tree of the validation cast are the curators, and in the past century they have probably become the most powerful giver-outers of brownie points in the art world.
  • Each of the encounters with these members of the cast of validation bestows upon the work, and on the artist, a patina, and what makes that patina is all these hundreds of little conversations and reviews and the good prices over time. These are the filters that pass a work of art through into the canon.
  • So what does this lovely consensus, that all these people are bestowing on this artwork, that anoints it with the quality that we all want, boil down to? I think in many ways what it boils down to is seriousness. That’s the most valued currency in the art world.
  • The whole idea of quality now seems to be contested, as if you’re buying into the language of the elite by saying, “Oh, that’s very good.” How you might judge this work is really problematic because to say it’s not beautiful is to put the wrong kind of criteria on it. You might say, “Oh, it’s dull!” [And people will say] “Oh, you’re just not understanding it with the right terms.” So I think, “Well, how do we judge these things?” Because a lot of them are quite politicised. There’s quite a right-on element to them, so do we judge them on how ethical they are, or how politically right-on they are?
  • What I am attempting to explain is how the art we see in museums and in galleries around the world, and in biennales – how it ends up there, how it gets chosen. In the end, if enough of the right people think it’s good, that’s all there is to it. But, as Alan Bennett said when he was a trustee of the National Gallery, they should put a big sign up outside saying: “You don’t have to like it all.”
  • Or then again I might say, “Well, what do I judge them against?” Do I judge them against government policy? Do I judge them against reality TV? Because that does participation very well. So, in the end, what do we do? What happens to this sort of art when it doesn’t have validation? What is it left with? It’s left with popularity.
  • Then, of course, the next group of people we might think about in deciding what is good art is the public. Since the mid-1990s, art has got a lot more media attention. But popularity has always been a quite dodgy quality [to have]. The highbrow critics will say, “Oh, he’s a bit of a celebrity,” and they turn their noses up about people who are well known to the public
Javier E

How Did Consciousness Evolve? - The Atlantic - 0 views

  • Theories of consciousness come from religion, from philosophy, from cognitive science, but not so much from evolutionary biology. Maybe that’s why so few theories have been able to tackle basic questions such as: What is the adaptive value of consciousness? When did it evolve and what animals have it?
  • The Attention Schema Theory (AST), developed over the past five years, may be able to answer those questions.
  • The theory suggests that consciousness arises as a solution to one of the most fundamental problems facing any nervous system: Too much information constantly flows in to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others, and in the AST, consciousness is the ultimate result of that evolutionary sequence
  • ...23 more annotations...
  • Even before the evolution of a central brain, nervous systems took advantage of a simple computing trick: competition.
  • At any moment only a few neurons win that intense competition, their signals rising up above the noise and impacting the animal’s behavior. This process is called selective signal enhancement, and without it, a nervous system can do almost nothing. (A minimal code sketch of this winner-take-all idea appears after this list.)
  • Selective enhancement therefore probably evolved sometime between hydras and arthropods—between about 700 and 600 million years ago, close to the beginning of complex, multicellular life
  • The next evolutionary advance was a centralized controller for attention that could coordinate among all senses. In many animals, that central controller is a brain area called the tectum
  • It coordinates something called overt attention – aiming the satellite dishes of the eyes, ears, and nose toward anything important.
  • All vertebrates—fish, reptiles, birds, and mammals—have a tectum. Even lampreys have one, and they appeared so early in evolution that they don’t even have a lower jaw. But as far as anyone knows, the tectum is absent from all invertebrates
  • According to fossil and genetic evidence, vertebrates evolved around 520 million years ago. The tectum and the central control of attention probably evolved around then, during the so-called Cambrian Explosion when vertebrates were tiny wriggling creatures competing with a vast range of invertebrates in the sea.
  • The tectum is a beautiful piece of engineering. To control the head and the eyes efficiently, it constructs something called an internal model, a feature well known to engineers. An internal model is a simulation that keeps track of whatever is being controlled and allows for predictions and planning.
  • The tectum’s internal model is a set of information encoded in the complex pattern of activity of the neurons. That information simulates the current state of the eyes, head, and other major body parts, making predictions about how these body parts will move next and about the consequences of their movement. (A toy predictive internal model is sketched after this list.)
  • In fish and amphibians, the tectum is the pinnacle of sophistication and the largest part of the brain. A frog has a pretty good simulation of itself.
  • With the evolution of reptiles around 350 to 300 million years ago, a new brain structure began to emerge – the wulst. Birds inherited a wulst from their reptile ancestors. Mammals did too, but our version is usually called the cerebral cortex and has expanded enormously
  • The cortex also takes in sensory signals and coordinates movement, but it has a more flexible repertoire. Depending on context, you might look toward, look away, make a sound, do a dance, or simply store the sensory event in memory in case the information is useful for the future.
  • The most important difference between the cortex and the tectum may be the kind of attention they control. The tectum is the master of overt attention—pointing the sensory apparatus toward anything important. The cortex ups the ante with something called covert attention. You don’t need to look directly at something to covertly attend to it. Even if you’ve turned your back on an object, your cortex can still focus its processing resources on it
  • The cortex needs to control that virtual movement, and therefore like any efficient controller it needs an internal model. Unlike the tectum, which models concrete objects like the eyes and the head, the cortex must model something much more abstract. According to the AST, it does so by constructing an attention schema—a constantly updated set of information that describes what covert attention is doing moment-by-moment and what its consequences are
  • Covert attention isn’t intangible. It has a physical basis, but that physical basis lies in the microscopic details of neurons, synapses, and signals. The brain has no need to know those details. The attention schema is therefore strategically vague. It depicts covert attention in a physically incoherent way, as a non-physical essence
  • this, according to the theory, is the origin of consciousness. We say we have consciousness because deep in the brain, something quite primitive is computing that semi-magical self-description.
  • I’m reminded of Teddy Roosevelt’s famous quote, “Do what you can with what you have where you are.” Evolution is the master of that kind of opportunism. Fins become feet. Gill arches become jaws. And self-models become models of others. In the AST, the attention schema first evolved as a model of one’s own covert attention. But once the basic mechanism was in place, according to the theory, it was further adapted to model the attentional states of others, to allow for social prediction. Not only could the brain attribute consciousness to itself, it began to attribute consciousness to others.
  • In the AST’s evolutionary story, social cognition begins to ramp up shortly after the reptilian wulst evolved. Crocodiles may not be the most socially complex creatures on earth, but they live in large communities, care for their young, and can make loyal if somewhat dangerous pets.
  • If AST is correct, 300 million years of reptilian, avian, and mammalian evolution have allowed the self-model and the social model to evolve in tandem, each influencing the other. We understand other people by projecting ourselves onto them. But we also understand ourselves by considering the way other people might see us.
  • The cortical networks in the human brain that allow us to attribute consciousness to others overlap extensively with the networks that construct our own sense of consciousness.
  • Language is perhaps the most recent big leap in the evolution of consciousness. Nobody knows when human language first evolved. Certainly we had it by 70 thousand years ago when people began to disperse around the world, since all dispersed groups have a sophisticated language. The relationship between language and consciousness is often debated, but we can be sure of at least this much: once we developed language, we could talk about consciousness and compare notes
  • Maybe partly because of language and culture, humans have a hair-trigger tendency to attribute consciousness to everything around us. We attribute consciousness to characters in a story, puppets and dolls, storms, rivers, empty spaces, ghosts and gods. Justin Barrett called it the Hyperactive Agency Detection Device, or HADD
  • the HADD goes way beyond detecting predators. It’s a consequence of our hyper-social nature. Evolution turned up the amplitude on our tendency to model others and now we’re supremely attuned to each other’s mind states. It gives us our adaptive edge. The inevitable side effect is the detection of false positives, or ghosts.
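
A minimal sketch of the winner-take-all competition described above (selective signal enhancement), assuming a toy rate-coded network; the boost and inhibition factors are invented for illustration, not parameters from the AST literature:

```python
import numpy as np

rng = np.random.default_rng(0)

def selective_enhancement(signals, k=2, boost=1.5, inhibition=0.5):
    """Enhance the k strongest signals and suppress the rest."""
    winners = np.argsort(signals)[-k:]        # indices of the strongest inputs
    out = signals * inhibition                # losers are damped...
    out[winners] = signals[winners] * boost   # ...while winners rise above the noise
    return out

inputs = rng.random(10)                       # ten competing sensory signals
print(np.round(inputs, 2))
print(np.round(selective_enhancement(inputs), 2))
```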
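And a toy version of the internal model the tectum is said to build: a small simulation that tracks a hypothetical eye angle, advances it with a motor command, corrects it against a simulated sensory reading, and offers a prediction for planning. The dynamics, time step, and gain are all assumptions made for illustration:

```python
class InternalModel:
    """A toy internal model: simulate a body part's state and predict ahead."""

    def __init__(self):
        self.angle = 0.0       # believed eye angle (degrees, hypothetical)
        self.velocity = 0.0    # believed angular velocity (degrees/second)

    def step(self, command, observed_angle, dt=0.01, gain=0.3):
        # Advance the simulation using the motor command...
        self.velocity = command
        self.angle += self.velocity * dt
        # ...then nudge the belief toward what the senses actually report.
        self.angle += gain * (observed_angle - self.angle)

    def predict(self, dt=0.01):
        """Where the model expects the eye to be after dt more seconds."""
        return self.angle + self.velocity * dt

model = InternalModel()
for t in range(5):
    model.step(command=5.0, observed_angle=0.05 * (t + 1))
print(round(model.predict(), 4))   # the controller plans against this value
```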
Javier E

Opinion | Humans Are Animals. Let's Get Over It. - The New York Times - 0 views

  • The separation of people from, and the superiority of people to, members of other species is a good candidate for the originating idea of Western thought. And a good candidate for the worst.
  • Like Plato, Hobbes associates anarchy with animality and civilization with the state, which gives to our merely animal motion moral content for the first time and orders us into a definite hierarchy.
  • It is rationality that gives us dignity, that makes a claim to moral respect that no mere animal can deserve. “The moral law reveals to me a life independent of animality,” writes Immanuel Kant in “Critique of Practical Reason.” In this assertion, at least, the Western intellectual tradition has been remarkably consistent.
  • ...15 more annotations...
  • the devaluation of animals and disconnection of us from them reflect a deeper devaluation of the material universe in general
  • In this scheme of things, we owe nature nothing; it is to yield us everything. This is the ideology of species annihilation and environmental destruction, and also of technological development.
  • Further trouble is caused when the distinctions between humans and animals are then used to draw distinctions among human beings
  • Some of us, in short, are animals — and some of us are better than that. This, it turns out, is a useful justification for colonialism, slavery and racism.
  • The classical source for this distinction is certainly Aristotle. In the “Politics,” he writes, “Where then there is such a difference as that between soul and body, or between men and animals (as in the case of those whose business is to use their body, and who can do nothing better), the lower sort are by nature slaves.”
  • Every human hierarchy, insofar as it can be justified philosophically, is treated by Aristotle by analogy to the relation of people to animals.
  • One difficult thing to face about our animality is that it entails our deaths; being an animal is associated throughout philosophy with dying purposelessly, and so with living meaninglessly.
  • this line of thought also happens to justify colonizing or even extirpating the “savage,” the beast in human form.
  • Our supposed fundamental distinction from “beasts,” “brutes” and “savages” is used to divide us from nature, from one another and, finally, from ourselves
  • In Plato’s “Republic,” Socrates divides the human soul into two parts. The soul of the thirsty person, he says, “wishes for nothing else than to drink.” But we can restrain ourselves. “That which inhibits such actions,” he concludes, “arises from the calculations of reason.” When we restrain or control ourselves, Plato argues, a rational being restrains an animal.
  • In this view, each of us is both a beast and a person — and the point of human life is to constrain our desires with rationality and purify ourselves of animality
  • These sorts of systematic self-divisions come to be refigured in Cartesian dualism, which separates the mind from the body, or in Sigmund Freud’s distinction between id and ego, or in the neurological contrast between the functions of the amygdala and the prefrontal cortex.
  • I don’t know how to refute it, exactly, except to say that I don’t feel myself to be a logic program running on an animal body; I’d like to consider myself a lot more integrated than that.
  • And I’d like to repudiate every political and environmental conclusion ever drawn by our supposed transcendence of the order of nature
  • There is no doubt that human beings are distinct from other animals, though not necessarily more distinct than other animals are from one another. But maybe we’ve been too focused on the differences for too long. Maybe we should emphasize what all us animals have in common.
Javier E

The varieties of denialism | Scientia Salon - 1 views

  • a stimulating conference at Clark University about “Manufacturing Denial,” which brought together scholars from wildly divergent disciplines — from genocide studies to political science to philosophy — to explore the idea that “denialism” may be a sufficiently coherent phenomenon underlying the willful disregard of factual evidence by ideologically motivated groups or individuals.
  • the Oxford defines a denialist as “a person who refuses to admit the truth of a concept or proposition that is supported by the majority of scientific or historical evidence,” which represents a whole different level of cognitive bias or rationalization. Think of it as bias on steroids.
  • First, as a scientist: it’s just not about the facts, indeed — as Brendan showed, data in hand, during his presentation — insisting on facts may have counterproductive effects, leading the denialist to double down on his belief.
  • ...22 more annotations...
  • if I think that simply explaining the facts to the other side is going to change their mind, then I’m in for a rude awakening.
  • As a philosopher, I found to be somewhat more disturbing the idea that denialism isn’t even about critical thinking.
  • what the large variety of denialisms have in common is a very strong, overwhelming, ideological commitment that helps define the denialist identity in a core manner. This commitment can be religious, ethnic or political in nature, but in all cases it fundamentally shapes the personal identity of the people involved, thus generating a strong emotional attachment, as well as an equally strong emotional backlash against critics.
  • To begin with, of course, they think of themselves as “skeptics,” thus attempting to appropriate a word with a venerable philosophical pedigree and which is supposed to indicate a cautiously rational approach to a given problem. As David Hume put it, a wise person (i.e., a proper skeptic) will proportion her beliefs to the evidence. But there is nothing of the Humean attitude in people who are “skeptical” of evolution, climate change, vaccines, and so forth.
  • Denialists have even begun to appropriate the technical language of informal logic: when told that a majority of climate scientists agree that the planet is warming up, they are all too happy to yell “argument from authority!” When they are told that they should distrust statements coming from the oil industry and from “think tanks” in their pockets they retort “genetic fallacy!” And so on. Never mind that informal fallacies are such only against certain background information, and that it is eminently sensible and rational to trust certain authorities (at the least provisionally), as well as to be suspicious of large organizations with deep pockets and an obvious degree of self-interest.
  • What commonalities can we uncover across instances of denialism that may allow us to tackle the problem beyond facts and elementary logic?
  • the evidence from the literature is overwhelming that denialists have learned to use the vocabulary of critical thinking against their opponents.
  • Another important issue to understand is that denialists exploit the inherently tentative nature of scientific or historical findings to seek refuge for their doctrines.
  • Scientists have been wrong before, and doubtlessly will be again in the future, many times. But the issue is rather one of where it is most rational to place your bets as a Bayesian updater: with the scientific community or with Faux News? (A toy Bayesian update is sketched after this list.)
  • Science should be portrayed as a human story of failure and discovery, not as a body of barely comprehensible facts arrived at by epistemic priests.
  • Is there anything that can be done in this respect? I personally like the idea of teaching “science appreciation” classes in high school and college [2], as opposed to more traditional (usually rather boring, both as a student and as a teacher) science instruction
  • Denialists also exploit the media’s self-imposed “balanced” approach to presenting facts, which leads to the false impression that there really are two approximately equal sides to every debate.
  • This is a rather recent phenomenon, and it is likely the result of a number of factors affecting the media industry. One, of course, is the onset of the 24-hr media cycle, with its pernicious reliance on punditry. Another is the increasing blurring of the once rather sharp line between reporting and editorializing.
  • The problem with the media is of course made far worse by the ongoing crisis in contemporary journalism, with newspapers, magazines and even television channels constantly facing an uncertain future of revenues,
  • The push back against denialism, in all its varied incarnations, is likely to be more successful if we shift the focus from persuading individual members of the public to making political and media elites accountable.
  • This is a major result coming out of Brendan’s research. He showed data set after data set demonstrating two fundamental things: first, large sections of the general public do not respond to the presentation of even highly compelling facts, indeed — as mentioned above — are actually more likely to entrench further into their positions.
  • Second, whenever one can put pressure on either politicians or the media, they do change their tune, becoming more reasonable and presenting things in a truly (as opposed to artificially) balanced way.
  • Third, and most crucially, there is plenty of evidence from political science studies that the public does quickly rally behind a unified political leadership. This, as much as it is hard to fathom now, has happened a number of times even in somewhat recent times
  • when leaders really do lead, the people follow. It’s just that of late the extreme partisan bickering in Washington has made the two major parties entirely incapable of working together on the common ground that they have demonstrably had in the past.
  • Another thing we can do about denialism: we should learn from the detailed study of successful cases and see what worked and how it can be applied to other instances
  • Yet another thing we can do: seek allies. In the case of evolution denial — for which I have the most first-hand experience — it has been increasingly obvious to me that it is utterly counterproductive for a strident atheist like Dawkins (or even a relatively good humored one like yours truly) to engage creationists directly. It is far more effective when we have clergy (Barry Lynn of Americans United for the Separation of Church and State [6] comes to mind) and religious scientists
  • Make no mistake about it: denialism in its various forms is a pernicious social phenomenon, with potentially catastrophic consequences for our society. It requires a rallying call for all serious public intellectuals, academic or not, who have the expertise and the stamina to join the fray to make this an even marginally better world for us all. It’s most definitely worth the fight.
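
The “Bayesian updater” point above can be made concrete with a toy calculation. The numbers below are invented: they encode the assumptions that a scientific consensus rarely endorses false claims while a partisan outlet endorses claims almost regardless of truth, and then apply Bayes’ rule to a 50/50 prior:

```python
def posterior(prior, p_endorse_if_true, p_endorse_if_false):
    """P(claim true | source endorses it), by Bayes' rule."""
    evidence = prior * p_endorse_if_true + (1 - prior) * p_endorse_if_false
    return prior * p_endorse_if_true / evidence

prior = 0.5  # start agnostic about the claim

# Hypothetical reliabilities, for illustration only.
print(posterior(prior, p_endorse_if_true=0.95, p_endorse_if_false=0.10))  # consensus: ~0.90
print(posterior(prior, p_endorse_if_true=0.60, p_endorse_if_false=0.50))  # partisan outlet: ~0.55
```

On those assumptions, a consensus endorsement moves the rational bet from 0.5 to about 0.9, while the partisan endorsement barely moves it at all, which is the sense in which trusting certain authorities, at least provisionally, is not a fallacy.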
Javier E

What 'White Privilege' Really Means - NYTimes.com - 0 views

  • This week’s conversation is with Naomi Zack, a professor of philosophy at the University of Oregon and the author of “The Ethics and Mores of Race: Equality After the History of Philosophy.”
  • My first book, “Race and Mixed Race” (1991) was an analysis of the incoherence of U.S. black/white racial categories in their failure to allow for mixed race. In “Philosophy of Science and Race,” I examined the lack of a scientific foundation for biological notions of human races, and in “The Ethics and Mores of Race,” I turned to the absence of ideas of universal human equality in the Western philosophical tradition.
  • Critical philosophy of race, like critical race theory in legal studies, seeks to understand the disadvantages of nonwhite racial groups in society (blacks especially) by understanding social customs, laws, and legal practices.
  • ...14 more annotations...
  • What’s happening in Ferguson is the result of several recent historical factors and deeply entrenched racial attitudes, as well as a breakdown in participatory democracy.
  • In Ferguson, the American public has awakened to images of local police, fully decked out in surplus military gear from our recent wars in Iraq and Afghanistan, who are deploying all that in accordance with a now widespread “broken windows” policy, which was established on the hypothesis that if small crimes and misdemeanors are checked in certain neighborhoods, more serious crimes will be deterred. But this policy quickly intersected with police racial profiling already in existence to result in what has recently become evident as a propensity to shoot first.
  • How does this “broken windows” policy relate to the tragic deaths of young black men/boys? N.Z.: People are now stopped by the police for suspicion of misdemeanor offenses and those encounters quickly escalate.
  • Young black men are the convenient target of choice in the tragic intersection of the broken windows policy, the domestic effects of the war on terror and police racial profiling.
  • Why do you think that young black men are disproportionately targeted? N.Z.: Exactly why unarmed young black men are the target of choice, as opposed to unarmed young white women, or unarmed old black women, or even unarmed middle-aged college professors, is an expression of a long American tradition of suspicion and terrorization of members of those groups who have the lowest status in our society and have suffered the most extreme forms of oppression, for centuries.
  • Police in the United States are mostly white and mostly male. Some confuse their work roles with their own characters. As young males, they naturally pick out other young male opponents. They have to win, because they are the law, and they have the moral charge of protecting.
  • So young black males, who have less status than they do, and are already more likely to be imprisoned than young white males, are natural suspects.
  • Besides the police, a large segment of the white American public believes they are in danger from blacks, especially young black men, who they think want to rape young white women. This is an old piece of American mythology that has been invoked to justify crimes against black men, going back to lynching. The perceived danger of blacks becomes very intense when blacks are harmed.
  • The term “white privilege” is misleading. A privilege is special treatment that goes beyond a right. It’s not so much that being white confers privilege but that not being white means being without rights in many cases. Not fearing that the police will kill your child for no reason isn’t a privilege. It’s a right. 
  • that is what “white privilege” is meant to convey, that whites don’t have many of the worries nonwhites, especially blacks, do.
  • Other examples of white privilege include all of the ways that whites are unlikely to end up in prison for some of the same things blacks do, not having to worry about skin-color bias, not having to worry about being pulled over by the police while driving or stopped and frisked while walking in predominantly white neighborhoods, having more family wealth because your parents and other forebears were not subject to Jim Crow and slavery.
  • Probably all of the ways in which whites are better off than blacks in our society are forms of white privilege.
  • Over half a century later, it hasn’t changed much in the United States. Black people are still imagined to have a hyper-physicality in sports, entertainment, crime, sex, politics, and on the street. Black people are not seen as people with hearts and minds and hopes and skills but as cyphers that can stand in for anything whites themselves don’t want to be or think they can’t be.
  • race is through and through a social construct, previously constructed by science, now by society, including its most extreme victims. But, we cannot abandon race, because people would still discriminate and there would be no nonwhite identities from which to resist. Also, many people just don’t want to abandon race and they have a fundamental right to their beliefs. So race remains with us as something that needs to be put right.
Javier E

Quantum Computing Advance Begins New Era, IBM Says - The New York Times - 0 views

  • While researchers at Google in 2019 claimed that they had achieved “quantum supremacy” — a task performed much more quickly on a quantum computer than a conventional one — IBM’s researchers say they have achieved something new and more useful, albeit more modestly named.
  • “We’re entering this phase of quantum computing that I call utility,” said Jay Gambetta, a vice president of IBM Quantum. “The era of utility.”
  • Present-day computers are called digital, or classical, because they deal with bits of information that are either 1 or 0, on or off. A quantum computer performs calculations on quantum bits, or qubits, that capture a more complex state of information. Just as a thought experiment by the physicist Erwin Schrödinger postulated that a cat could be in a quantum state that is both dead and alive, a qubit can be both 1 and 0 simultaneously.
  • ...15 more annotations...
  • That allows quantum computers to make many calculations in one pass, while digital ones have to perform each calculation separately. By speeding up computation, quantum computers could potentially solve big, complex problems in fields like chemistry and materials science that are out of reach today.
  • When Google researchers made their supremacy claim in 2019, they said their quantum computer performed a calculation in 3 minutes 20 seconds that would take about 10,000 years on a state-of-the-art conventional supercomputer.
  • The IBM researchers in the new study performed a different task, one that interests physicists. They used a quantum processor with 127 qubits to simulate the behavior of 127 atom-scale bar magnets — tiny enough to be governed by the spooky rules of quantum mechanics — in a magnetic field. That is a simple system known as the Ising model, which is often used to study magnetism.
  • This problem is too complex for a precise answer to be calculated even on the largest, fastest supercomputers.
  • On the quantum computer, the calculation took less than a thousandth of a second to complete. Each quantum calculation was unreliable — fluctuations of quantum noise inevitably intrude and induce errors — but each calculation was quick, so it could be performed repeatedly.
  • Indeed, for many of the calculations, additional noise was deliberately added, making the answers even more unreliable. But by varying the amount of noise, the researchers could tease out the specific characteristics of the noise and its effects at each step of the calculation. “We can amplify the noise very precisely, and then we can rerun that same circuit,” said Abhinav Kandala, the manager of quantum capabilities and demonstrations at IBM Quantum and an author of the Nature paper. “And once we have results of these different noise levels, we can extrapolate back to what the result would have been in the absence of noise.” In essence, the researchers were able to subtract the effects of noise from the unreliable quantum calculations, a process they call error mitigation. (A toy version of this extrapolation is sketched after this list.)
  • Altogether, the computer performed the calculation 600,000 times, converging on an answer for the overall magnetization produced by the 127 bar magnets.
  • Although an Ising model with 127 bar magnets is too big, with far too many possible configurations, to fit in a conventional computer, classical algorithms can produce approximate answers, a technique similar to how compression in JPEG images throws away less crucial data to reduce the size of the file while preserving most of the image’s details
  • Certain configurations of the Ising model can be solved exactly, and both the classical and quantum algorithms agreed on the simpler examples. For more complex but solvable instances, the quantum and classical algorithms produced different answers, and it was the quantum one that was correct.
  • Thus, for other cases where the quantum and classical calculations diverged and no exact solutions are known, “there is reason to believe that the quantum result is more accurate,”
  • Mr. Anand is currently trying to add a version of error mitigation for the classical algorithm, and it is possible that could match or surpass the performance of the quantum calculations.
  • In the long run, quantum scientists expect that a different approach, error correction, will be able to detect and correct calculation mistakes, and that will open the door for quantum computers to speed ahead for many uses.
  • Error correction is already used in conventional computers and data transmission to fix garbles. But for quantum computers, error correction is likely years away, requiring better processors able to process many more qubits
  • “This is one of the simplest natural science problems that exists,” Dr. Gambetta said. “So it’s a good one to start with. But now the question is, how do you generalize it and go to more interesting natural science problems?”
  • Those might include figuring out the properties of exotic materials, accelerating drug discovery and modeling fusion reactions.
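
A toy version of the error-mitigation procedure Kandala describes, known as zero-noise extrapolation, can be sketched in a few lines. The observable, the linear noise model, and the shot count below are invented stand-ins for the real 127-qubit Ising experiment; the point is only the mechanics of amplifying noise by known factors and extrapolating back to zero:

```python
import numpy as np

rng = np.random.default_rng(1)
TRUE_VALUE = 0.42                  # invented "noise-free" magnetization

def noisy_estimate(noise_scale, shots=100_000):
    """Assume noise damps the signal linearly, plus statistical jitter."""
    damped = TRUE_VALUE * (1 - 0.15 * noise_scale)
    return damped + rng.normal(0, 1 / np.sqrt(shots))

scales = np.array([1.0, 1.5, 2.0, 3.0])   # noise amplification factors
estimates = np.array([noisy_estimate(s) for s in scales])

# Fit a line through (scale, estimate) and read off its value at scale 0.
slope, intercept = np.polyfit(scales, estimates, 1)
print(f"raw estimate at native noise:  {estimates[0]:.4f}")
print(f"extrapolated zero-noise value: {intercept:.4f}")   # close to 0.42
```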
kushnerha

The Psychology of Risk Perception Explains Why People Don't Fret the Pacific Northwest'... - 0 views

  • what psychology teaches us. Turns out most of us just aren’t that good at calculating risk, especially when it comes to huge natural events like earthquakes. That also means we’re not very good at mitigating those kinds of risks. Why? And is it possible to get around our short-sightedness, so that this time, we’re actually prepared? Risk perception is a vast, complex field of research. Here are just some of the core findings.
  • Studies show that when people calculate risk, especially when the stakes are high, we rely much more on feeling than fact. And we have trouble connecting emotionally to something scary if the odds of it happening today or tomorrow aren’t particularly high. So, if an earthquake, flood, tornado or hurricane isn’t immediately imminent, people are unlikely to act. “Perceiving risk is all about how scary or not do the facts feel,”
  • This feeling also relates to how we perceive natural, as opposed to human-made, threats. We tend to be more tolerant of nature than of other people who would knowingly impose risks upon us—terrorists being the clearest example. “We think that nature is out of our control—it’s not malicious, it’s not profiting from us, we just have to bear with it,”
  • ...8 more annotations...
  • And in many cases, though not all, people living in areas threatened by severe natural hazards do so by choice. If a risk has not been imposed on us, we take it much less seriously. Though Schulz’s piece certainly made a splash online, it is hard to imagine a mass exodus of Portlanders and Seattleites in response. Hey, they like it there.
  • They don’t have much to compare the future earthquake to. After all, there hasn’t been an earthquake or tsunami like it there since roughly 1700. Schulz poeticizes this problem, calling out humans for their “ignorance of or an indifference to those planetary gears which turn more slowly than our own.” Once again, this confounds our emotional connection to the risk.
  • The belief that an unlikely event won’t happen again for a while is called the gambler’s fallacy. Probability doesn’t work like that. The odds are the same with every roll of the dice.
  • But our “temporal parochialism,” as Schulz calls it, also undoes our grasp on probability. “We think probability happens with some sort of regularity or pattern,” says Ropeik. “If an earthquake is projected to hit within 50 years, when there hasn’t been one for centuries, we don’t think it’s going to happen.” Illogical thinking works in reverse, too: “If a minor earthquake just happened in Seattle, we think we’re safe.” (A short simulation of this independence point appears after this list.)
  • For individuals and government alike, addressing every point of concern requires a cost-benefit analysis. When kids barely have pencils and paper in schools that already exist, how much is appropriate to invest in earthquake preparedness? Even when that earthquake will kill thousands, displace millions, and cripple a region’s economy for decades to come—as Cascadia is projected to—the answer is complicated. “You immediately run into competing issues,” says Slovic. “When you’re putting resources into earthquake protection that has to be taken away from current social needs—that is a very difficult sell.”
  • There are things people can do to combat our innate irrationality. The first is obvious: education. California has a seismic safety commission whose job is to publicize the risks of earthquakes and advocate for preparedness at household and state policy levels.
  • Another idea is similar to food safety ratings in the windows of some cities’ restaurants. Schulz reports that some 75 percent of Oregon’s structures aren’t designed to hold up to a really big Cascadia quake. “These buildings could have their risk and safety score publicly posted,” says Slovic. “That would motivate people to retrofit or mitigate those risks, particularly if they are schools.”
  • science points to a hard truth. Humans are simply inclined to be more concerned about what’s immediately in front of us: Snakes, fast-moving cars, unfamiliar chemical compounds in our breakfast cereal and the like will always elicit a quicker response than an abstract, far-off hazard.
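
A short simulation of the independence point above. The 1-in-500 annual probability is a made-up stand-in, not a real Cascadia estimate; the sketch just shows that after a long quiet spell the next-year odds are unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
P = 1 / 500            # made-up annual probability of the big event
TRIALS = 200_000

def next_year_rate_after_quiet_run(quiet_years):
    """Event rate in year N+1, among histories quiet for the first N years."""
    events = rng.random((TRIALS, quiet_years + 1)) < P
    quiet = ~events[:, :-1].any(axis=1)     # histories with no event so far
    return events[quiet, -1].mean()

print(next_year_rate_after_quiet_run(0))    # ~0.002
print(next_year_rate_after_quiet_run(300))  # ~0.002; the quiet spell changes nothing
```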
Javier E

How Walking in Nature Changes the Brain - The New York Times - 0 views

  • A walk in the park may soothe the mind and, in the process, change the workings of our brains in ways that improve our mental health, according to an interesting new study
  • Various studies have found that urban dwellers with little access to green spaces have a higher incidence of psychological problems than people living near parks and that city dwellers who visit natural environments have lower levels of stress hormones immediately afterward than people who have not recently been outside.
  • Mr. Bratman and his collaborators decided to closely scrutinize what effect a walk might have on a person’s tendency to brood.
  • ...7 more annotations...
  • Brooding, which is known among cognitive scientists as morbid rumination, is a mental state familiar to most of us, in which we can’t seem to stop chewing over the ways in which things are wrong with ourselves and our lives. This broken-record fretting is not healthy or helpful. It can be a precursor to depression and is disproportionately common among city dwellers compared with people living outside urban areas
  • such rumination also is strongly associated with increased activity in a portion of the brain known as the subgenual prefrontal cortex.
  • the scientists randomly assigned half of the volunteers to walk for 90 minutes through a leafy, quiet, parklike portion of the Stanford campus or next to a loud, hectic, multi-lane highway in Palo Alto. The volunteers were not allowed to have companions or listen to music. They were allowed to walk at their own pace.
  • walking along the highway had not soothed people’s minds. Blood flow to their subgenual prefrontal cortex was still high and their broodiness scores were unchanged.
  • the volunteers who had strolled along the quiet, tree-lined paths showed slight but meaningful improvements in their mental health, according to their scores on the questionnaire. They were not dwelling on the negative aspects of their lives as much as they had been before the walk. They also had less blood flow to the subgenual prefrontal cortex. That portion of their brains was quieter.
  • These results “strongly suggest that getting out into natural environments” could be an easy and almost immediate way to improve moods for city dwellers, Mr. Bratman said.
  • many questions remain, he said, including how much time in nature is sufficient or ideal for our mental health, as well as what aspects of the natural world are most soothing. Is it the greenery, quiet, sunniness, loamy smells, all of those, or something else that lifts our moods? Do we need to be walking or otherwise physically active outside to gain the fullest psychological benefits? Should we be alone or could companionship amplify mood enhancements? “There’s a tremendous amount of study that still needs to be done,” Mr. Bratman said.
kushnerha

How Walking in Nature Changes the Brain - The New York Times - 0 views

  • Various studies have found that urban dwellers with little access to green spaces have a higher incidence of psychological problems than people living near parks and that city dwellers who visit natural environments have lower levels of stress hormones immediately afterward than people who have not recently been outside.
  • how a visit to a park or other green space might alter mood has been unclear. Does experiencing nature actually change our brains in some way that affects our emotional health?
  • found that volunteers who walked briefly through a lush, green portion of the Stanford campus were more attentive and happier afterward than volunteers who strolled for the same amount of time near heavy traffic.
  • ...5 more annotations...
  • Brooding, which is known among cognitive scientists as morbid rumination, is a mental state familiar to most of us, in which we can’t seem to stop chewing over the ways in which things are wrong with ourselves and our lives. This broken-record fretting is not healthy or helpful. It can be a precursor to depression and is disproportionately common among city dwellers compared with people living outside urban areas, studies show.
  • such rumination also is strongly associated with increased activity in a portion of the brain known as the subgenual prefrontal cortex.
  • gathered 38 healthy, adult city dwellers and asked them to complete a questionnaire to determine their normal level of morbid rumination. The researchers also checked for brain activity in each volunteer’s subgenual prefrontal cortex, using scans that track blood flow through the brain. Greater blood flow to parts of the brain usually signals more activity in those areas.
  • walking along the highway had not soothed people’s minds. Blood flow to their subgenual prefrontal cortex was still high and their broodiness scores were unchanged. But the volunteers who had strolled along the quiet, tree-lined paths showed slight but meaningful improvements in their mental health, according to their scores on the questionnaire. They were not dwelling on the negative aspects of their lives as much as they had been before the walk. They also had less blood flow to the subgenual prefrontal cortex. That portion of their brains was quieter.
  • many questions remain, he said, including how much time in nature is sufficient or ideal for our mental health, as well as what aspects of the natural world are most soothing. Is it the greenery, quiet, sunniness, loamy smells, all of those, or something else that lifts our moods?
Javier E

Jonathan Haidt and the Moral Matrix: Breaking Out of Our Righteous Minds | Guest Blog, ... - 2 views

  • What did satisfy Haidt’s natural thirst for understanding human beings was social psychology.
  • Haidt initially found moral psychology “really dull.” He described it to me as “really missing the heart of the matter and too cerebral.” This changed in his second year after he took a course from the anthropologist Alan Fiske and got interested in moral emotions.
  • “The Emotional Dog and Its Rational Tail,” which he describes as “the most important article I’ve ever written.”
  • ...13 more annotations...
  • it helped shift moral psychology away from rationalist models that dominated in the 1980s and 1990s. In its place Haidt offered an understanding of morality from an intuitive and automatic level. As Haidt says on his website, “we are just not very good at thinking open-mindedly about moral issues, so rationalist models end up being poor descriptions of actual moral psychology.”
  • “the mind is divided into parts that sometimes conflict. Like a rider on the back of an elephant, the conscious, reasoning part of the mind has only limited control of what the elephant does.”
  • In the last few decades psychology began to understand the unconscious mind not as dark and suppressed as Freud did, but as intuitive, highly intelligent and necessary for good conscious reasoning. “Elephants,” he reminded me, “are really smart, much smarter than horses.”
  • we are 90 percent chimp and 10 percent bee. That is to say, though we are inherently selfish, human nature is also about being what he terms “groupish.” He explained it to me like this:
  • “When I say that human nature is selfish, I mean that our minds contain a variety of mental mechanisms that make us adept at promoting our own interests, in competition with our peers. When I say that human nature is also groupish, I mean that our minds contain a variety of mental mechanisms that make us adept at promoting our group’s interests, in competition with other groups. We are not saints, but we are sometimes good team players.” This is what people who had studied morality had not realized, “that we evolved not just so I can treat you well or compete with you, but at the same time we can compete with them.”
  • they developed the idea that humans possess six universal moral modules, or moral “foundations,” that get built upon to varying degrees across culture and time. They are: Care/harm, Fairness/cheating, Loyalty/betrayal, Authority/subversion, Sanctity/degradation, and Liberty/oppression. Haidt describes these six modules like a “tongue with six taste receptors.” “In this analogy,” he explains in the book, “the moral matrix of a culture is something like its cuisine: it’s a cultural construction, influenced by accidents of environment and history, but it’s not so flexible that anything goes. You can’t have a cuisine based on grass and tree bark, or even one based primarily on bitter tastes. Cuisines vary, but they all must please tongues equipped with the same five taste receptors. Moral matrices vary, but they all must please righteous minds equipped with the same six social receptors.”
  • The questionnaire eventually manifested itself into the website www.YourMorals.org, and it has since gathered over two hundred thousand data points. Here is what they found:
  • This is the crux of the disagreement between liberals and conservatives. As the graph illustrates, liberals value Care and Fairness much more than the other three moral foundations, whereas conservatives endorse all five more or less equally. This shouldn’t sound too surprising: liberals tend to value universal rights and reject the idea of the United States being superior, while conservatives tend to be less concerned about the latest United Nations declaration and more partial to the United States as a superior nation.
  • Haidt began reading political psychology. Karen Stenner’s The Authoritarian Dynamic “conveyed some key insights about protecting the group that were particularly insightful,” he said. The work of the French sociologist Emile Durkheim was also vital. In contrast to John Stuart Mill, a Durkheimian society, as Haidt explains in an essay for edge.org, “would value self-control over self-expression, duty over rights, and loyalty to one’s groups over concerns for out-groups.”
  • He was motivated to write The Righteous Mind after Kerry lost the 2004 election: “I thought he did a terrible job of making moral appeals so I began thinking about how I could apply moral psychology to understand political divisions. I started studying the politics of culture and realized how liberals and conservatives lived in their own closed worlds.” Each of these worlds, as Haidt explains in the book, “provides a complete, unified, and emotionally compelling worldview, easily justified by observable evidence and nearly impregnable to attack by arguments from outsiders.” He describes them as “moral matrices,” and thinks that moral psychology can help him understand them.
  • At first, Haidt reminds us that we are all trapped in a moral matrix where our “elephants” only look for what confirms their moral intuitions while our “riders” play the role of the lawyer; we team up with people who share similar matrices and become close-minded; and we forget that morality is diverse. But on the other hand, Haidt is offering us a choice: take the blue pill and remain happily delusional about your worldview, or take the red pill, and, as he said in his 2008 TED talk, “learn some moral psychology and step outside your moral matrix.”
  • The great Asian religions, Haidt reminded the crowd at TED, swallowed their pride and took the red pill millennia ago. And by stepping out of their moral matrices they realized that societies flourish when they value all of the moral foundations to some degree. This is why Yin and Yang aren’t enemies, “they are both necessary, like night and day, for the functioning of the world.” Or, similarly, why two of the high Gods in Hinduism, Vishnu the preserver (who stands for conservative principles) and Shiva the destroyer (who stands for liberal principles) work together.
Javier E

One of Us - Lapham's Quarterly - 0 views

  • On what seems like a monthly basis, scientific teams announce the results of new experiments, adding to a preponderance of evidence that we’ve been underestimating animal minds, even those of us who have rated them fairly highly
  • an international group of prominent neuroscientists meeting at the University of Cambridge issued “The Cambridge Declaration on Consciousness in Non-Human Animals,” a document stating that “humans are not unique in possessing the neurological substrates that generate consciousness.” It goes further to conclude that numerous documented animal behaviors must be considered “consistent with experienced feeling states.”
  • Only with the Greeks does there enter the notion of a formal divide between our species, our animal, and every other on earth.
  • ...7 more annotations...
  • there’s that exquisite verse, one of the most beautiful in the Bible, the one that says if God cares deeply about sparrows, don’t you think He cares about you? One is so accustomed to dwelling on the second, human, half of the equation, the comforting part, but when you put your hand over that and consider only the first, it’s a little startling: God cares deeply about the sparrows. Not just that, He cares about them individually. “Are not five sparrows sold for two pennies?” Jesus says. “Yet not one of them is forgotten in God’s sight.”
  • The modern conversation on animal consciousness proceeds, with the rest of the Enlightenment, from the mind of René Descartes, whose take on animals was vividly (and approvingly) paraphrased by the French philosopher Nicolas Malebranche: they “eat without pleasure, cry without pain, grow without knowing it; they desire nothing, fear nothing, know nothing.” Descartes’ term for them was automata
  • In On the Origin of Species, Charles Darwin made the intriguing claim that among the naturalists he knew it was consistently the case that the better a researcher got to know a certain species, the more each individual animal’s actions appeared attributable to “reason and the less to unlearnt instinct.” The more you knew, the more you suspected that they were rational. That marks an important pivot, that thought, insofar as it took place in the mind of someone devoted to extremely close and meticulous study of living animals, a mind that had trained itself not to sentimentalize.
  • The sheer number and variety of experiments carried out in the twentieth century—and with, if anything, a renewed intensity in the twenty-first—exceeds summary. Reasoning, language, neurology, the science of emotions—every chamber where “consciousness” is thought to hide has been probed. Birds and chimps and dolphins have been made to look at themselves in mirrors—to observe whether, on the basis of what they see, they groom or preen (a measure, if somewhat arbitrary, of self-awareness). Dolphins have been found to grieve. Primates have learned symbolic or sign languages and then been interrogated with them. Their answers show thinking but have proved stubbornly open to interpretation on the issue of “consciousness,” with critics warning, as always, about the dangers of anthropomorphism, animal-rights bias, etc.
  • If we put aside the self-awareness standard—and really, how arbitrary and arrogant is that, to take the attribute of consciousness we happen to possess over all creatures and set it atop the hierarchy, proclaiming it the very definition of consciousness (Georg Christoph Lichtenberg wrote something wise in his notebooks, to the effect of: only a man can draw a self-portrait, but only a man wants to)—it becomes possible to say at least the following: the overwhelming tendency of all this scientific work, of its results, has been toward more consciousness. More species having it, and species having more of it than assumed.
  • The animal kingdom is symphonic with mental activity, and of its millions of wavelengths, we’re born able to understand the minutest sliver. The least we can do is have a proper respect for our ignorance.
  • The philosopher Thomas Nagel wrote an essay in 1974 titled, “What Is It Like To Be a Bat?”, in which he put forward perhaps the least overweening, most useful definition of “animal consciousness” ever written, one that channels Spinoza’s phrase about “that nature belonging to him wherein he has his being.” Animal consciousness occurs, Nagel wrote, when “there is something that it is to be that organism—something it is like for the organism.” The strangeness of his syntax carries the genuine texture of the problem. We’ll probably never be able to step far enough outside of our species-reality to say much about what is going on with them, beyond saying how like or unlike us they are. Many things are conscious on the earth, and we are one, and our consciousness feels like this; one of the things it causes us to do is doubt the existence of the consciousness of the other millions of species. But it also allows us to imagine a time when we might stop doing that.
Javier E

Will ChatGPT Kill the Student Essay? - The Atlantic - 0 views

  • Essay generation is neither theoretical nor futuristic at this point. In May, a student in New Zealand confessed to using AI to write their papers, justifying it as a tool like Grammarly or spell-check: ​​“I have the knowledge, I have the lived experience, I’m a good student, I go to all the tutorials and I go to all the lectures and I read everything we have to read but I kind of felt I was being penalised because I don’t write eloquently and I didn’t feel that was right,” they told a student paper in Christchurch. They don’t feel like they’re cheating, because the student guidelines at their university state only that you’re not allowed to get somebody else to do your work for you. GPT-3 isn’t “somebody else”—it’s a program.
  • The essay, in particular the undergraduate essay, has been the center of humanistic pedagogy for generations. It is the way we teach children how to research, think, and write. That entire tradition is about to be disrupted from the ground up
  • “You can no longer give take-home exams/homework … Even on specific questions that involve combining knowledge across domains, the OpenAI chat is frankly better than the average MBA at this point. It is frankly amazing.”
  • ...18 more annotations...
  • In the modern tech world, the value of a humanistic education shows up in evidence of its absence. Sam Bankman-Fried, the disgraced founder of the crypto exchange FTX who recently lost his $16 billion fortune in a few days, is a famously proud illiterate. “I would never read a book,” he once told an interviewer. “I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that.”
  • Elon Musk and Twitter are another excellent case in point. It’s painful and extraordinary to watch the ham-fisted way a brilliant engineering mind like Musk deals with even relatively simple literary concepts such as parody and satire. He obviously has never thought about them before.
  • The extraordinary ignorance on questions of society and history displayed by the men and women reshaping society and history has been the defining feature of the social-media era. Apparently, Mark Zuckerberg has read a great deal about Caesar Augustus, but I wish he’d read about the regulation of the pamphlet press in 17th-century Europe. It might have spared America the annihilation of social trust.
  • These failures don’t derive from mean-spiritedness or even greed, but from a willful obliviousness. The engineers do not recognize that humanistic questions—like, say, hermeneutics or the historical contingency of freedom of speech or the genealogy of morality—are real questions with real consequences
  • Everybody is entitled to their opinion about politics and culture, it’s true, but an opinion is different from a grounded understanding. The most direct path to catastrophe is to treat complex problems as if they’re obvious to everyone. You can lose billions of dollars pretty quickly that way.
  • As the technologists have ignored humanistic questions to their peril, the humanists have greeted the technological revolutions of the past 50 years by committing soft suicide.
  • As of 2017, the number of English majors had nearly halved since the 1990s. History enrollments have declined by 45 percent since 2007 alone
  • the humanities have not fundamentally changed their approach in decades, despite technology altering the entire world around them. They are still exploding meta-narratives like it’s 1979, an exercise in self-defeat.
  • Contemporary academia engages, more or less permanently, in self-critique on any and every front it can imagine.
  • the situation requires humanists to explain why they matter, not constantly undermine their own intellectual foundations.
  • The humanities promise students a journey to an irrelevant, self-consuming future; then they wonder why their enrollments are collapsing. Is it any surprise that nearly half of humanities graduates regret their choice of major?
  • Despite the clear value of a humanistic education, its decline continues. Over the past 10 years, STEM has triumphed, and the humanities have collapsed. The number of students enrolled in computer science is now nearly the same as the number of students enrolled in all of the humanities combined.
  • now there’s GPT-3. Natural-language processing presents the academic humanities with a whole series of unprecedented problems
  • Practical matters are at stake: Humanities departments judge their undergraduate students on the basis of their essays. They give Ph.D.s on the basis of a dissertation’s composition. What happens when both processes can be significantly automated?
  • despite the drastic divide of the moment, natural-language processing is going to force engineers and humanists together. They are going to need each other despite everything. Computer scientists will require basic, systematic education in general humanism: The philosophy of language, sociology, history, and ethics are not amusing questions of theoretical speculation anymore. They will be essential in determining the ethical and creative use of chatbots, to take only an obvious example.
  • The humanists will need to understand natural-language processing because it’s the future of language
  • that space for collaboration can exist, both sides will have to take the most difficult leaps for highly educated people: Understand that they need the other side, and admit their basic ignorance.
  • But that’s always been the beginning of wisdom, no matter what technological era we happen to inhabit.
Javier E

Julian Assange on Living in a Surveillance Society - NYTimes.com - 0 views

  • Describing the atomic bomb (which had only two months before been used to flatten Hiroshima and Nagasaki) as an “inherently tyrannical weapon,” he predicts that it will concentrate power in the hands of the “two or three monstrous super-states” that have the advanced industrial and research bases necessary to produce it. Suppose, he asks, “that the surviving great nations make a tacit agreement never to use the atomic bomb against one another? Suppose they only use it, or the threat of it, against people who are unable to retaliate?”
  • The likely result, he concludes, will be “an epoch as horribly stable as the slave empires of antiquity.” Inventing the term, he predicts “a permanent state of ‘cold war,’” a “peace that is no peace,” in which “the outlook for subject peoples and oppressed classes is still more hopeless.”
  • the destruction of privacy widens the existing power imbalance between the ruling factions and everyone else, leaving “the outlook for subject peoples and oppressed classes,” as Orwell wrote, “still more hopeless.”
  • ...10 more annotations...
  • At present even those leading the charge against the surveillance state continue to treat the issue as if it were a political scandal that can be blamed on the corrupt policies of a few bad men who must be held accountable. It is widely hoped that all our societies need to do to fix our problems is to pass a few laws.
  • The cancer is much deeper than this. We live not only in a surveillance state, but in a surveillance society. Totalitarian surveillance is not only embodied in our governments; it is embedded in our economy, in our mundane uses of technology and in our everyday interactions.
  • The very concept of the Internet — a single, global, homogeneous network that enmeshes the world — is the essence of a surveillance state. The Internet was built in a surveillance-friendly way because governments and serious players in the commercial Internet wanted it that way. There were alternatives at every step of the way. They were ignored.
  • Unlike intelligence agencies, which eavesdrop on international telecommunications lines, the commercial surveillance complex lures billions of human beings with the promise of “free services.” Their business model is the industrial destruction of privacy. And yet even the more strident critics of NSA surveillance do not appear to be calling for an end to Google and Facebook
  • At their core, companies like Google and Facebook are in the same business as the U.S. government’s National Security Agency. They collect a vast amount of information about people, store it, integrate it and use it to predict individual and group behavior, which they then sell to advertisers and others. This similarity made them natural partners for the NSA
  • there is an undeniable “tyrannical” side to the Internet. But the Internet is too complex to be unequivocally categorized as a “tyrannical” or a “democratic” phenomenon.
  • It is possible for more people to communicate and trade with others in more places in a single instant than it ever has been in history. The same developments that make our civilization easier to surveil make it harder to predict. They have made it easier for the larger part of humanity to educate itself, to race to consensus, and to compete with entrenched power groups.
  • If there is a modern analogue to Orwell’s “simple” and “democratic weapon,” which “gives claws to the weak,” it is cryptography, the basis for the mathematics behind Bitcoin and the best secure communications programs. It is cheap to produce: cryptographic software can be written on a home computer. It is even cheaper to spread: software can be copied in a way that physical objects cannot. But it is also insuperable — the mathematics at the heart of modern cryptography are sound, and can withstand the might of a superpower. The same technologies that allowed the Allies to encrypt their radio communications against Axis intercepts can now be downloaded over a dial-up Internet connection and deployed with a cheap laptop. (A short code sketch of how little this takes appears after these annotations.)
  • It is too early to say whether the “democratizing” or the “tyrannical” side of the Internet will eventually win out. But acknowledging them — and perceiving them as the field of struggle — is the first step toward acting effectively
  • Humanity cannot now reject the Internet, but clearly we cannot surrender it either. Instead, we have to fight for it. Just as the dawn of atomic weapons inaugurated the Cold War, the manifold logic of the Internet is the key to understanding the approaching war for the intellectual center of our civilization
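To make concrete the annotation's claim that strong cryptography is cheap to produce and to spread, here is a minimal sketch — not anything from the op-ed itself — in Python, assuming the widely used third-party cryptography package is installed (pip install cryptography). A few lines on a commodity machine yield authenticated encryption that, used correctly, withstands brute force.

```python
# A minimal sketch of the annotation's point: strong encryption on a home computer.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# Generate fresh random key material; Fernet splits it into an AES-128-CBC
# encryption key and an HMAC-SHA256 authentication key.
key = Fernet.generate_key()
f = Fernet(key)

# Encrypt: the returned token bundles IV, timestamp, ciphertext, and HMAC tag.
token = f.encrypt(b"meet at the usual place")

# Decrypt: anyone holding the key recovers the message instantly; anyone
# without it faces a 128-bit keyspace that cannot be searched by brute force.
print(f.decrypt(token).decode())
```

The sketch illustrates the economics the annotation describes: producing the program costs nothing beyond a commodity machine, copying it is free, and reversing it without the key is computationally infeasible.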