
Home/ TOK Friends/ Group items tagged Cognitive


Javier E

Psychological nativism - Wikipedia

  • In the field of psychology, nativism is the view that certain skills or abilities are "native" or hard-wired into the brain at birth. This is in contrast to the "blank slate" or tabula rasa view, which states that the brain has inborn capabilities for learning from the environment but does not contain content such as innate beliefs.
  • Some nativists believe that specific beliefs or preferences are "hard-wired". For example, one might argue that some moral intuitions are innate or that color preferences are innate. A less established argument is that nature supplies the human mind with specialized learning devices. This latter view differs from empiricism only to the extent that the algorithms that translate experience into information may be more complex and specialized in nativist theories than in empiricist theories. However, empiricists largely remain open to the nature of learning algorithms and are by no means restricted to the historical associationist mechanisms of behaviorism.
  • Nativism has a history in philosophy, particularly as a reaction to the straightforward empiricist views of John Locke and David Hume. Hume had given persuasive logical arguments that people cannot infer causality from perceptual input. The most one could hope to infer is that two events happen in succession or simultaneously. One response to this argument involves positing that concepts not supplied by experience, such as causality, must exist prior to any experience and hence must be innate.
  • The philosopher Immanuel Kant (1724–1804) argued in his Critique of Pure Reason that the human mind knows objects in innate, a priori ways. Kant claimed that humans, from birth, must experience all objects as being successive (time) and juxtaposed (space). His list of inborn categories describes predicates that the mind can attribute to any object in general. Arthur Schopenhauer (1788–1860) agreed with Kant, but reduced the number of innate categories to one—causality—which presupposes the others.
  • Modern nativism is most associated with the work of Jerry Fodor (1935–2017), Noam Chomsky (b. 1928), and Steven Pinker (b. 1954), who argue that humans from birth have certain cognitive modules (specialised genetically inherited psychological abilities) that allow them to learn and acquire certain skills, such as language.
  • For example, children demonstrate a facility for acquiring spoken language but require intensive training to learn to read and write. This poverty of the stimulus observation became a principal component of Chomsky's argument for a "language organ"—a genetically inherited neurological module that confers a somewhat universal understanding of syntax that all neurologically healthy humans are born with, which is fine-tuned by an individual's experience with their native language.
  • In The Blank Slate (2002), Pinker similarly cites the linguistic capabilities of children, relative to the amount of direct instruction they receive, as evidence that humans have an inborn facility for speech acquisition (but not for literacy acquisition).
  • A number of other theorists[1][2][3] have disagreed with these claims. Instead, they have outlined alternative theories of how modularization might emerge over the course of development, as a result of a system gradually refining and fine-tuning its responses to environmental stimuli.[4]
  • Many empiricists are now also trying to apply modern learning models and techniques to the question of language acquisition, with marked success.[20] Similarity-based generalization marks another avenue of recent research, which suggests that children may be able to rapidly learn how to use new words by generalizing about the usage of similar words that they already know (see also the distributional hypothesis).[14][21][22][23]
  • The term universal grammar (or UG) is used for the purported innate biological properties of the human brain, whatever exactly they turn out to be, that are responsible for children's successful acquisition of a native language during the first few years of life. The person most strongly associated with the hypothesising of UG is Noam Chomsky, although the idea of Universal Grammar has clear historical antecedents at least as far back as the 1300s, in the form of the Speculative Grammar of Thomas of Erfurt.
  • This evidence is all the more impressive when one considers that most children do not receive reliable corrections for grammatical errors.[9] Indeed, even children who for medical reasons cannot produce speech, and therefore have no possibility of producing an error in the first place, have been found to master both the lexicon and the grammar of their community's language perfectly.[10] The fact that children succeed at language acquisition even when their linguistic input is severely impoverished, as it is when no corrective feedback is available, is related to the argument from the poverty of the stimulus, and is another claim for a central role of UG in child language acquisition.
  • Researchers at Blue Brain discovered a network of about fifty neurons that they believed were building blocks of more complex knowledge: units of basic innate knowledge that could be combined in different, more complex ways to give rise to acquired knowledge, such as memory.[11]
  • …experience, the tests would bring about very different characteristics for each rat. However, the rats all displayed similar characteristics, which suggests that their neuronal circuits must have been established prior to their experiences. The Blue Brain Project research suggests that some of the "building blocks" of knowledge are genetic and present at birth.[11]
  • modern nativist theory makes little in the way of specific falsifiable and testable predictions, and has been compared by some empiricists to a pseudoscience or nefarious brand of "psychological creationism". As the influential psychologist Henry L. Roediger III remarked, "Chomsky was and is a rationalist; he had no uses for experimental analyses or data of any sort that pertained to language, and even experimental psycholinguistics was and is of little interest to him".[13]
  • Chomsky's poverty of the stimulus argument is controversial within linguistics.[14][15][16][17][18][19]
  • Neither the five-year-old nor the adults in the community can easily articulate the principles of the grammar they are following. Experimental evidence shows that infants come equipped with presuppositions that allow them to acquire the rules of their language.[6]
  • Paul Griffiths, in "What is Innateness?", argues that innateness is too confusing a concept to be fruitfully employed as it confuses "empirically dissociated" concepts. In a previous paper, Griffiths argued that innateness specifically confuses these three distinct biological concepts: developmental fixity, species nature, and intended outcome. Developmental fixity refers to how insensitive a trait is to environmental input, species nature reflects what it is to be an organism of a certain kind, and the intended outcome is how an organism is meant to develop.[24]
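The similarity-based generalization mentioned in the annotations above, in which children extend the usage of familiar words to new words that appear in similar contexts, can be sketched with a toy distributional model. This is purely an illustrative sketch: the corpus, context window, and cosine measure below are assumptions chosen for demonstration, not the models used in the cited research.

```python
from collections import Counter
from math import sqrt

# Toy corpus: "cat" and "dog" occur in similar contexts, "car" does not.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the mat",
    "the cat chased a mouse",
    "the dog chased a ball",
    "the car drove down the road",
]

def context_vector(word, sentences, window=2):
    """Count words co-occurring with `word` within +/- window positions."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == word:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        counts[tokens[j]] += 1
    return counts

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    if not u or not v:
        return 0.0
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = lambda w: sqrt(sum(c * c for c in w.values()))
    return dot / (norm(u) * norm(v))

cat, dog, car = (context_vector(w, corpus) for w in ("cat", "dog", "car"))
print(cosine(cat, dog))  # high: "cat" and "dog" share contexts
print(cosine(cat, car))  # lower: "car" appears in different contexts
```

On this tiny corpus, "cat" and "dog" end up with nearly identical context vectors while "car" does not, which is the distributional hypothesis in miniature: words used in similar contexts are treated as similar, so usage learned for one can be generalized to the other.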
Javier E

Opinion | Noam Chomsky: The False Promise of ChatGPT - The New York Times

  • we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge.
  • OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney are marvels of machine learning. Roughly speaking, they take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs — such as seemingly humanlike language and thought.
  • if machine learning programs like ChatGPT continue to dominate the field of A.I.
  • we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.
  • It is at once comic and tragic, as Borges might have noted, that so much money and attention should be concentrated on so little a thing — something so trivial when contrasted with the human mind, which by dint of language, in the words of Wilhelm von Humboldt, can make “infinite use of finite means,” creating ideas and theories with universal reach.
  • The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question
  • the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations
  • such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case
  • Those are the ingredients of explanation, the mark of true intelligence.
  • Here’s an example. Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. A prediction might have been the statement “The apple will fall if I open my hand.”
  • an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like “Any such object would fall,” plus the additional clause “because of the force of gravity” or “because of the curvature of space-time” or whatever. That is a causal explanation: “The apple would not have fallen but for the force of gravity.” That is thinking.
  • The crux of machine learning is description and prediction; it does not posit any causal mechanisms or physical laws
  • any human-style explanation is not necessarily correct; we are fallible. But this is part of what it means to think: To be right, it must be possible to be wrong. Intelligence consists not only of creative conjectures but also of creative criticism. Human-style thought is based on possible explanations and error correction, a process that gradually limits what possibilities can be rationally considered.
  • ChatGPT and similar programs are, by design, unlimited in what they can “learn” (which is to say, memorize); they are incapable of distinguishing the possible from the impossible.
  • Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.
  • For this reason, the predictions of machine learning systems will always be superficial and dubious.
  • some machine learning enthusiasts seem to be proud that their creations can generate correct “scientific” predictions (say, about the motion of physical bodies) without making use of explanations (involving, say, Newton’s laws of motion and universal gravitation). But this kind of prediction, even when successful, is pseudoscience.
  • While scientists certainly seek theories that have a high degree of empirical corroboration, as the philosopher Karl Popper noted, “we do not seek highly probable theories but explanations; that is to say, powerful and highly improbable theories.”
  • The theory that apples fall to earth because mass bends space-time (Einstein’s view) is highly improbable, but it actually tells you why they fall. True intelligence is demonstrated in the ability to think and express improbable but insightful things.
  • This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought and ought not to be (and of course subjecting those principles themselves to creative criticism)
  • True intelligence is also capable of moral thinking
  • To be useful, ChatGPT must be empowered to generate novel-looking output; to be acceptable to most of its users, it must steer clear of morally objectionable content
  • In 2016, for example, Microsoft’s Tay chatbot (a precursor to ChatGPT) flooded the internet with misogynistic and racist content, having been polluted by online trolls who filled it with offensive training data. How to solve the problem in the future? In the absence of a capacity to reason from moral principles, ChatGPT was crudely restricted by its programmers from contributing anything novel to controversial — that is, important — discussions. It sacrificed creativity for a kind of amorality.
  • Here, ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a “just following orders” defense, shifting responsibility to its creators.
  • In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.
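The op-ed's characterization of machine learning, take huge amounts of data, find patterns, and emit the statistically probable continuation, can be illustrated in miniature with a bigram model. This is a deliberately crude sketch over assumed toy data; systems like ChatGPT use neural networks trained on vastly larger corpora, not explicit bigram counts, but the underlying objective of predicting probable next tokens is the same in spirit.

```python
from collections import Counter, defaultdict

# Minimal bigram language model: the crudest form of "search for
# patterns and emit the statistically probable next word".
corpus = (
    "the apple falls to the ground . "
    "the apple falls from the tree . "
    "the rain falls from the sky ."
).split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    """Return the continuation seen most often in the training data."""
    return following[word].most_common(1)[0][0]

print(most_probable_next("apple"))  # "falls" (follows "apple" twice)
print(most_probable_next("falls"))  # "from" (twice, vs. "to" once)
```

Note what the model does not contain: no notion of gravity, causality, or impossibility, only frequencies. It would as happily learn "the apple rises" from a corpus that said so, which is exactly the point the authors press about trading "merely in probabilities that change over time".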
Javier E

Why Facts Don't Change Our Minds | The New Yorker

  • In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.
  • Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times
  • Others discovered that they were hopeless. They identified the real note in only ten instances.
  • The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.
  • In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong.
  • Once again, midway through the study, the students were informed that they’d been misled, and that the information they’d received was entirely fictitious. The students were then asked to describe their own beliefs
  • The students who’d received the first packet thought that he would avoid it. The students in the second group thought he’d embrace it.
  • Even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,” the researchers noted. In this case, the failure was “particularly impressive,”
  • Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain
  • Reason is an adaptation to the hypersocial niche humans have evolved for themselves
  • Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them
  • Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments
  • “One implication of the naturalness with which we divide cognitive labor,” they write, is that there’s “no sharp boundary between one person’s ideas and knowledge” and “those of other members” of the group.
  • reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context
Javier E

Elon Musk May Kill Us Even If Donald Trump Doesn't

  • In his extraordinary 2021 book, The Constitution of Knowledge: A Defense of Truth, Jonathan Rauch, a scholar at Brookings, writes that modern societies have developed an implicit “epistemic” compact–an agreement about how we determine truth–that rests on a broad public acceptance of science and reason, and a respect and forbearance towards institutions charged with advancing knowledge.
  • Today, Rauch writes, those institutions have given way to digital “platforms” that traffic in “information” rather than knowledge and disseminate that information not according to its accuracy but its popularity. And what is popular is sensation, shock, outrage. The old elite consensus has given way to an algorithm. Donald Trump, an entrepreneur of outrage, capitalized on the new technology to lead what Rauch calls “an epistemic secession.”
  • Rauch foresees the arrival of “Internet 3.0,” in which the big companies accept that content regulation is in their interest and erect suitable “guardrails.” In conversation with me, Rauch said that social media companies now recognize that their algorithms are “toxic,” and spoke hopefully of alternative models like Mastodon, which eschews algorithms and allows users to curate their own feeds.
  • In an Atlantic essay, “Why The Past Ten Years of American Life have Been Uniquely Stupid,” and in a follow-up piece, Haidt argued that the Age of Gutenberg–of books and the deep understanding that comes with them–ended somewhere around 2014 with the rise of “Share,” “Like” and “Retweet” buttons that opened the way for trolls, hucksters and Trumpists.
  • The new age of “hyper-virality,” he writes, has given us both January 6 and cancel culture–ugly polarization in both directions. On the subject of stupidification, we should add the fact that high school students now get virtually their entire stock of knowledge about the world from digital platforms.
  • Haidt proposed several reforms, including modifying Facebook’s “Share” function and requiring “user verification” to get rid of trolls. But he doesn’t really believe in his own medicine
  • Haidt said that the era of “shared understanding” is over–forever. When I asked if he could envision changes that would help protect democracy, Haidt quoted Goldfinger: “Do you expect me to talk?” “No, Mr. Bond, I expect you to die!”
  • Social media is a public health hazard–the cognitive equivalent of tobacco and sugary drinks. Adopting a public health model, we could, for example, ban the use of algorithms to reduce virality, or even require social media platforms to adopt a subscription rather than advertising revenue model and thus remove their incentive to amass ever more eyeballs.
  • We could, but we won’t, because unlike other public health hazards, digital platforms are forms of speech. Fox News is probably responsible for more polarization than all social media put together, but the federal government could not compel it–and all other media firms–to change its revenue model.
  • If Mark Zuckerberg or Elon Musk won’t do so out of concern for the public good–a pretty safe bet–they could be compelled to do so only by public or competitive pressure. 
  • Taiwan has proved resilient because its society is resilient; people reject China’s lies. We, here, don’t lack for fact-checkers, but rather for people willing to believe them. The problem is not the technology, but ourselves.
  • you have to wonder if people really are repelled by our poisonous discourse, or by the hailstorm of disinformation, or if they just want to live comfortably inside their own bubble, and not somebody else’s.
  • If Jonathan Haidt is right, it’s not because we’ve created a self-replicating machine that is destined to annihilate reason; it’s because we are the self-replicating machine.
Javier E

The new science of death: 'There's something happening in the brain that makes no sense...

  • Jimo Borjigin, a professor of neurology at the University of Michigan, had been troubled by the question of what happens to us when we die. She had read about the near-death experiences of certain cardiac-arrest survivors who had undergone extraordinary psychic journeys before being resuscitated. Sometimes, these people reported travelling outside of their bodies towards overwhelming sources of light where they were greeted by dead relatives. Others spoke of coming to a new understanding of their lives, or encountering beings of profound goodness
  • Borjigin didn’t believe the content of those stories was true – she didn’t think the souls of dying people actually travelled to an afterworld – but she suspected something very real was happening in those patients’ brains. In her own laboratory, she had discovered that rats undergo a dramatic storm of many neurotransmitters, including serotonin and dopamine, after their hearts stop and their brains lose oxygen. She wondered if humans’ near-death experiences might spring from a similar phenomenon, and if it was occurring even in people who couldn’t be revived
  • when she looked at the scientific literature, she found little enlightenment. “To die is such an essential part of life,” she told me recently. “But we knew almost nothing about the dying brain.” So she decided to go back and figure out what had happened inside the brains of people who died at the University of Michigan neurointensive care unit.
  • Since the 1960s, advances in resuscitation had helped to revive thousands of people who might otherwise have died. About 10% or 20% of those people brought with them stories of near-death experiences in which they felt their souls or selves departing from their bodies
  • According to several international surveys and studies, one in 10 people claims to have had a near-death experience involving cardiac arrest, or a similar experience in circumstances where they may have come close to death. That’s roughly 800 million souls worldwide who may have dipped a toe in the afterlife.
  • In the 1970s, a small network of cardiologists, psychiatrists, medical sociologists and social psychologists in North America and Europe began investigating whether near-death experiences proved that dying is not the end of being, and that consciousness can exist independently of the brain. The field of near-death studies was born.
  • in 1975, an American medical student named Raymond Moody published a book called Life After Life.
  • Meanwhile, new technologies and techniques were helping doctors revive more and more people who, in earlier periods of history, would have almost certainly been permanently deceased.
  • “We are now at the point where we have both the tools and the means to scientifically answer the age-old question: What happens when we die?” wrote Sam Parnia, an accomplished resuscitation specialist and one of the world’s leading experts on near-death experiences, in 2006. Parnia himself was devising an international study to test whether patients could have conscious awareness even after they were found clinically dead.
  • Borjigin, together with several colleagues, took the first close look at the record of electrical activity in the brain of Patient One after she was taken off life support. What they discovered – in results reported for the first time last year – was almost entirely unexpected, and has the potential to rewrite our understanding of death.
  • “I believe what we found is only the tip of a vast iceberg,” Borjigin told me. “What’s still beneath the surface is a full account of how dying actually takes place. Because there’s something happening in there, in the brain, that makes no sense.”
  • Over the next 30 years, researchers collected thousands of case reports of people who had had near-death experiences
  • near-death studies was already splitting into several schools of belief, whose tensions continue to this day. One influential camp was made up of spiritualists, some of them evangelical Christians, who were convinced that near-death experiences were genuine sojourns in the land of the dead and divine
  • Moody was their most important spokesman; he eventually claimed to have had multiple past lives and built a “psychomanteum” in rural Alabama where people could attempt to summon the spirits of the dead by gazing into a dimly lit mirror.
  • It is no longer unheard of for people to be revived even six hours after being declared clinically dead. In 2011, Japanese doctors reported the case of a young woman who was found in a forest one morning after an overdose stopped her heart the previous night; using advanced technology to circulate blood and oxygen through her body, the doctors were able to revive her more than six hours later, and she was able to walk out of the hospital after three weeks of care
  • The second, and largest, faction of near-death researchers were the parapsychologists, those interested in phenomena that seemed to undermine the scientific orthodoxy that the mind could not exist independently of the brain. These researchers, who were by and large trained scientists following well established research methods, tended to believe that near-death experiences offered evidence that consciousness could persist after the death of the individua
  • Their aim was to find ways to test their theories of consciousness empirically, and to turn near-death studies into a legitimate scientific endeavour.
  • Finally, there emerged the smallest contingent of near-death researchers, who could be labelled the physicalists. These were scientists, many of whom studied the brain, who were committed to a strictly biological account of near-death experiences. Like dreams, the physicalists argued, near-death experiences might reveal psychological truths, but they did so through hallucinatory fictions that emerged from the workings of the body and the brain.
  • Between 1975, when Moody published Life After Life, and 1984, only 17 articles in the PubMed database of scientific publications mentioned near-death experiences. In the following decade, there were 62. In the most recent 10-year span, there were 221.
  • Today, there is a widespread sense throughout the community of near-death researchers that we are on the verge of great discoveries
  • “We really are in a crucial moment where we have to disentangle consciousness from responsiveness, and maybe question every state that we consider unconscious,”
  • “I think in 50 or 100 years time we will have discovered the entity that is consciousness,” he told me. “It will be taken for granted that it wasn’t produced by the brain, and it doesn’t die when you die.”
  • it is in large part because of a revolution in our ability to resuscitate people who have suffered cardiac arrest
  • In a medical setting, “clinical death” is said to occur at the moment the heart stops pumping blood, and the pulse stops. This is widely known as cardiac arrest
  • Loss of oxygen to the brain and other organs generally follows within seconds or minutes, although the complete cessation of activity in the heart and brain – which is often called “flatlining” or, in the case of the latter, “brain death” – may not occur for many minutes or even hours.
  • That began to change in 1960, when the combination of mouth-to-mouth ventilation, chest compressions and external defibrillation known as cardiopulmonary resuscitation, or CPR, was formalised. Shortly thereafter, a massive campaign was launched to educate clinicians and the public on CPR’s basic techniques, and soon people were being revived in previously unthinkable, if still modest, numbers.
  • scientists learned that, even in its acute final stages, death is not a point, but a process. After cardiac arrest, blood and oxygen stop circulating through the body, cells begin to break down, and normal electrical activity in the brain gets disrupted. But the organs don’t fail irreversibly right away, and the brain doesn’t necessarily cease functioning altogether. There is often still the possibility of a return to life. In some cases, cell death can be stopped or significantly slowed, the heart can be restarted, and brain function can be restored. In other words, the process of death can be reversed.
  • In his book, Moody distilled the reports of 150 people who had had intense, life-altering experiences in the moments surrounding a cardiac arrest. Although the reports varied, he found that they often shared one or more common features or themes. The narrative arc of the most detailed of those reports – departing the body and travelling through a long tunnel, having an out-of-body experience, encountering spirits and a being of light, one’s whole life flashing before one’s eyes, and returning to the body from some outer limit – became so canonical that the art critic Robert Hughes could refer to it years later as “the familiar kitsch of near-death experience”.
  • In 2019, a British woman named Audrey Schoeman who was caught in a snowstorm spent six hours in cardiac arrest before doctors brought her back to life with no evident brain damage.
  • That is a key tenet of the parapsychologists’ arguments: if there is consciousness without brain activity, then consciousness must dwell somewhere beyond the brain
  • Some of the parapsychologists speculate that it is a “non-local” force that pervades the universe, like electromagnetism. This force is received by the brain, but is not generated by it, the way a television receives a broadcast.
  • In order for this argument to hold, something else has to be true: near-death experiences have to happen during death, after the brain shuts down
  • To prove this, parapsychologists point to a number of rare but astounding cases known as “veridical” near-death experiences, in which patients seem to report details from the operating room that they might have known only if they had conscious awareness during the time that they were clinically dead.
  • At the very least, Parnia and his colleagues have written, such phenomena are “inexplicable through current neuroscientific models”. Unfortunately for the parapsychologists, however, none of the reports of post-death awareness holds up to strict scientific scrutiny. “There are many claims of this kind, but in my long decades of research into out-of-body and near-death experiences I never met any convincing evidence that this is true,”
  • In other cases, there’s not enough evidence to prove that the experiences reported by cardiac arrest survivors happened when their brains were shut down, as opposed to in the period before or after they supposedly “flatlined”. “So far, there is no sufficiently rigorous, convincing empirical evidence that people can observe their surroundings during a near-death experience,”
  • The parapsychologists tend to push back by arguing that even if each of the cases of veridical near-death experiences leaves room for scientific doubt, surely the accumulation of dozens of these reports must count for something. But that argument can be turned on its head: if there are so many genuine instances of consciousness surviving death, then why should it have so far proven impossible to catch one empirically?
  • The spiritualists and parapsychologists are right to insist that something deeply weird is happening to people when they die, but they are wrong to assume it is happening in the next life rather than this one. At least, that is the implication of what Jimo Borjigin found when she investigated the case of Patient One.
  • In the moments after Patient One was taken off oxygen, there was a surge of activity in her dying brain. Areas that had been nearly silent while she was on life support suddenly thrummed with high-frequency electrical signals called gamma waves. In particular, the parts of the brain that scientists consider a “hot zone” for consciousness became dramatically alive. In one section, the signals remained detectable for more than six minutes. In another, they were 11 to 12 times higher than they had been before Patient One’s ventilator was removed.
  • “As she died, Patient One’s brain was functioning in a kind of hyperdrive,” Borjigin told me. For about two minutes after her oxygen was cut off, there was an intense synchronisation of her brain waves, a state associated with many cognitive functions, including heightened attention and memory. The synchronisation dampened for about 18 seconds, then intensified again for more than four minutes. It faded for a minute, then came back for a third time.
  • In those same periods of dying, different parts of Patient One’s brain were suddenly in close communication with each other. The most intense connections started immediately after her oxygen stopped, and lasted for nearly four minutes. There was another burst of connectivity more than five minutes and 20 seconds after she was taken off life support. In particular, areas of her brain associated with processing conscious experience – areas that are active when we move through the waking world, and when we have vivid dreams – were communicating with those involved in memory formation. So were parts of the brain associated with empathy.
  • Even as she slipped irreversibly away, something that looked astonishingly like life was taking place over several minutes in Patient One’s brain.
  • Although a few earlier instances of brain waves had been reported in dying human brains, nothing as detailed and complex as what occurred in Patient One had ever been detected.
  • Given the levels of activity and connectivity in particular regions of her dying brain, Borjigin believes it’s likely that Patient One had a profound near-death experience with many of its major features: out-of-body sensations, visions of light, feelings of joy or serenity, and moral re-evaluations of one’s life.
  • “The brain, contrary to everybody’s belief, is actually super active during cardiac arrest,” Borjigin said. Death may be far more alive than we ever thought possible.
  • “The brain is so resilient, the heart is so resilient, that it takes years of abuse to kill them,” she pointed out. “Why then, without oxygen, can a perfectly healthy person die within 30 minutes, irreversibly?”
  • Evidence is already emerging that even total brain death may someday be reversible. In 2019, scientists at Yale University harvested the brains of pigs that had been decapitated in a commercial slaughterhouse four hours earlier. Then they perfused the brains for six hours with a special cocktail of drugs and synthetic blood. Astoundingly, some of the cells in the brains began to show metabolic activity again, and some of the synapses even began firing.
Javier E

Opinion | America's Irrational Macroeconomic Freak Out - The New York Times - 0 views

  • The same inflationary forces that pushed these prices higher have also pushed wages to be 22 percent higher than on the eve of the pandemic. Official statistics show that the stuff that a typical American buys now costs 20 percent more over the same period. Some prices rose a little more, some a little less, but they all roughly rose in parallel.
  • It follows that the typical worker can now afford two percent more stuff. That doesn’t sound like a lot, but it’s a faster rate of improvement than the average rate of real wage growth over the past few decades.
  • many folks feel that they’re falling behind, even when a careful analysis of the numbers suggests they’re not.
  • ...16 more annotations...
  • That’s because real people — and yes, even professional economists — tend to process the parallel rise of prices and wages in quite different ways.
  • In brief, researchers have found that we tend to internalize the gains due to inflation and externalize the losses. These different processes yield different emotional responses.
  • Let’s start with higher prices. Sticker shock hurts. Even as someone who closely studies the inflation statistics, I’m still often surprised by higher prices. They feel unfair. They undermine my spending power, and my sense of control and order.
  • in reality, higher prices are only the first act of the inflationary play. It’s a play that economists have seen before. In episode after episode, surges in prices have led to — or been preceded by — a proportional surge in wages.
  • Even though wages tend to rise hand-in-hand with prices, we tell ourselves a different story, in which the wage rises we get have nothing to do with the price rises that caused them.
  • But then my economist brain took over, and slowly it sunk in that my raise wasn’t a reward for hard work, but rather a cost-of-living adjustment
  • Internalizing the gain and externalizing the cost of inflation protects you from this deflating realization. But it also distorts your sense of reality.
  • The reason so many Americans feel that inflation is stealing their purchasing power is that they give themselves unearned credit for the offsetting wage rises that actually restore it.
  • younger folks — anyone under 60 — had never experienced sustained inflation rates greater than 5 percent in their adult lives. And I think this explains why they’re so angry about today’s inflation.
  • While older Americans understood that the pain of inflation is transitory, younger folks aren’t so sure. Inflation is a lot scarier when you fear that today’s price rises will permanently undermine your ability to make ends meet.
  • Perhaps this explains why the recent moderate burst of inflation has created seemingly more anxiety than previous inflationary episodes.
  • More generally, being an economist makes me an optimist. Social media is awash with (false) claims that we’re in a “silent depression,” and those who want to make America great again are certain it was once so much better.
  • in reality, our economy this year is larger, more productive and will yield higher average incomes than in any prior year on record in American history
  • And because the United States is the world’s richest major economy, we can now say that we are almost certainly part of the richest large society in its richest year in the history of humanity.
  • The income of the average American will double approximately every 39 years. And so when my kids are my age, average income will be roughly double what it is today. Far from being fearful for my kids, I’m envious of the extraordinary riches their generation will enjoy.
  • Psychologists describe anxiety disorders as occurring when the panic you feel is out of proportion to the danger you face. By this definition, we’re in the midst of a macroeconomic anxiety attack.
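The two arithmetic claims in the excerpts above — the roughly two percent real-wage gain and the 39-year doubling of average income — can be checked directly. A quick sketch (the ~1.8 percent annual growth rate is inferred from the doubling time, not stated in the piece):

```python
# Real wage change: wages up 22%, prices up 20% since the eve of the pandemic.
real_wage_gain = 1.22 / 1.20 - 1
print(f"real wage gain: {real_wage_gain:.1%}")  # ~1.7%, i.e. "about two percent more stuff"

# Income doubling every 39 years implies an annual real growth rate g
# satisfying (1 + g)**39 == 2, so g = 2**(1/39) - 1.
g = 2 ** (1 / 39) - 1
print(f"implied annual growth: {g:.2%}")  # ~1.79% per year
```

Note that real wages divide out, rather than subtract, the price level — which is why 22 percent nominal growth against 20 percent inflation yields a bit under two percent, not exactly two.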
Javier E

Why It's So Hard To Pay Attention, Explained By Science - Fast Company - 0 views

  • Today, each of us individually generates more information than ever before in human history. Our world is now awash in an unprecedented volume of data. The trouble is, our brains haven’t evolved to be able to process it all.
  • information “tumbles faster and faster through bigger and bigger computers down to everybody’s fingertips, which are holding devices with more processing power than the Apollo mission control.”
  • Information scientists have quantified all this: In 2011, Americans took in five times as much information every day as they did in 1986—the equivalent of 174 newspapers.
  • ...18 more annotations...
  • During our leisure time, not counting work, each of us processes 34 gigabytes, or 100,000 words, every day
  • The world’s 21,274 television stations produce 85,000 hours of original programming every day as we watch an average of five hours of television daily, the equivalent of 20 gigabytes of audio-video images
  • That’s not counting YouTube, which uploads 6,000 hours of video every hour.
  • We’ve created a world with 300 exabytes (300,000,000,000,000,000,000 pieces) of human-made information. If each of those pieces of information were written on a 3-by-5-inch index card and then spread out side by side, just one person’s share—your share of this information—would cover every square inch of Massachusetts and Connecticut combined.
  • Neurons are living cells with a metabolism; they need oxygen and glucose to survive, and when they’ve been working hard, we experience fatigue. Every status update you read on Facebook, every tweet or text message you get from a friend, is competing for resources in your brain with important things like whether to put your savings in stocks or bonds,
  • The processing capacity of the conscious mind has been estimated (by the researcher Mihaly Csikszentmihalyi and, independently, by Bell Labs engineer Robert Lucky) at 120 bits per second. That bandwidth, or window, is the speed limit for the traffic of information we can pay conscious attention to at any one time.
  • While a great deal occurs below the threshold of our awareness, and this has an impact on how we feel and what our life is going to be like, in order for something to become encoded as part of your experience, you need to have paid conscious attention to it.
  • What does this bandwidth restriction—this information speed limit—mean in terms of our interactions with others? In order to understand one person speaking to us, we need to process 60 bits of information per second. With a processing limit of 120 bits per second, this means you can barely understand two people talking to you at the same time
  • We’re surrounded on this planet by billions of other humans, but we can understand only two at a time at the most! It’s no wonder that the world is filled with so much misunderstanding.
  • With such attentional restrictions, it’s clear why many of us feel overwhelmed by managing some of the most basic aspects of life. Part of the reason is that our brains evolved to help us deal with life during the hunter-gatherer phase of human history
  • Attention is the most essential mental resource for any organism. It determines which aspects of the environment we deal with, and most of the time, various automatic, subconscious processes make the correct choice about what gets passed through to our conscious awareness. For this to happen, millions of neurons are constantly monitoring the environment to select the most important things for us to focus on.
  • These neurons are collectively the “attentional filter.” They work largely in the background, outside of our conscious awareness. This is why most of the perceptual detritus of our daily lives doesn’t register.
  • The attentional filter is one of evolution’s greatest achievements. In nonhumans, it ensures that they don’t get distracted by irrelevant things
  • When our protohuman ancestors left the cover of the trees to seek new sources of food, they simultaneously opened up a vast range of new possibilities for nourishment and exposed themselves to a wide range of new predators. Being alert and vigilant to threatening sounds and visual cues is what allowed them to survive; this meant allowing an increasing amount of information through the attentional filter.
  • Ten thousand years ago, humans plus their pets and livestock accounted for about 0.1% of the terrestrial vertebrate biomass inhabiting the earth; we now account for 98%
  • Humans are, by most biological measures, the most successful species our planet has seen. We have managed to survive in nearly every climate our planet has offered (so far), and the rate of our population expansion exceeds that of any other known organism
  • Our success owes in large part to our cognitive capacity, the ability of our brains to flexibly handle information. But our brains evolved in a much simpler world with far less information coming at us. Today, our attentional filters easily become overwhelmed.
  • Successful people—or those who can afford it—employ layers of other people whose job it is to narrow their own attentional filters.
  • This article is adapted from The Organized Mind: Thinking Straight in the Age of Information Overload by Daniel J. Levitin (Plume/Penguin Random House, 2014).
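Levitin’s bandwidth arithmetic is simple enough to sanity-check. A minimal sketch, taking his figures (120 bits per second of conscious bandwidth, 60 bits per second to follow one speaker) as given estimates rather than measurements:

```python
CONSCIOUS_BANDWIDTH = 120  # bits/second (Csikszentmihalyi / Lucky estimate)
PER_SPEAKER = 60           # bits/second needed to understand one person speaking

# How many simultaneous speakers fit within conscious bandwidth?
max_speakers = CONSCIOUS_BANDWIDTH // PER_SPEAKER
print(max_speakers)  # 2 -- hence "you can barely understand two people talking at once"
```

The same division explains the book’s larger point: any stream competing for attention — a tweet, a status update — draws from the same fixed 120-bit budget.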