
Javier E

The new science of death: 'There's something happening in the brain that makes no sense...

  • Jimo Borjigin, a professor of neurology at the University of Michigan, had been troubled by the question of what happens to us when we die. She had read about the near-death experiences of certain cardiac-arrest survivors who had undergone extraordinary psychic journeys before being resuscitated. Sometimes, these people reported travelling outside of their bodies towards overwhelming sources of light where they were greeted by dead relatives. Others spoke of coming to a new understanding of their lives, or encountering beings of profound goodness
  • Borjigin didn’t believe the content of those stories was true – she didn’t think the souls of dying people actually travelled to an afterworld – but she suspected something very real was happening in those patients’ brains. In her own laboratory, she had discovered that rats undergo a dramatic storm of many neurotransmitters, including serotonin and dopamine, after their hearts stop and their brains lose oxygen. She wondered if humans’ near-death experiences might spring from a similar phenomenon, and if it was occurring even in people who couldn’t be revived
  • when she looked at the scientific literature, she found little enlightenment. “To die is such an essential part of life,” she told me recently. “But we knew almost nothing about the dying brain.” So she decided to go back and figure out what had happened inside the brains of people who died at the University of Michigan neurointensive care unit.
  • ...43 more annotations...
  • Since the 1960s, advances in resuscitation had helped to revive thousands of people who might otherwise have died. Between 10% and 20% of those people brought with them stories of near-death experiences in which they felt their souls or selves departing from their bodies
  • According to several international surveys and studies, one in 10 people claims to have had a near-death experience involving cardiac arrest, or a similar experience in circumstances where they may have come close to death. That’s roughly 800 million souls worldwide who may have dipped a toe in the afterlife.
  • In the 1970s, a small network of cardiologists, psychiatrists, medical sociologists and social psychologists in North America and Europe began investigating whether near-death experiences proved that dying is not the end of being, and that consciousness can exist independently of the brain. The field of near-death studies was born.
  • in 1975, an American medical student named Raymond Moody published a book called Life After Life.
  • Meanwhile, new technologies and techniques were helping doctors revive more and more people who, in earlier periods of history, would have almost certainly been permanently deceased.
  • “We are now at the point where we have both the tools and the means to scientifically answer the age-old question: What happens when we die?” wrote Sam Parnia, an accomplished resuscitation specialist and one of the world’s leading experts on near-death experiences, in 2006. Parnia himself was devising an international study to test whether patients could have conscious awareness even after they were found clinically dead.
  • Borjigin, together with several colleagues, took the first close look at the record of electrical activity in the brain of Patient One after she was taken off life support. What they discovered – in results reported for the first time last year – was almost entirely unexpected, and has the potential to rewrite our understanding of death.
  • “I believe what we found is only the tip of a vast iceberg,” Borjigin told me. “What’s still beneath the surface is a full account of how dying actually takes place. Because there’s something happening in there, in the brain, that makes no sense.”
  • Over the next 30 years, researchers collected thousands of case reports of people who had had near-death experiences
  • Moody was their most important spokesman; he eventually claimed to have had multiple past lives and built a “psychomanteum” in rural Alabama where people could attempt to summon the spirits of the dead by gazing into a dimly lit mirror.
  • near-death studies was already splitting into several schools of belief, whose tensions continue to this day. One influential camp was made up of spiritualists, some of them evangelical Christians, who were convinced that near-death experiences were genuine sojourns in the land of the dead and divine
  • It is no longer unheard of for people to be revived even six hours after being declared clinically dead. In 2011, Japanese doctors reported the case of a young woman who was found in a forest one morning after an overdose stopped her heart the previous night; using advanced technology to circulate blood and oxygen through her body, the doctors were able to revive her more than six hours later, and she was able to walk out of the hospital after three weeks of care
  • The second, and largest, faction of near-death researchers were the parapsychologists, those interested in phenomena that seemed to undermine the scientific orthodoxy that the mind could not exist independently of the brain. These researchers, who were by and large trained scientists following well established research methods, tended to believe that near-death experiences offered evidence that consciousness could persist after the death of the individual
  • Their aim was to find ways to test their theories of consciousness empirically, and to turn near-death studies into a legitimate scientific endeavour.
  • Finally, there emerged the smallest contingent of near-death researchers, who could be labelled the physicalists. These were scientists, many of whom studied the brain, who were committed to a strictly biological account of near-death experiences. Like dreams, the physicalists argued, near-death experiences might reveal psychological truths, but they did so through hallucinatory fictions that emerged from the workings of the body and the brain.
  • Between 1975, when Moody published Life After Life, and 1984, only 17 articles in the PubMed database of scientific publications mentioned near-death experiences. In the following decade, there were 62. In the most recent 10-year span, there were 221.
  • Today, there is a widespread sense throughout the community of near-death researchers that we are on the verge of great discoveries
  • “We really are in a crucial moment where we have to disentangle consciousness from responsiveness, and maybe question every state that we consider unconscious,”
  • “I think in 50 or 100 years time we will have discovered the entity that is consciousness,” he told me. “It will be taken for granted that it wasn’t produced by the brain, and it doesn’t die when you die.”
  • it is in large part because of a revolution in our ability to resuscitate people who have suffered cardiac arrest
  • In his book, Moody distilled the reports of 150 people who had had intense, life-altering experiences in the moments surrounding a cardiac arrest. Although the reports varied, he found that they often shared one or more common features or themes. The narrative arc of the most detailed of those reports – departing the body and travelling through a long tunnel, having an out-of-body experience, encountering spirits and a being of light, one’s whole life flashing before one’s eyes, and returning to the body from some outer limit – became so canonical that the art critic Robert Hughes could refer to it years later as “the familiar kitsch of near-death experience”.
  • Loss of oxygen to the brain and other organs generally follows within seconds or minutes, although the complete cessation of activity in the heart and brain – which is often called “flatlining” or, in the case of the latter, “brain death” – may not occur for many minutes or even hours.
  • That began to change in 1960, when the combination of mouth-to-mouth ventilation, chest compressions and external defibrillation known as cardiopulmonary resuscitation, or CPR, was formalised. Shortly thereafter, a massive campaign was launched to educate clinicians and the public on CPR’s basic techniques, and soon people were being revived in previously unthinkable, if still modest, numbers.
  • scientists learned that, even in its acute final stages, death is not a point, but a process. After cardiac arrest, blood and oxygen stop circulating through the body, cells begin to break down, and normal electrical activity in the brain gets disrupted. But the organs don’t fail irreversibly right away, and the brain doesn’t necessarily cease functioning altogether. There is often still the possibility of a return to life. In some cases, cell death can be stopped or significantly slowed, the heart can be restarted, and brain function can be restored. In other words, the process of death can be reversed.
  • In a medical setting, “clinical death” is said to occur at the moment the heart stops pumping blood, and the pulse stops. This is widely known as cardiac arrest
  • In 2019, a British woman named Audrey Schoeman who was caught in a snowstorm spent six hours in cardiac arrest before doctors brought her back to life with no evident brain damage.
  • That is a key tenet of the parapsychologists’ arguments: if there is consciousness without brain activity, then consciousness must dwell somewhere beyond the brain
  • Some of the parapsychologists speculate that it is a “non-local” force that pervades the universe, like electromagnetism. This force is received by the brain, but is not generated by it, the way a television receives a broadcast.
  • In order for this argument to hold, something else has to be true: near-death experiences have to happen during death, after the brain shuts down
  • To prove this, parapsychologists point to a number of rare but astounding cases known as “veridical” near-death experiences, in which patients seem to report details from the operating room that they might have known only if they had conscious awareness during the time that they were clinically dead.
  • At the very least, Parnia and his colleagues have written, such phenomena are “inexplicable through current neuroscientific models”. Unfortunately for the parapsychologists, however, none of the reports of post-death awareness holds up to strict scientific scrutiny. “There are many claims of this kind, but in my long decades of research into out-of-body and near-death experiences I never met any convincing evidence that this is true,”
  • In other cases, there’s not enough evidence to prove that the experiences reported by cardiac arrest survivors happened when their brains were shut down, as opposed to in the period before or after they supposedly “flatlined”. “So far, there is no sufficiently rigorous, convincing empirical evidence that people can observe their surroundings during a near-death experience,”
  • The parapsychologists tend to push back by arguing that even if each of the cases of veridical near-death experiences leaves room for scientific doubt, surely the accumulation of dozens of these reports must count for something. But that argument can be turned on its head: if there are so many genuine instances of consciousness surviving death, then why should it have so far proven impossible to catch one empirically?
  • The spiritualists and parapsychologists are right to insist that something deeply weird is happening to people when they die, but they are wrong to assume it is happening in the next life rather than this one. At least, that is the implication of what Jimo Borjigin found when she investigated the case of Patient One.
  • Given the levels of activity and connectivity in particular regions of her dying brain, Borjigin believes it’s likely that Patient One had a profound near-death experience with many of its major features: out-of-body sensations, visions of light, feelings of joy or serenity, and moral re-evaluations of one’s life.
  • “As she died, Patient One’s brain was functioning in a kind of hyperdrive,” Borjigin told me. For about two minutes after her oxygen was cut off, there was an intense synchronisation of her brain waves, a state associated with many cognitive functions, including heightened attention and memory. The synchronisation dampened for about 18 seconds, then intensified again for more than four minutes. It faded for a minute, then came back for a third time.
  • In those same periods of dying, different parts of Patient One’s brain were suddenly in close communication with each other. The most intense connections started immediately after her oxygen stopped, and lasted for nearly four minutes. There was another burst of connectivity more than five minutes and 20 seconds after she was taken off life support. In particular, areas of her brain associated with processing conscious experience – areas that are active when we move through the waking world, and when we have vivid dreams – were communicating with those involved in memory formation. So were parts of the brain associated with empathy. Even as she slipped irreversibly…
  • something that looked astonishingly like life was taking place over several minutes in Patient One’s brain.
  • Although a few earlier instances of brain waves had been reported in dying human brains, nothing as detailed and complex as what occurred in Patient One had ever been detected.
  • In the moments after Patient One was taken off oxygen, there was a surge of activity in her dying brain. Areas that had been nearly silent while she was on life support suddenly thrummed with high-frequency electrical signals called gamma waves. In particular, the parts of the brain that scientists consider a “hot zone” for consciousness became dramatically alive. In one section, the signals remained detectable for more than six minutes. In another, they were 11 to 12 times higher than they had been before Patient One’s ventilator was removed.
  • “The brain, contrary to everybody’s belief, is actually super active during cardiac arrest,” Borjigin said. Death may be far more alive than we ever thought possible.
  • “The brain is so resilient, the heart is so resilient, that it takes years of abuse to kill them,” she pointed out. “Why then, without oxygen, can a perfectly healthy person die within 30 minutes, irreversibly?”
  • Evidence is already emerging that even total brain death may someday be reversible. In 2019, scientists at Yale University harvested the brains of pigs that had been decapitated in a commercial slaughterhouse four hours earlier. Then they perfused the brains for six hours with a special cocktail of drugs and synthetic blood. Astoundingly, some of the cells in the brains began to show metabolic activity again, and some of the synapses even began firing.
Javier E

At the Existentialist Café: Freedom, Being, and Apricot Cocktails with Jean-P...

  • The phenomenologists’ leading thinker, Edmund Husserl, provided a rallying cry, ‘To the things themselves!’ It meant: don’t waste time on the interpretations that accrue upon things, and especially don’t waste time wondering whether the things are real. Just look at this that’s presenting itself to you, whatever this may be, and describe it as precisely as possible.
  • You might think you have defined me by some label, but you are wrong, for I am always a work in progress. I create myself constantly through action, and this is so fundamental to my human condition that, for Sartre, it is the human condition, from the moment of first consciousness to the moment when death wipes it out. I am my own freedom: no more, no less.
  • Sartre wrote like a novelist — not surprisingly, since he was one. In his novels, short stories and plays as well as in his philosophical treatises, he wrote about the physical sensations of the world and the structures and moods of human life. Above all, he wrote about one big subject: what it meant to be free. Freedom, for him, lay at the heart of all human experience, and this set humans apart from all other kinds of object.
  • ...97 more annotations...
  • Sartre listened to his problem and said simply, ‘You are free, therefore choose — that is to say, invent.’ No signs are vouchsafed in this world, he said. None of the old authorities can relieve you of the burden of freedom. You can weigh up moral or practical considerations as carefully as you like, but ultimately you must take the plunge and do something, and it’s up to you what that something is.
  • Even if the situation is unbearable — perhaps you are facing execution, or sitting in a Gestapo prison, or about to fall off a cliff — you are still free to decide what to make of it in mind and deed. Starting from where you are now, you choose. And in choosing, you also choose who you will be.
  • The war had made people realise that they and their fellow humans were capable of departing entirely from civilised norms; no wonder the idea of a fixed human nature seemed questionable.
  • If this sounds difficult and unnerving, it’s because it is. Sartre does not deny that the need to keep making decisions brings constant anxiety. He heightens this anxiety by pointing out that what you do really matters. You should make your choices as though you were choosing on behalf of the whole of humanity, taking the entire burden of responsibility for how the human race behaves. If you avoid this responsibility by fooling yourself that you are the victim of circumstance or of someone else’s bad advice, you are failing to meet the demands of human life and choosing a fake existence, cut off from your own ‘authenticity’.
  • Along with the terrifying side of this comes a great promise: Sartre’s existentialism implies that it is possible to be authentic and free, as long as you keep up the effort.
  • almost all agreed that it was, as an article in Les nouvelles littéraires phrased it, a ‘sickening mixture of philosophic pretentiousness, equivocal dreams, physiological technicalities, morbid tastes and hesitant eroticism … an introspective embryo that one would take distinct pleasure in crushing’.
  • he offered a philosophy designed for a species that had just scared the hell out of itself, but that finally felt ready to grow up and take responsibility.
  • In this rebellious world, just as with the Parisian bohemians and Dadaists in earlier generations, everything that was dangerous and provocative was good, and everything that was nice or bourgeois was bad.
  • Such interweaving of ideas and life had a long pedigree, although the existentialists gave it a new twist. Stoic and Epicurean thinkers in the classical world had practised philosophy as a means of living well, rather than of seeking knowledge or wisdom for their own sake. By reflecting on life’s vagaries in philosophical ways, they believed they could become more resilient, more able to rise above circumstances, and better equipped to manage grief, fear, anger, disappointment or anxiety.
  • In the tradition they passed on, philosophy is neither a pure intellectual pursuit nor a collection of cheap self-help tricks, but a discipline for flourishing and living a fully human, responsible life.
  • For Kierkegaard, Descartes had things back to front. In his own view, human existence comes first: it is the starting point for everything we do, not the result of a logical deduction. My existence is active: I live it and choose it, and this precedes any statement I can make about myself.
  • Studying our own moral genealogy cannot help us to escape or transcend ourselves. But it can enable us to see our illusions more clearly and lead a more vital, assertive existence.
  • What was needed, he felt, was not high moral or theological ideals, but a deeply critical form of cultural history or ‘genealogy’ that would uncover the reasons why we humans are as we are, and how we came to be that way. For him, all philosophy could even be redefined as a form of psychology, or history.
  • For those oppressed on grounds of race or class, or for those fighting against colonialism, existentialism offered a change of perspective — literally, as Sartre proposed that all situations be judged according to how they appeared in the eyes of those most oppressed, or those whose suffering was greatest.
  • She observed that we need not expect moral philosophers to ‘live by’ their ideas in a simplistic way, as if they were following a set of rules. But we can expect them to show how their ideas are lived in. We should be able to look in through the windows of a philosophy, as it were, and see how people occupy it, how they move about and how they conduct themselves.
  • the existentialists inhabited their historical and personal world, as they inhabited their ideas. This notion of ‘inhabited philosophy’ is one I’ve borrowed from the English philosopher and novelist Iris Murdoch, who wrote the first full-length book on Sartre and was an early adopter of existentialism
  • What is existentialism anyway?
  • An existentialist who is also phenomenological provides no easy rules for dealing with this condition, but instead concentrates on describing lived experience as it presents itself. — By describing experience well, he or she hopes to understand this existence and awaken us to ways of living more authentic lives.
  • Existentialists concern themselves with individual, concrete human existence. — They consider human existence different from the kind of being other things have. Other entities are what they are, but as a human I am whatever I choose to make of myself at every moment. I am free — — and therefore I’m responsible for everything I do, a dizzying fact which causes — an anxiety inseparable from human existence itself.
  • On the other hand, I am only free within situations, which can include factors in my own biology and psychology as well as physical, historical and social variables of the world into which I have been thrown. — Despite the limitations, I always want more: I am passionately involved in personal projects of all kinds. — Human existence is thus ambiguous: at once boxed in by borders and yet transcendent and exhilarating. —
  • The first part of this is straightforward: a phenomenologist’s job is to describe. This is the activity that Husserl kept reminding his students to do. It meant stripping away distractions, habits, clichés of thought, presumptions and received ideas, in order to return our attention to what he called the ‘things themselves’. We must fix our beady gaze on them and capture them exactly as they appear, rather than as we think they are supposed to be.
  • Husserl therefore says that, to phenomenologically describe a cup of coffee, I should set aside both the abstract suppositions and any intrusive emotional associations. Then I can concentrate on the dark, fragrant, rich phenomenon in front of me now. This ‘setting aside’ or ‘bracketing out’ of speculative add-ons Husserl called epoché — a term borrowed from the ancient Sceptics,
  • The point about rigour is crucial; it brings us back to the first half of the command to describe phenomena. A phenomenologist cannot get away with listening to a piece of music and saying, ‘How lovely!’ He or she must ask: is it plaintive? is it dignified? is it colossal and sublime? The point is to keep coming back to the ‘things themselves’ — phenomena stripped of their conceptual baggage — so as to bail out weak or extraneous material and get to the heart of the experience.
  • Husserlian ‘bracketing out’ or epoché allows the phenomenologist to temporarily ignore the question ‘But is it real?’, in order to ask how a person experiences his or her world. Phenomenology gives a formal mode of access to human experience. It lets philosophers talk about life more or less as non-philosophers do, while still being able to tell themselves they are being methodical and rigorous.
  • Besides claiming to transform the way we think about reality, phenomenologists promised to change how we think about ourselves. They believed that we should not try to find out what the human mind is, as if it were some kind of substance. Instead, we should consider what it does, and how it grasps its experiences.
  • For Brentano, this reaching towards objects is what our minds do all the time. Our thoughts are invariably of or about something, he wrote: in love, something is loved, in hatred, something is hated, in judgement, something is affirmed or denied. Even when I imagine an object that isn’t there, my mental structure is still one of ‘about-ness’ or ‘of-ness’.
  • Except in deepest sleep, my mind is always engaged in this aboutness: it has ‘intentionality’. Having taken the germ of this from Brentano, Husserl made it central to his whole philosophy.
  • Husserl saw in the idea of intentionality a way to sidestep two great unsolved puzzles of philosophical history: the question of what objects ‘really’ are, and the question of what the mind ‘really’ is. By doing the epoché and bracketing out all consideration of reality from both topics, one is freed to concentrate on the relationship in the middle. One can apply one’s descriptive energies to the endless dance of intentionality that takes place in our lives: the whirl of our minds as they seize their intended phenomena one after the other and whisk them around the floor,
  • Understood in this way, the mind hardly is anything at all: it is its aboutness. This makes the human mind (and possibly some animal minds) different from any other naturally occurring entity. Nothing else can be as thoroughly about or of things as the mind is:
  • Some Eastern meditation techniques aim to still this scurrying creature, but the extreme difficulty of this shows how unnatural it is to be mentally inert. Left to itself, the mind reaches out in all directions as long as it is awake — and even carries on doing it in the dreaming phase of its sleep.
  • a mind that is experiencing nothing, imagining nothing, or speculating about nothing can hardly be said to be a mind at all.
  • Three simple ideas — description, phenomenon, intentionality — provided enough inspiration to keep roomfuls of Husserlian assistants busy in Freiburg for decades. With all of human existence awaiting their attention, how could they ever run out of things to do?
  • For Sartre, this gives the mind an immense freedom. If we are nothing but what we think about, then no predefined ‘inner nature’ can hold us back. We are protean.
  • Real, not real; inside, outside; what difference did it make? Reflecting on this, Husserl began turning his phenomenology into a branch of ‘idealism’ — the philosophical tradition which denied external reality and defined everything as a kind of private hallucination.
  • For Sartre, if we try to shut ourselves up inside our own minds, ‘in a nice warm room with the shutters closed’, we cease to exist. We have no cosy home: being out on the dusty road is the very definition of what we are.
  • One might think that, if Heidegger had anything worth saying, he could have communicated it in ordinary language. The fact is that he does not want to be ordinary, and he may not even want to communicate in the usual sense. He wants to make the familiar obscure, and to vex us. George Steiner thought that Heidegger’s purpose was less to be understood than to be experienced through a ‘felt strangeness’.
  • He takes Dasein in its most ordinary moments, then talks about it in the most innovative way he can. For Heidegger, Dasein’s everyday Being is right here: it is Being-in-the-world, or In-der-Welt-sein. The main feature of Dasein’s everyday Being-in-the-world right here is that it is usually busy doing something.
  • Thus, for Heidegger, all Being-in-the-world is also a ‘Being-with’ or Mitsein. We cohabit with others in a ‘with-world’, or Mitwelt. The old philosophical problem of how we prove the existence of other minds has now vanished. Dasein swims in the with-world long before it wonders about other minds.
  • Sometimes the best-educated people were those least inclined to take the Nazis seriously, dismissing them as too absurd to last. Karl Jaspers was one of those who made this mistake, as he later recalled, and Beauvoir observed similar dismissive attitudes among the French students in Berlin.
  • In any case, most of those who disagreed with Hitler’s ideology soon learned to keep their view to themselves. If a Nazi parade passed on the street, they would either slip out of view or give the obligatory salute like everyone else, telling themselves that the gesture meant nothing if they did not believe in it. As the psychologist Bruno Bettelheim later wrote of this period, few people will risk their life for such a small thing as raising an arm — yet that is how one’s powers of resistance are eroded away, and eventually one’s responsibility and integrity go with them.
  • for Arendt, if you do not respond adequately when the times demand it, you show a lack of imagination and attention that is as dangerous as deliberately committing an abuse. It amounts to disobeying the one command she had absorbed from Heidegger in those Marburg days: Think!
  • ‘Everything takes place under a kind of anaesthesia. Objectively dreadful events produce a thin, puny emotional response. Murders are committed like schoolboy pranks. Humiliation and moral decay are accepted like minor incidents.’ Haffner thought modernity itself was partly to blame: people had become yoked to their habits and to mass media, forgetting to stop and think, or to disrupt their routines long enough to question what was going on.
  • Heidegger’s former lover and student Hannah Arendt would argue, in her 1951 study The Origins of Totalitarianism, that totalitarian movements thrived at least partly because of this fragmentation in modern lives, which made people more vulnerable to being swept away by demagogues. Elsewhere, she coined the phrase ‘the banality of evil’ to describe the most extreme failures of personal moral awareness.
  • His communicative ideal fed into a whole theory of history: he traced all civilisation to an ‘Axial Period’ in the fifth century BC, during which philosophy and culture exploded simultaneously in Europe, the Middle East and Asia, as though a great bubble of minds had erupted from the earth’s surface. ‘True philosophy needs communion to come into existence,’ he wrote, and added, ‘Uncommunicativeness in a philosopher is virtually a criterion of the untruth of his thinking.’
  • The idea of being called to authenticity became a major theme in later existentialism, the call being interpreted as saying something like ‘Be yourself!’, as opposed to being phony. For Heidegger, the call is more fundamental than that. It is a call to take up a self that you didn’t know you had: to wake up to your Being. Moreover, it is a call to action. It requires you to do something: to take a decision of some sort.
  • Being and Time contained at least one big idea that should have been of use in resisting totalitarianism. Dasein, Heidegger wrote there, tends to fall under the sway of something called das Man or ‘the they’ — an impersonal entity that robs us of the freedom to think for ourselves. To live authentically requires resisting or outwitting this influence, but this is not easy because das Man is so nebulous. Man in German does not mean ‘man’ as in English (that’s der Mann), but a neutral abstraction, something like ‘one’ in the English phrase ‘one doesn’t do that’,
  • for Heidegger, das Man is me. It is everywhere and nowhere; it is nothing definite, but each of us is it. As with Being, it is so ubiquitous that it is difficult to see. If I am not careful, however, das Man takes over the important decisions that should be my own. It drains away my responsibility or ‘answerability’. As Arendt might put it, we slip into banality, failing to think.
  • Jaspers focused on what he called Grenzsituationen — border situations, or limit situations. These are the moments when one finds oneself constrained or boxed in by what is happening, but at the same time pushed by these events towards the limits or outer edge of normal experience. For example, you might have to make a life-or-death choice, or something might remind you suddenly of your mortality,
  • Jaspers’ interest in border situations probably had much to do with his own early confrontation with mortality. From childhood, he had suffered from a heart condition so severe that he always expected to die at any moment. He also had emphysema, which forced him to speak slowly, taking long pauses to catch his breath. Both illnesses meant that he had to budget his energies with care in order to get his work done without endangering his life.
  • If I am to resist das Man, I must become answerable to the call of my ‘voice of conscience’. This call does not come from God, as a traditional Christian definition of the voice of conscience might suppose. It comes from a truly existentialist source: my own authentic self. Alas, this voice is one I do not recognise and may not hear, because it is not the voice of my habitual ‘they-self’. It is an alien or uncanny version of my usual voice. I am familiar with my they-self, but not with my unalienated voice — so, in a weird twist, my real voice is the one that sounds strangest to me.
  • Marcel developed a strongly theological branch of existentialism. His faith distanced him from both Sartre and Heidegger, but he shared a sense of how history makes demands on individuals. In his essay ‘On the Ontological Mystery’, written in 1932 and published in the fateful year of 1933, Marcel wrote of the human tendency to become stuck in habits, received ideas, and a narrow-minded attachment to possessions and familiar scenes. Instead, he urged his readers to develop a capacity for remaining ‘available’ to situations as they arise. Similar ideas of disponibilité or availability had been explored by other writers,
  • Marcel made it his central existential imperative. He was aware of how rare and difficult it was. Most people fall into what he calls ‘crispation’: a tensed, encrusted shape in life — ‘as though each one of us secreted a kind of shell which gradually hardened and imprisoned him’.
  • Bettelheim later observed that, under Nazism, only a few people realised at once that life could not continue unaltered: these were the ones who got away quickly. Bettelheim himself was not among them. Caught in Austria when Hitler annexed it, he was sent first to Dachau and then to Buchenwald, but was then released in a mass amnesty to celebrate Hitler’s birthday in 1939 — an extraordinary reprieve, after which he left at once for America.
  • we are used to reading philosophy as offering a universal message for all times and places — or at least as aiming to do so. But Heidegger disliked the notion of universal truths or universal humanity, which he considered a fantasy. For him, Dasein is not defined by shared faculties of reason and understanding, as the Enlightenment philosophers thought. Still less is it defined by any kind of transcendent eternal soul, as in religious tradition. We do not exist on a higher, eternal plane at all. Dasein’s Being is local: it has a historical situation, and is constituted in time and place.
  • For Marcel, learning to stay open to reality in this way is the philosopher’s prime job. Everyone can do it, but the philosopher is the one who is called on above all to stay awake, so as to be the first to sound the alarm if something seems wrong.
  • Second, it also means understanding that we are historical beings, and grasping the demands our particular historical situation is making on us. In what Heidegger calls ‘anticipatory resoluteness’, Dasein discovers ‘that its uttermost possibility lies in giving itself up’. At that moment, through Being-towards-death and resoluteness in facing up to one’s time, one is freed from the they-self and attains one’s true, authentic self.
  • If we are temporal beings by our very nature, then authentic existence means accepting, first, that we are finite and mortal. We will die: this all-important realisation is what Heidegger calls authentic ‘Being-towards-Death’, and it is fundamental to his philosophy.
  • Hannah Arendt, instead, left early on: she had the benefit of a powerful warning. Just after the Nazi takeover, in spring 1933, she had been arrested while researching materials on anti-Semitism for the German Zionist Organisation at Berlin’s Prussian State Library. Her apartment was searched; both she and her mother were locked up briefly, then released. They fled, without stopping to arrange travel documents. They crossed to Czechoslovakia (then still safe) by a method that sounds almost too fabulous to be true: a sympathetic German family on the border had a house with its front door in Germany and its back door in Czechoslovakia. The family would invite people for dinner, then let them leave through the back door at night.
  • As Sartre argued in his 1943 review of The Stranger, basic phenomenological principles show that experience comes to us already charged with significance. A piano sonata is a melancholy evocation of longing. If I watch a soccer match, I see it as a soccer match, not as a meaningless scene in which a number of people run around taking turns to apply their lower limbs to a spherical object. If the latter is what I’m seeing, then I am not watching some more essential, truer version of soccer; I am failing to watch it properly as soccer at all.
  • Much as they liked Camus personally, neither Sartre nor Beauvoir accepted his vision of absurdity. For them, life is not absurd, even when viewed on a cosmic scale, and nothing can be gained by saying it is. Life for them is full of real meaning, although that meaning emerges differently for each of us.
  • For Sartre, we show bad faith whenever we portray ourselves as passive creations of our race, class, job, history, nation, family, heredity, childhood influences, events, or even hidden drives in our subconscious which we claim are out of our control. It is not that such factors are unimportant: class and race, in particular, he acknowledged as powerful forces in people’s lives, and Simone de Beauvoir would soon add gender to that list.
  • Sartre takes his argument to an extreme point by asserting that even war, imprisonment or the prospect of imminent death cannot take away my existential freedom. They form part of my ‘situation’, and this may be an extreme and intolerable situation, but it still provides only a context for whatever I choose to do next. If I am about to die, I can decide how to face that death. Sartre here resurrects the ancient Stoic idea that I may not choose what happens to me, but I can choose what to make of it, spiritually speaking.
  • But the Stoics cultivated indifference in the face of terrible events, whereas Sartre thought we should remain passionately, even furiously engaged with what happens to us and with what we can achieve. We should not expect freedom to be anything less than fiendishly difficult.
  • Freedom does not mean entirely unconstrained movement, and it certainly does not mean acting randomly. We often mistake the very things that enable us to be free — context, meaning, facticity, situation, a general direction in our lives — for things that define us and take away our freedom. It is only with all of these that we can be free in a real sense.
  • Nor did he mean that privileged groups have the right to pontificate to the poor and downtrodden about the need to ‘take responsibility’ for themselves. That would be a grotesque misreading of Sartre’s point, since his sympathy in any encounter always lay with the more oppressed side. But for each of us — for me — to be in good faith means not making excuses for myself.
  • Camus’ novel gives us a deliberately understated vision of heroism and decisive action compared to those of Sartre and Beauvoir. One can only do so much. It can look like defeatism, but it shows a more realistic perception of what it takes to actually accomplish difficult tasks like liberating one’s country.
  • Camus just kept returning to his core principle: no torture, no killing — at least not with state approval. Beauvoir and Sartre believed they were taking a more subtle and more realistic view. If asked why a couple of innocuous philosophers had suddenly become so harsh, they would have said it was because the war had changed them in profound ways. It had shown them that one’s duties to humanity could be more complicated than they seemed. ‘The war really divided my life in two,’ Sartre said later.
  • Poets and artists ‘let things be’, but they also let things come out and show themselves. They help to ease things into ‘unconcealment’ (Unverborgenheit), which is Heidegger’s rendition of the Greek term alētheia, usually translated as ‘truth’. This is a deeper kind of truth than the mere correspondence of a statement to reality, as when we say ‘The cat is on the mat’ and point to a mat with a cat on it. Long before we can do this, both cat and mat must ‘stand forth out of concealedness’. They must un-hide themselves.
  • Heidegger does not use the word ‘consciousness’ here because — as with his earlier work — he is trying to make us think in a radically different way about ourselves. We are not to think of the mind as an empty cavern, or as a container filled with representations of things. We are not even supposed to think of it as firing off arrows of intentional ‘aboutness’, as in the earlier phenomenology of Brentano. Instead, Heidegger draws us into the depths of his Schwarzwald, and asks us to imagine a gap with sunlight filtering in. We remain in the forest, but we provide a relatively open spot where other beings can bask for a moment. If we did not do this, everything would remain in the thickets, hidden even to itself.
  • The astronomer Carl Sagan began his 1980 television series Cosmos by saying that human beings, though made of the same stuff as the stars, are conscious and are therefore ‘a way for the cosmos to know itself’. Merleau-Ponty similarly quoted his favourite painter Cézanne as saying, ‘The landscape thinks itself in me, and I am its consciousness.’ This is something like what Heidegger thinks humanity contributes to the earth. We are not made of spiritual nothingness; we are part of Being, but we also bring something unique with us. It is not much: a little open space, perhaps with a path and a bench like the one the young Heidegger used to sit on to do his homework. But through us, the miracle occurs.
  • Beauty aside, Heidegger’s late writing can also be troubling, with its increasingly mystical notion of what it is to be human. If one speaks of a human being mainly as an open space or a clearing, or a means of ‘letting beings be’ and dwelling poetically on the earth, then one doesn’t seem to be talking about any recognisable person. The old Dasein has become less human than ever. It is now a forestry feature.
  • Even today, Jaspers, the dedicated communicator, is far less widely read than Heidegger, who has influenced architects, social theorists, critics, psychologists, artists, film-makers, environmental activists, and innumerable students and enthusiasts — including the later deconstructionist and post-structuralist schools, which took their starting point from his late thinking. Having spent the late 1940s as an outsider and then been rehabilitated, Heidegger became the overwhelming presence in university philosophy all over the European continent from then on.
  • As Levinas reflected on this experience, it helped to lead him to a philosophy that was essentially ethical, rather than ontological like Heidegger’s. He developed his ideas from the work of Jewish theologian Martin Buber, whose I and Thou in 1923 had distinguished between my relationship with an impersonal ‘it’ or ‘them’, and the direct personal encounter I have with a ‘you’. Levinas took it further: when I encounter you, we normally meet face-to-face, and it is through your face that you, as another person, can make ethical demands on me. This is very different from Heidegger’s Mitsein or Being-with, which suggests a group of people standing alongside one another, shoulder to shoulder as if in solidarity — perhaps as a unified nation or Volk.
  • For Levinas, we literally face each other, one individual at a time, and that relationship becomes one of communication and moral expectation. We do not merge; we respond to one another. Instead of being co-opted into playing some role in my personal drama of authenticity, you look me in the eyes — and you remain Other. You remain you.
  • This relationship is more fundamental than the self, more fundamental than consciousness, more fundamental even than Being — and it brings an unavoidable ethical obligation. Ever since Husserl, phenomenologists and existentialists had been trying to stretch the definition of existence to incorporate our social lives and relationships. Levinas did more: he turned philosophy around entirely so that these relationships were the foundation of our existence, not an extension of it.
  • Her last work, The Need for Roots, argues, among other things, that none of us has rights, but each one of us has a near-infinite degree of duty and obligation to the other. Whatever the underlying cause of her death — and anorexia nervosa seems to have been involved — no one could deny that she lived out her philosophy with total commitment. Of all the lives touched on in this book, hers is surely the most profound and challenging application of Iris Murdoch’s notion that a philosophy can be ‘inhabited’.
  • Other thinkers took radical ethical turns during the war years. The most extreme was Simone Weil, who actually tried to live by the principle of putting other people’s ethical demands first. Having returned to France after her travels through Germany in 1932, she had worked in a factory so as to experience the degrading nature of such work for herself. When France fell in 1940, her family fled to Marseilles (against her protests), and later to the US and to Britain. Even in exile, Weil made extraordinary sacrifices. If there were people in the world who could not sleep in a bed, she would not do so either, so she slept on the floor.
  • The mystery tradition had roots in Kierkegaard’s ‘leap of faith’. It owed much to the other great nineteenth-century mystic of the impossible, Dostoevsky, and to older theological notions. But it also grew from the protracted trauma that was the first half of the twentieth century. Since 1914, and especially since 1939, people in Europe and elsewhere had come to the realisation that we cannot fully know or trust ourselves; that we have no excuses or explanations for what we do — and yet that we must ground our existence and relationships on something firm, because otherwise we cannot survive.
  • One striking link between these radical ethical thinkers, all on the fringes of our main story, is that they had religious faith. They also granted a special role to the notion of ‘mystery’ — that which cannot be known, calculated or understood, especially when it concerns our relationships with each other. Heidegger was different from them, since he rejected the religion he grew up with and had no real interest in ethics — probably as a consequence of his having no real interest in the human.
  • Meanwhile, the Christian existentialist Gabriel Marcel was also still arguing, as he had since the 1930s, that ethics trumps everything else in philosophy and that our duty to each other is so great as to play the role of a transcendent ‘mystery’. He too had been led to this position partly by a wartime experience: during the First World War he had worked for the Red Cross’ Information Service, with the unenviable job of answering relatives’ inquiries about missing soldiers. Whenever news came, he passed it on, and usually it was not good. As Marcel later said, this task permanently inoculated him against warmongering rhetoric of any kind, and it made him aware of the power of what is unknown in our lives.
  • As the play’s much-quoted and frequently misunderstood final line has it: ‘Hell is other people.’ Sartre later explained that he did not mean to say that other people were hellish in general. He meant that after death we become frozen in their view, unable any longer to fend off their interpretation. In life, we can still do something to manage the impression we make; in death, this freedom goes and we are left entombed in other people’s memories and perceptions.
  • We have to do two near-impossible things at once: understand ourselves as limited by circumstances, and yet continue to pursue our projects as though we are truly in control. In Beauvoir’s view, existentialism is the philosophy that best enables us to do this, because it concerns itself so deeply with both freedom and contingency. It acknowledges the radical and terrifying scope of our freedom in life, but also the concrete influences that other philosophies tend to ignore: history, the body, social relationships and the environment.
  • The aspects of our existence that limit us, Merleau-Ponty says, are the very same ones that bind us to the world and give us scope for action and perception. They make us what we are. Sartre acknowledged the need for this trade-off, but he found it more painful to accept. Everything in him longed to be free of bonds, of impediments and limitations
  • Of course we have to learn this skill of interpreting and anticipating the world, and this happens in early childhood, which is why Merleau-Ponty thought child psychology was essential to philosophy. This is an extraordinary insight. Apart from Rousseau, very few philosophers before him had taken childhood seriously; most wrote as though all human experience were that of a fully conscious, rational, verbal adult who has been dropped into this world from the sky — perhaps by a stork.
  • For Merleau-Ponty, we cannot understand our experience if we don’t think of ourselves in part as overgrown babies. We fall for optical illusions because we once learned to see the world in terms of shapes, objects and things relevant to our own interests. Our first perceptions came to us in tandem with our first active experiments in observing the world and reaching out to explore it, and are still linked with those experiences.
  • Another factor in all of this, for Merleau-Ponty, is our social existence: we cannot thrive without others, or not for long, and we need this especially in early life. This makes solipsistic speculation about the reality of others ridiculous; we could never engage in such speculation if we hadn’t already been formed by them.
  • As Descartes could have said (but didn’t), ‘I think, therefore other people exist.’ We grow up with people playing with us, pointing things out, talking, listening, and getting us used to reading emotions and movements; this is how we become capable, reflective, smoothly integrated beings.
  • In general, Merleau-Ponty thinks human experience only makes sense if we abandon philosophy’s time-honoured habit of starting with a solitary, capsule-like, immobile adult self, isolated from its body and world, which must then be connected up again — adding each element around it as though adding clothing to a doll. Instead, for him, we slide from the womb to the birth canal to an equally close and total immersion in the world. That immersion continues as long as we live, although we may also cultivate the art of partially withdrawing from time to time when we want to think or daydream.
  • When he looks for his own metaphor to describe how he sees consciousness, he comes up with a beautiful one: consciousness, he suggests, is like a ‘fold’ in the world, as though someone had crumpled a piece of cloth to make a little nest or hollow. It stays for a while, before eventually being unfolded and smoothed away. There is something seductive, even erotic, in this idea of my conscious self as an improvised pouch in the cloth of the world. I still have my privacy — my withdrawing room. But I am part of the world’s fabric, and I remain formed out of it for as long as I am here.
  • By the time of these works, Merleau-Ponty is taking his desire to describe experience to the outer limits of what language can convey. Just as with the late Husserl or Heidegger, or Sartre in his Flaubert book, we see a philosopher venturing so far from shore that we can barely follow. Emmanuel Levinas would head out to the fringes too, eventually becoming incomprehensible to all but his most patient initiates.
  • Sartre once remarked — speaking of a disagreement they had about Husserl in 1941 — that ‘we discovered, astounded, that our conflicts had, at times, stemmed from our childhood, or went back to the elementary differences of our two organisms’. Merleau-Ponty also said in an interview that Sartre’s work seemed strange to him, not because of philosophical differences, but because of a certain ‘register of feeling’, especially in Nausea, that he could not share. Their difference was one of temperament and of the whole way the world presented itself to them.
  • The two also differed in their purpose. When Sartre writes about the body or other aspects of experience, he generally does it in order to make a different point. He expertly evokes the grace of his café waiter, gliding between the tables, bending at an angle just so, steering the drink-laden tray through the air on the tips of his fingers — but he does it all in order to illustrate his ideas about bad faith. When Merleau-Ponty writes about skilled and graceful movement, the movement itself is his point. This is the thing he wants to understand.
  • We can never move definitively from ignorance to certainty, for the thread of the inquiry will constantly lead us back to ignorance again. This is the most attractive description of philosophy I’ve ever read, and the best argument for why it is worth doing, even (or especially) when it takes us no distance at all from our starting point.
  • By prioritising perception, the body, social life and childhood development, Merleau-Ponty gathered up philosophy’s far-flung outsider subjects and brought them in to occupy the centre of his thought.
  • In his inaugural lecture at the Collège de France on 15 January 1953, published as In Praise of Philosophy, he said that philosophers should concern themselves above all with whatever is ambiguous in our experience. At the same time, they should think clearly about these ambiguities, using reason and science. Thus, he said, ‘The philosopher is marked by the distinguishing trait that he possesses inseparably the taste for evidence and the feeling for ambiguity.’ A constant movement is required between these two
  • As Sartre wrote in response to Hiroshima, humanity had now gained the power to wipe itself out, and must decide every single day that it wanted to live. Camus also wrote that humanity faced the task of choosing between collective suicide and a more intelligent use of its technology — ‘between hell and reason’. After 1945, there seemed little reason to trust in humanity’s ability to choose well.
  • Merleau-Ponty observed in a lecture of 1951 that, more than any previous century, the twentieth century had reminded people how ‘contingent’ their lives were — how at the mercy of historical events and other changes that they could not control. This feeling went on long after the war ended. After the A-bombs were dropped on Hiroshima and Nagasaki, many feared that a Third World War would not be long in coming, this time between the Soviet Union and the United States.
Javier E

Psych, Lies, and Audiotape: The Tarnished Legacy of the Milgram Shock Experiments - 2 views

  • subjects — 780 New Haven residents who volunteered — helped make an untenured assistant professor named Stanley Milgram a national celebrity. Over the next five decades, his obedience experiments provided the inspiration for films, fiction, plays, documentaries, pop music, prime-time dramas, and reality television. Today, the Milgram experiments are considered among the most famous and most controversial experiments of all time. They are also often used in expert testimony in cases where situational obedience leads to crime
  • Perry’s evidence raises larger questions regarding a study that is still firmly entrenched in American scientific and popular culture: if Milgram lied once about his compromised neutrality, to what extent can we trust anything he said? And how could a blatant breach in objectivity in one of the most analyzed experiments in history go undetected for so long?
  • the debate has never addressed this question: to what extent can we trust his raw data in the first place? In her riveting new book, Behind the Shock Machine: The Untold Story of the Notorious Milgram Psychology Experiments, Australian psychologist Gina Perry tackles this very topic, taking nothing for granted
  • Her chilling investigation of the experiments and their aftereffects suggests that Milgram manipulated results, misled the public, and flat out lied in order to deflect criticism and further the thesis for which he would become famous
  • She contends that serious factual inaccuracies cloud our understanding of Milgram’s work, inaccuracies which she believes arose “partly because of Milgram’s presentation of his findings — his downplaying of contradictions and inconsistencies — and partly because it was the heart-attack variation that was embraced by the popular media
  • Perry reveals that Milgram massaged the facts in order to deliver the outcome he sought. When Milgram presented his finding — namely, high levels of obedience — both in early papers and in his 1974 book, Obedience to Authority, he stated that if the subject refused the lab coat’s commands more than four times, the subject would be classified as disobedient. But Perry finds that this isn’t what really happened. The further Milgram got in his research, the more he pushed participants to obey.
  • Milgram’s studies — which suggest that nearly two-thirds of subjects will, under certain conditions, administer dangerously powerful electrical shocks to a stranger when commanded to do so by an authority figure — have become a staple of psychology departments around the world. They have even helped shape the rules that govern experiments on human subjects. Along with Zimbardo’s 1971 Stanford prison experiment, which showed that college students assigned the role of “prison guard” quickly started abusing college students assigned the role of “prisoner,” Milgram’s experiments are the starting point for any meaningful discussion of the “I was only following orders” defense, and for determining how the relationship between situational factors and obedience can lead seemingly good people to do horrible things.
  • If the Milgram of Obedience to Authority were the narrator in a novel, I wouldn’t have found him terribly reliable. So why had I believed such a narrator in a work of nonfiction?
  • The answer, I found, was disturbingly simple: I trust scientists
  • I do trust them not to lie about the rules or results of their experiments. And if a scientist does lie, especially in such a famous experiment, I trust that another scientist will quickly uncover the deception. Or at least I used to.
  • At the time, Milgram was 27, fresh out of grad school and needing to make a name for himself in a hyper-competitive department, and Perry suggests that his “career depended on [the subjects’] obedience; all his preparations were aimed at making them obey.”
  • only after criticism of his ethics surfaced, and long after the completion of the studies, did Milgram claim that “a careful post-experimental treatment was administered to all subjects,” in which “at the very least all subjects were told that the victim had not received dangerous electric shocks.” This was, quite simply, a lie. Milgram didn’t want word to spread through New Haven that he was duping his subjects, which could taint the results of his future trials.
  • While Milgram’s defenders point to subsequent recreations of his experiments that have replicated his findings, the unethical nature, not to mention the scope and cost, of the original version has not allowed for full duplications.
kushnerha

Consciousness Isn't a Mystery. It's Matter. - The New York Times - 3 views

  • Every day, it seems, some verifiably intelligent person tells us that we don’t know what consciousness is. The nature of consciousness, they say, is an awesome mystery. It’s the ultimate hard problem. The current Wikipedia entry is typical: Consciousness “is the most mysterious aspect of our lives”; philosophers “have struggled to comprehend the nature of consciousness.”
  • I find this odd because we know exactly what consciousness is — where by “consciousness” I mean what most people mean in this debate: experience of any kind whatever. It’s the most familiar thing there is, whether it’s experience of emotion, pain, understanding what someone is saying, seeing, hearing, touching, tasting or feeling. It is in fact the only thing in the universe whose ultimate intrinsic nature we can claim to know. It is utterly unmysterious.
  • The nature of physical stuff, by contrast, is deeply mysterious, and physics grows stranger by the hour. (Richard Feynman’s remark about quantum theory — “I think I can safely say that nobody understands quantum mechanics” — seems as true as ever.) Or rather, more carefully: The nature of physical stuff is mysterious except insofar as consciousness is itself a form of physical stuff.
  • “We know nothing about the intrinsic quality of physical events,” he wrote, “except when these are mental events that we directly experience.”
  • I think Russell is right: Human conscious experience is wholly a matter of physical goings-on in the body and in particular the brain. But why does he say that we know nothing about the intrinsic quality of physical events except when these are mental events we directly experience? Isn’t he exaggerating? I don’t think so
  • I need to try to reply to those (they’re probably philosophers) who doubt that we really know what conscious experience is.The reply is simple. We know what conscious experience is because the having is the knowing: Having conscious experience is knowing what it is. You don’t have to think about it (it’s really much better not to). You just have to have it. It’s true that people can make all sorts of mistakes about what is going on when they have experience, but none of them threaten the fundamental sense in which we know exactly what experience is just in having it.
  • If someone continues to ask what it is, one good reply (although Wittgenstein disapproved of it) is “you know what it is like from your own case.” Ned Block replies by adapting the response Louis Armstrong reportedly gave to someone who asked him what jazz was: “If you gotta ask, you ain’t never going to know.”
  • So we all know what consciousness is. Once we’re clear on this we can try to go further, for consciousness does of course raise a hard problem. The problem arises from the fact that we accept that consciousness is wholly a matter of physical goings-on, but can’t see how this can be so. We examine the brain in ever greater detail, using increasingly powerful techniques like fMRI, and we observe extraordinarily complex neuroelectrochemical goings-on, but we can’t even begin to understand how these goings-on can be (or give rise to) conscious experiences.
  • 1966 movie “Fantastic Voyage,” or imagine the ultimate brain scanner. Leibniz continued, “Suppose we do: visiting its insides, we will never find anything but parts pushing each other — never anything that could explain a conscious state.”
  • His mistake is to go further, and conclude that physical goings-on can’t possibly be conscious goings-on. Many make the same mistake today — the Very Large Mistake (as Winnie-the-Pooh might put it) of thinking that we know enough about the nature of physical stuff to know that conscious experience can’t be physical. We don’t. We don’t know the intrinsic nature of physical stuff, except — Russell again — insofar as we know it simply through having a conscious experience.
  • We find this idea extremely difficult because we’re so very deeply committed to the belief that we know more about the physical than we do, and (in particular) know enough to know that consciousness can’t be physical. We don’t see that the hard problem is not what consciousness is, it’s what matter is — what the physical is.
  • This point about the limits on what physics can tell us is rock solid, and it arises before we begin to consider any of the deep problems of understanding that arise within physics — problems with “dark matter” or “dark energy,” for example — or with reconciling quantum mechanics and general relativity theory.
  • Those who make the Very Large Mistake (of thinking they know enough about the nature of the physical to know that consciousness can’t be physical) tend to split into two groups. Members of the first group remain unshaken in their belief that consciousness exists, and conclude that there must be some sort of nonphysical stuff: They tend to become “dualists.” Members of the second group, passionately committed to the idea that everything is physical, make the most extraordinary move that has ever been made in the history of human thought. They deny the existence of consciousness: They become “eliminativists.”
  • no one has to react in either of these ways. All they have to do is grasp the fundamental respect in which we don’t know the intrinsic nature of physical stuff in spite of all that physics tells us. In particular, we don’t know anything about the physical that gives us good reason to think that consciousness can’t be wholly physical. It’s worth adding that one can fully accept this even if one is unwilling to agree with Russell that in having conscious experience we thereby know something about the intrinsic nature of physical reality.
  • It’s not the physics picture of matter that’s the problem; it’s the ordinary everyday picture of matter. It’s ironic that the people who are most likely to doubt or deny the existence of consciousness (on the ground that everything is physical, and that consciousness can’t possibly be physical) are also those who are most insistent on the primacy of science, because it is precisely science that makes the key point shine most brightly: the point that there is a fundamental respect in which the ultimate intrinsic nature of the stuff of the universe is unknown to us — except insofar as it is consciousness.
Javier E

What's Wrong With the Teenage Mind? - WSJ.com - 1 views

  • What happens when children reach puberty earlier and adulthood later? The answer is: a good deal of teenage weirdness. Fortunately, developmental psychologists and neuroscientists are starting to explain the foundations of that weirdness.
  • The crucial new idea is that there are two different neural and psychological systems that interact to turn children into adults. Over the past two centuries, and even more over the past generation, the developmental timing of these two systems has changed. That, in turn, has profoundly changed adolescence and produced new kinds of adolescent woe. The big question for anyone who deals with young people today is how we can go about bringing these cogs of the teenage mind into sync once again
  • The first of these systems has to do with emotion and motivation. It is very closely linked to the biological and chemical changes of puberty and involves the areas of the brain that respond to rewards. This is the system that turns placid 10-year-olds into restless, exuberant, emotionally intense teenagers, desperate to attain every goal, fulfill every desire and experience every sensation. Later, it turns them back into relatively placid adults.
  • ...23 more annotations...
  • adolescents aren't reckless because they underestimate risks, but because they overestimate rewards—or, rather, find rewards more rewarding than adults do. The reward centers of the adolescent brain are much more active than those of either children or adults.
  • What teenagers want most of all are social rewards, especially the respect of their peers
  • Becoming an adult means leaving the world of your parents and starting to make your way toward the future that you will share with your peers. Puberty not only turns on the motivational and emotional system with new force, it also turns it away from the family and toward the world of equals.
  • The second crucial system in our brains has to do with control; it channels and harnesses all that seething energy. In particular, the prefrontal cortex reaches out to guide other parts of the brain, including the parts that govern motivation and emotion. This is the system that inhibits impulses and guides decision-making, that encourages long-term planning and delays gratification.
  • Today's adolescents develop an accelerator a long time before they can steer and brake.
  • Expertise comes with experience.
  • In gatherer-hunter and farming societies, childhood education involves formal and informal apprenticeship. Children have lots of chances to practice the skills that they need to accomplish their goals as adults, and so to become expert planners and actors.
  • In the past, to become a good gatherer or hunter, cook or caregiver, you would actually practice gathering, hunting, cooking and taking care of children all through middle childhood and early adolescence—tuning up just the prefrontal wiring you'd need as an adult. But you'd do all that under expert adult supervision and in the protected world of childhood
  • In contemporary life, the relationship between these two systems has changed dramatically. Puberty arrives earlier, and the motivational system kicks in earlier too. At the same time, contemporary children have very little experience with the kinds of tasks that they'll have to perform as grown-ups.
  • The experience of trying to achieve a real goal in real time in the real world is increasingly delayed, and the growth of the control system depends on just those experiences.
  • This control system depends much more on learning. It becomes increasingly effective throughout childhood and continues to develop during adolescence and adulthood, as we gain more experience.
  • An ever longer protected period of immaturity and dependence—a childhood that extends through college—means that young humans can learn more than ever before. There is strong evidence that IQ has increased dramatically as more children spend more time in school
  • children know more about more different subjects than they ever did in the days of apprenticeships.
  • Wide-ranging, flexible and broad learning, the kind we encourage in high-school and college, may actually be in tension with the ability to develop finely-honed, controlled, focused expertise in a particular skill, the kind of learning that once routinely took place in human societies.
  • this new explanation based on developmental timing elegantly accounts for the paradoxes of our particular crop of adolescents.
  • First, experience shapes the brain.
  • the brain is so powerful precisely because it is so sensitive to experience. It's as true to say that our experience of controlling our impulses makes the prefrontal cortex develop as it is to say that prefrontal development makes us better at controlling our impulses
  • Second, development plays a crucial role in explaining human nature
  • there is more and more evidence that genes are just the first step in complex developmental sequences, cascades of interactions between organism and environment, and that those developmental processes shape the adult brain. Even small changes in developmental timing can lead to big changes in who we become.
  • Brain research is often taken to mean that adolescents are really just defective adults—grown-ups with a missing part.
  • But the new view of the adolescent brain isn't that the prefrontal lobes just fail to show up; it's that they aren't properly instructed and exercised
  • Instead of simply giving adolescents more and more school experiences—those extra hours of after-school classes and homework—we could try to arrange more opportunities for apprenticeship
  • Summer enrichment activities like camp and travel, now so common for children whose parents have means, might be usefully alternated with summer jobs, with real responsibilities.
  • The two brain systems, the increasing gap between them, and the implications for adolescent education.
Javier E

Enlightenment's Evil Twin - The Atlantic - 0 views

  • The first time I can remember feeling like I didn’t exist, I was 15. I was sitting on a train and all of a sudden I felt like I’d been dropped into someone else’s body. My memories, experiences, and feelings—the things that make up my intrinsic sense of “me-ness”—projected across my mind like phantasmagoria, but I felt like they belonged to someone else. Like I was experiencing life in the third person.
  • It’s characterized by a pervasive and disturbing sense of unreality in both the experience of self (called “depersonalization”) and one’s surroundings (known as “derealization”); accounts of it are surreal, obscure, shrouded in terms like “unreality” and “dream,” but they’re often conveyed with an almost incongruous lucidity.
  • It’s not a psychotic condition; the sufferers are aware that what they’re perceiving is unusual. “We call it an ‘as if’ disorder. People say they feel as if they’re in a movie, as if they’re a robot,” Medford says.
  • ...13 more annotations...
  • Studies carried out with college students have found that brief episodes are common in young people, with a prevalence ranging from 30 to 70 percent. It can happen when you’re jet-lagged, hungover, or stressed. But for roughly 1 to 2 percent of the population, it becomes persistent, and distressing
  • Research suggests that areas of the brain that are key to emotional and physical sensations, such as the amygdala and the insula, appear to be less responsive in chronic depersonalization sufferers. You might become less empathetic; your pain threshold might increase. These numbing effects mean that it’s commonly conceived as a defense mechanism; Hunter calls it a “psychological trip switch” which can be triggered in times of stress.
  • Have you ever played that game when you repeat a word over and over again until it loses all meaning? It’s called semantic satiation. Like words, can a sense of self be broken down into arbitrary, socially-constructed components?
  • That question may be why the phenomenon has attracted a lot of interest from philosophers. In a sense, the experience presupposes certain notions of how the self is meant to feel. We think of a self as an essential thing—a soul or an ego that everyone has and is aware of—but scientists and philosophers have been telling us for a while now that the self isn’t quite as it seems
  • there is no center in the brain where the self is generated. “What we experience is a powerful depiction generated by our brains for our benefit,” he writes. Brains make sense of data that would otherwise be overwhelming. “Experiences are fragmented episodes unless they are woven together in a meaningful narrative,” he writes, with the self being the story that “pulls it all together.”
  • “The unity [of self that] we experience, which allows us legitimately to talk of ‘I,’ is a result of the Ego Trick—the remarkable way in which a complicated bundle of mental events, made possible by the brain, creates a singular self, without there being a singular thing underlying it,”
  • depersonalization is both a burden, a horrible burden—but it’s in some strange way a blessing, to reach some depths, some meaning which somehow comes only in the broken mirror,” Bezzubova says. “It’s a Dostoyevsky style illumination—where clarity cannot be distinguished from pain.”
  • for her, the experience is pleasant. “It’s helped me in my life,” she says. Over the past few years, she has learned to interpret her experiences in a Buddhist context, and she describes depersonalization as a “deconditioning” of sorts: “The significance I place on the world is all in my mind,”
  • “I believe I am on the path to enlightenment,” she says.
  • The crossover between dark mental states and Buddhist practices is being investigated
  • Mindfulness has become increasingly popular in the West over the past few years, but as Britton told The Atlantic, the practice in its original form isn’t just about relaxation: It’s about the often painstaking process of coming to terms with three specific insights of the Theravadin Buddhist tradition, which are anicca, or impermanence; dukkha, or dissatisfaction; and anatta, or not-self.
  • depersonalization must cause the patient distress and have an impact on her daily functioning for it to be classified as clinically significant. In this sense, it seems inappropriate to call Alice’s experiences pathological. “We have ways of measuring disorders, but you have to ask if it’s meaningful. It’s an open question,”
  • “I think calling it a loss of self is maybe a convenient shorthand for something that’s hard to capture,” he says. “I prefer to talk about experience—because that’s what’s important in psychiatry.”
Javier E

Opinion | What College Students Need Is a Taste of the Monk's Life - The New York Times - 0 views

  • When she registered last fall for the seminar known around campus as the monk class, she wasn’t sure what to expect.
  • “You give up technology, and you can’t talk for a month,” Ms. Rodriguez told me. “That’s all I’d heard. I didn’t know why.” What she found was a course that challenges students to rethink the purpose of education, especially at a time when machine learning is getting way more press than the human kind.
  • Each week, students would read about a different monastic tradition and adopt some of its practices. Later in the semester, they would observe a one-month vow of silence (except for discussions during Living Deliberately) and fast from technology, handing over their phones to him.
  • ...50 more annotations...
  • Yes, he knew they had other classes, jobs and extracurriculars; they could make arrangements to do that work silently and without a computer.
  • The class eased into the vow of silence, first restricting speech to 100 words a day. Other rules began on Day 1: no jewelry or makeup in class. Men and women sat separately and wore different “habits”: white shirts for the men, black for the women. (Nonbinary and transgender students sat with the gender of their choice.)
  • Dr. McDaniel discouraged them from sharing personal information; they should get to know one another only through ideas. “He gave us new names, based on our birth time and day, using a Thai birth chart,”
  • “We were practicing living a monastic life. We had to wake up at 5 a.m. and journal every 30 minutes.”
  • If you tried to cruise to a C, you missed the point: “I realized the only way for me to get the most out of this class was to experience it all,” she said. (She got Dr. McDaniel’s permission to break her vow of silence in order to talk to patients during her clinical rotation.)
  • Dr. McDaniel also teaches a course called Existential Despair. Students meet once a week from 5 p.m. to midnight in a building with comfy couches, turn over their phones and curl up to read an assigned novel (cover to cover) in one sitting — books like James Baldwin’s “Giovanni’s Room” and José Saramago’s “Blindness.” Then they stay up late discussing it.
  • “The course is not about hope, overcoming things, heroic stories,” Dr. McDaniel said. Many of the books “start sad. In the middle they’re sad. They stay sad. I’m not concerned with their 20-year-old self. I’m worried about them at my age, dealing with breast cancer, their dad dying, their child being an addict, a career that never worked out — so when they’re dealing with the bigger things in life, they know they’re not alone.”
  • Both courses have long wait lists. Students are hungry for a low-tech, introspective experience —
  • Research suggests that underprivileged young people have far fewer opportunities to think for unbroken stretches of time, so they may need even more space in college to develop what social scientists call cognitive endurance.
  • Yet the most visible higher ed trends are moving in the other direction
  • Rather than ban phones and laptops from class, some professors are brainstorming ways to embrace students’ tech addictions with class Facebook and Instagram accounts, audience response apps — and perhaps even including the friends and relatives whom students text during class as virtual participants in class discussion.
  • Then there’s that other unwelcome classroom visitor: artificial intelligence.
  • stop worrying and love the bot by designing assignments that “help students develop their prompting skills” or “use ChatGPT to generate a first draft,” according to a tip sheet produced by the Center for Teaching and Learning at Washington University in St. Louis.
  • It’s not at all clear that we want a future dominated by A.I.’s amoral, Cheez Whiz version of human thought
  • It is abundantly clear that texting, tagging and chatbotting are making students miserable right now.
  • One recent national survey found that 60 percent of American college students reported the symptoms of at least one mental health problem and that 15 percent said they were considering suicide
  • A recent meta-analysis of 36 studies of college students’ mental health found a significant correlation between longer screen time and higher risk of anxiety and depression
  • And while social media can sometimes help suffering students connect with peers, research on teenagers and college students suggests that overall, the support of a virtual community cannot compensate for the vortex of gossip, bullying and Instagram posturing that is bound to rot any normal person’s self-esteem.
  • We need an intervention: maybe not a vow of silence but a bold move to put the screens, the pinging notifications and creepy humanoid A.I. chatbots in their proper place
  • it does mean selectively returning to the university’s roots in the monastic schools of medieval Europe and rekindling the old-fashioned quest for meaning.
  • Colleges should offer a radically low-tech first-year program for students who want to apply: a secular monastery within the modern university, with a curated set of courses that ban glowing rectangles of any kind from the classroom
  • Students could opt to live in dorms that restrict technology, too
  • I prophesy that universities that do this will be surprised by how much demand there is. I frequently talk to students who resent the distracting laptops all around them during class. They feel the tug of the “imaginary string attaching me to my phone, where I have to constantly check it,”
  • Many, if not most, students want the elusive experience of uninterrupted thought, the kind where a hash of half-baked notions slowly becomes an idea about the world.
  • Even if your goal is effective use of the latest chatbot, it behooves you to read books in hard copies and read enough of them to learn what an elegant paragraph sounds like. How else will students recognize when ChatGPT churns out decent prose instead of bureaucratic drivel?
  • Most important, students need head space to think about their ultimate values.
  • His course offers a chance to temporarily exchange those unconscious structures for a set of deliberate, countercultural ones.
  • here are the student learning outcomes universities should focus on: cognitive endurance and existential clarity.
  • Contemplation and marathon reading are not ends in themselves or mere vacations from real life but are among the best ways to figure out your own answer to the question of what a human being is for
  • When students finish, they can move right into their area of specialization and wire up their skulls with all the technology they want, armed with the habits and perspective to do so responsibly
  • it’s worth learning from the radicals. Dr. McDaniel, the religious studies professor at Penn, has a long history with different monastic traditions. He grew up in Philadelphia, educated by Hungarian Catholic monks. After college, he volunteered in Thailand and Laos and lived as a Buddhist monk.
  • He found that no amount of academic reading could help undergraduates truly understand why “people voluntarily take on celibacy, give up drinking and put themselves under authorities they don’t need to,” he told me. So for 20 years, he has helped students try it out — and question some of their assumptions about what it means to find themselves.
  • “On college campuses, these students think they’re all being individuals, going out and being wild,” he said. “But they’re in a playpen. I tell them, ‘You know you’ll be protected by campus police and lawyers. You have this entire apparatus set up for you. You think you’re being an individual, but look at your four friends: They all look exactly like you and sound like you. We exist in these very strict structures we like to pretend don’t exist.’”
  • Colleges could do all this in classes integrated with general education requirements: ideally, a sequence of great books seminars focused on classic texts from across different civilizations.
  • “For the last 1,500 years, Benedictines have had to deal with technology,” Placid Solari, the abbot there, told me. “For us, the question is: How do you use the tool so it supports and enhances your purpose or mission and you don’t get owned by it?”
  • for novices at his monastery, “part of the formation is discipline to learn how to control technology use.” After this initial time of limited phone and TV “to wean them away from overdependence on technology and its stimulation,” they get more access and mostly make their own choices.
  • Evan Lutz graduated this May from Belmont Abbey with a major in theology. He stressed the special Catholic context of Belmont’s resident monks; if you experiment with monastic practices without investigating the whole worldview, it can become a shallow kind of mindfulness tourism.
  • The monks at Belmont Abbey do more than model contemplation and focus. Their presence compels even non-Christians on campus to think seriously about vocation and the meaning of life. “Either what the monks are doing is valuable and based on something true, or it’s completely ridiculous,” Mr. Lutz said. “In both cases, there’s something striking there, and it asks people a question.”
  • Pondering ultimate questions and cultivating cognitive endurance should not be luxury goods.
  • David Peña-Guzmán, who teaches philosophy at San Francisco State University, read about Dr. McDaniel’s Existential Despair course and decided he wanted to create a similar one. He called it the Reading Experiment. A small group of humanities majors gathered once every two weeks for five and a half hours in a seminar room equipped with couches and a big round table. They read authors ranging from Jean-Paul Sartre to Frantz Fanon
  • “At the beginning of every class I’d ask students to turn off their phones and put them in ‘the Basket of Despair,’ which was a plastic bag,” he told me. “I had an extended chat with them about accessibility. The point is not to take away the phone for its own sake but to take away our primary sources of distraction. Students could keep the phone if they needed it. But all of them chose to part with their phones.”
  • Dr. Peña-Guzmán’s students are mostly working-class, first-generation college students. He encouraged them to be honest about their anxieties by sharing his own: “I said, ‘I’m a very slow reader, and it’s likely some or most of you will get further in the text than me because I’m E.S.L. and read quite slowly in English.’”
  • For his students, the struggle to read long texts is “tied up with the assumption that reading can happen while multitasking and constantly interacting with technologies that are making demands on their attention, even at the level of a second,”
  • “These draw you out of the flow of reading. You get back to the reading, but you have to restart the sentence or even the paragraph. Often, because of these technological interventions into the reading experience, students almost experience reading backward — as constant regress, without any sense of progress. The more time they spend, the less progress they make.”
  • Dr. Peña-Guzmán dismissed the idea that a course like his is suitable only for students who don’t have to worry about holding down jobs or paying off student debt. “I’m worried by this assumption that certain experiences that are important for the development of personality, for a certain kind of humanistic and spiritual growth, should be reserved for the elite, especially when we know those experiences are also sources of cultural capital,
  • Courses like the Reading Experiment are practical, too, he added. “I can’t imagine a field that wouldn’t require some version of the skill of focused attention.”
  • The point is not to reject new technology but to help students retain the upper hand in their relationship with it
  • Ms. Rodriguez said that before she took Living Deliberately and Existential Despair, she didn’t distinguish technology from education. “I didn’t think education ever went without technology. I think that’s really weird now. You don’t need to adapt every piece of technology to be able to learn better or more,” she said. “It can form this dependency.”
  • The point of college is to help students become independent humans who can choose the gods they serve and the rules they follow rather than allow someone else to choose for them
  • The first step is dethroning the small silicon idol in their pocket — and making space for the uncomfortable silence and questions that follow
Javier E

How a dose of MDMA transformed a white supremacist - BBC Future - 0 views

  • February 2020, Harriet de Wit, a professor of psychiatry and behavioural science at the University of Chicago, was running an experiment on whether the drug MDMA increased the pleasantness of social touch in healthy volunteers
  • The latest participant in the double-blind trial, a man named Brendan, had filled out a standard questionnaire at the end. Strangely, at the very bottom of the form, Brendan had written in bold letters: "This experience has helped me sort out a debilitating personal issue. Google my name. I now know what I need to do."
  • They googled Brendan's name, and up popped a disturbing revelation: until just a couple of months before, Brendan had been the leader of the US Midwest faction of Identity Evropa, a notorious white nationalist group rebranded in 2019 as the American Identity Movement. Two months earlier, activists at Chicago Antifascist Action had exposed Brendan's identity, and he had lost his job.
  • ...40 more annotations...
  • "Go ask him what he means by 'I now know what I need to do,'" she instructed Bremmer. "If it's a matter of him picking up an automatic rifle or something, we have to intervene."
  • As he clarified to Bremmer, love is what he had just realised he had to do. "Love is the most important thing," he told the baffled research assistant. "Nothing matters without love."
  • When de Wit recounted this story to me nearly two years after the fact, she still could hardly believe it. "Isn't that amazing?" she said. "It's what everyone says about this damn drug, that it makes people feel love. To think that a drug could change somebody's beliefs and thoughts without any expectations – it's mind-boggling."
  • Over the past few years, I've been investigating the scientific research and medical potential of MDMA for a book called "I Feel Love: MDMA and the Quest for Connection in a Fractured World". I learnt how this once-vilified drug is now re-emerging as a therapeutic agent – a role it previously played in the 1970s and 1980s, prior to its criminalisation
  • He attended the notorious "Unite the Right" rally in Charlottesville and quickly rose up the ranks of his organisation, first becoming the coordinator for Illinois and then the entire Midwest. He travelled to Europe and around the US to meet other white nationalist groups, with the ultimate goal of taking the movement mainstream
  • some researchers have begun to wonder if it could be an effective tool for pushing people who are already somehow primed to reconsider their ideology toward a new way of seeing things
  • While MDMA cannot fix societal-level drivers of prejudice and disconnection, on an individual basis it can make a difference. In certain cases, the drug may even be able to help people see through the fog of discrimination and fear that divides so many of us.
  • in December 2021 I paid Brendan a visit
  • What I didn't expect was how ordinary the 31-year-old who answered the door would appear to be: blue plaid button-up shirt, neatly cropped hair, and a friendly smile.
  • Brendan grew up in an affluent Chicago suburb in an Irish Catholic family. He leaned liberal in high school but got sucked into white nationalism at the University of Illinois Urbana-Champaign, where he joined a fraternity mostly composed of conservative Republican men, began reading antisemitic conspiracy books, and fell down a rabbit hole of racist, sexist content online. Brendan was further emboldened by the populist rhetoric of Donald Trump during his presidential campaign. "His speech talking about Mexicans being rapists, the fixation on the border wall and deporting everyone, the Muslim ban – I didn't really get white nationalism until Trump started running for president," Brendan said.
  • If this comes to pass, MDMA – and other psychedelics-assisted therapy – could transform the field of mental health through widespread clinical use in the US and beyond, for addressing trauma and possibly other conditions as well, including substance use disorders, depression and eating disorders.
  • A group of anti-fascist activists published identifying information about him and more than 100 other people in Identity Evropa. He was immediately fired from his job and ostracised by his siblings and friends outside white nationalism.
  • When Brendan saw a Facebook ad in early 2020 for some sort of drug trial at the University of Chicago, he decided to apply just to have something to do and to earn a little money
  • At the time, Brendan was "still in the denial stage" following his identity becoming public, he said. He was racked with regret – not over his bigoted views, which he still held, but over the missteps that had landed him in this predicament.
  • About 30 minutes after taking the pill, he started to feel peculiar. "Wait a second – why am I doing this? Why am I thinking this way?" he began to wonder. "Why did I ever think it was okay to jeopardise relationships with just about everyone in my life?"
  • Just then, Bremmer came to collect Brendan to start the experiment. Brendan slid into an MRI, and Bremmer started tickling his forearm with a brush and asked him to rate how pleasant it felt. "I noticed it was making me happier – the experience of the touch," Brendan recalled. "I started progressively rating it higher and higher." As he relished the pleasurable feeling, a single, powerful word popped into his mind: connection.
  • It suddenly seemed so obvious: connections with other people were all that mattered. "This is stuff you can't really put into words, but it was so profound," Brendan said. "I conceived of my relationships with other people not as distinct boundaries with distinct entities, but more as we-are-all-on
  • I realised I'd been fixated on stuff that doesn't really matter, and is just so messed up, and that I'd been totally missing the point. I hadn't been soaking up the joy that life has to offer."
  • Brendan hired a diversity, equity, and inclusion consultant to advise him, enrolled in therapy, began meditating, and started working his way through a list of educational books. S still regularly communicates with Brendan and, for his part, thinks that Brendan is serious in his efforts to change
  • "I think he is trying to better himself and work on himself, and I do think that experience with MDMA had an impact on him. It's been a touchstone for growth, and over time, I think, the reflection on that experience has had a greater impact on him than necessarily the experience itself."
  • Brendan is still struggling, though, to make the connections with others that he craves. When I visited him, he'd just spent Thanksgiving alone
  • He also has not completely abandoned his bigoted ideology, and is not sure that will ever be possible. "There are moments when I have racist or antisemitic thoughts, definitely," he said. "But now I can recognise that those kinds of thought patterns are harming me more than anyone else."
  • it's not without precedent. In the 1980s, for example, an acquaintance of early MDMA-assisted therapy practitioner Requa Greer administered the drug to a pilot who had grown up in a racist home and had inherited those views. The pilot had always accepted his bigoted way of thinking as being a normal, accurate reflection of the way things were. MDMA, however, "gave him a clear vision that unexamined racism was both wrong and mean," Greer says
  • Encouraging stories of seemingly spontaneous change appear to be exceptions to the norm, however, and from a neurological point of view, this makes sense
  • Research shows that oxytocin – one of the key hormones that MDMA triggers neurons to release – drives a "tend and defend" response across the animal kingdom. The same oxytocin that causes a mother bear to nurture her newborn, for example, also fuels her rage when she perceives a threat to her cub. In people, oxytocin likewise strengthens caregiving tendencies toward liked members of a person's in-group and strangers perceived to belong to the same group, but it increases hostility toward individuals from disliked groups
  • In a 2010 study published in Science, for example, men who inhaled oxytocin were three times more likely to donate money to members of their team in an economic game, as well as more likely to harshly punish competing players for not donating enough.
  • According to research published this week in Nature by Johns Hopkins University neuroscientist Gül Dölen, MDMA and other psychedelics – including psilocybin, LSD, ketamine and ibogaine – work therapeutically by reopening a critical period in the brain. Critical periods are finite windows of impressionability that typically occur in childhood, when our brains are more malleable and primed to learn new things
  • Dölen and her colleagues' findings likewise indicate that, without the proper set and setting, MDMA and other psychedelics probably do not reopen critical periods, which means they will not have a spontaneous, revelatory effect for ridding someone of bigoted beliefs.
  • In the West, plenty of members of right-wing authoritarian political movements, including neo-Nazi groups, also have track records of taking MDMA and other psychedelics
  • This suggests, researchers write, that psychedelics are nonspecific, "politically pluripotent" amplifiers of whatever is going on in somebody's head, with no particular directional leaning "on the axes of conservatism-liberalism or authoritarianism-egalitarianism."
  • That said, a growing body of scientific evidence indicates that the human capacities for compassion, kindness, empathy, gratitude, altruism, fairness, trust, and cooperation are core features of our natures
  • As Emory University primatologist Frans de Waal wrote, "Empathy is the one weapon in the human repertoire that can rid us of the curse of xenophobia."
  • Ginsberg also envisions using the drug in workshops aimed at eliminating racism, or as a means of bringing people together from opposite sides of shared cultural histories to help heal intergenerational trauma. "I think all psychedelics have a role to play, but I think MDMA has a particularly key role because you're both expanded and present, heart-open and really able to listen in a new way," Ginsberg says. "That's something really powerful."
  • "If you give MDMA to hard-core haters on each side of an issue, I don't think it'll do a lot of good,"
  • if you start with open-minded people on both sides, then I think it can work. You can improve communications and build empathy between groups, and help people be more capable of analysing the world from a more balanced perspective rather than from fear-based, anxiety-based distrust."
  • In 2021, Ginsberg and Doblin were coauthors on a study investigating the possibility of using ayahuasca – a plant-based psychedelic – in group contexts to bridge divides between Palestinians and Israelis, with positive findings
  • "I kind of have a fantasy that maybe as we get more reacquainted with psychedelics, there could be group-based experiences that build community resiliency and are intentionally oriented toward breaking down barriers between people, having people see things from other perspectives and detribalising our society,
  • "But that's not going to happen on its own. It would have to be intentional, and – if it happens – it would probably take multiple generations."
  • Based on his experience with extremism, Brendan agreed with expert takes that no drug, on its own, will spontaneously change the minds of white supremacists or end political conflict in the US
  • he does think that, with the right framing and mindset, MDMA could be useful for people who are already at least somewhat open to reconsidering their ideologies, just as it was for him. "It helped me see things in a different way that no amount of therapy or antiracist literature ever would have done," he said. "I really think it was a breakthrough experience."
Javier E

For Chat-Based AI, We Are All Once Again Tech Companies' Guinea Pigs - WSJ - 0 views

  • The companies touting new chat-based artificial-intelligence systems are running a massive experiment—and we are the test subjects.
  • In this experiment, Microsoft, OpenAI and others are rolling out on the internet an alien intelligence that no one really understands, which has been granted the ability to influence our assessment of what’s true in the world. 
  • Companies have been cautious in the past about unleashing this technology on the world. In 2019, OpenAI decided not to release an earlier version of the underlying model that powers both ChatGPT and the new Bing because the company’s leaders deemed it too dangerous to do so, they said at the time.
  • ...26 more annotations...
  • Microsoft leaders felt “enormous urgency” for it to be the company to bring this technology to market, because others around the world are working on similar tech but might not have the resources or inclination to build it as responsibly, says Sarah Bird, a leader on Microsoft’s responsible AI team.
  • One common starting point for such models is what is essentially a download or “scrape” of most of the internet. In the past, these language models were used to try to understand text, but the new generation of them, part of the revolution in “generative” AI, uses those same models to create texts by trying to guess, one word at a time, the most likely word to come next in any given sequence.
  • Wide-scale testing gives Microsoft and OpenAI a big competitive edge by enabling them to gather huge amounts of data about how people actually use such chatbots. Both the prompts users input into their systems, and the results their AIs spit out, can then be fed back into a complicated system—which includes human content moderators paid by the companies—to improve it.
  • Being first to market with a chat-based AI gives these companies a huge initial lead over companies that have been slower to release their own chat-based AIs, such as Google.
  • rarely has an experiment like Microsoft and OpenAI’s been rolled out so quickly, and at such a broad scale.
  • Among those who build and study these kinds of AIs, Mr. Altman’s case for experimenting on the global public has inspired responses ranging from raised eyebrows to condemnation.
  • The fact that we’re all guinea pigs in this experiment doesn’t mean it shouldn’t be conducted, says Nathan Lambert, a research scientist at the AI startup Huggingface.
  • “I would kind of be happier with Microsoft doing this experiment than a startup, because Microsoft will at least address these issues when the press cycle gets really bad,” says Dr. Lambert. “I think there are going to be a lot of harms from this kind of AI, and it’s better people know they are coming,” he adds.
  • Others, particularly those who study and advocate for the concept of “ethical AI” or “responsible AI,” argue that the global experiment Microsoft and OpenAI are conducting is downright dangerous
  • Celeste Kidd, a professor of psychology at University of California, Berkeley, studies how people acquire knowledge
  • Her research has shown that people learning about new things have a narrow window in which they form a lasting opinion. Seeing misinformation during this critical initial period of exposure to a new concept—such as the kind of misinformation that chat-based AIs can confidently dispense—can do lasting harm, she says.
  • Dr. Kidd likens OpenAI’s experimentation with AI to exposing the public to possibly dangerous chemicals. “Imagine you put something carcinogenic in the drinking water and you were like, ‘We’ll see if it’s carcinogenic.’ After, you can’t take it back—people have cancer now,”
  • Part of the challenge with AI chatbots is that they can sometimes simply make things up. Numerous examples of this tendency have been documented by users of both ChatGPT and OpenAI
  • These models also tend to be riddled with biases that may not be immediately apparent to users. For example, they can express opinions gleaned from the internet as if they were verified facts
  • When millions are exposed to these biases across billions of interactions, this AI has the potential to refashion humanity’s views, at a global scale, says Dr. Kidd.
  • OpenAI has talked publicly about the problems with these systems, and how it is trying to address them. In a recent blog post, the company said that in the future, users might be able to select AIs whose “values” align with their own.
  • “We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society,” the post said.
  • Eliminating made-up information and bias from chat-based search engines is impossible given the current state of the technology, says Mark Riedl, a professor at Georgia Institute of Technology who studies artificial intelligence
  • He believes the release of these technologies to the public by Microsoft and OpenAI is premature. “We are putting out products that are still being actively researched at this moment,” he adds. 
  • in other areas of human endeavor—from new drugs and new modes of transportation to advertising and broadcast media—we have standards for what can and cannot be unleashed on the public. No such standards exist for AI, says Dr. Riedl.
  • To modify these AIs so that they produce outputs that humans find both useful and not-offensive, engineers often use a process called “reinforcement learning through human feedback.
  • that’s a fancy way of saying that humans provide input to the raw AI algorithm, often by simply saying which of its potential responses to a query are better—and also which are not acceptable at all.
  • Microsoft’s and OpenAI’s globe-spanning experiments on millions of people are yielding a fire hose of data for both companies. User-entered prompts and the AI-generated results are fed back through a network of paid human AI trainers to further fine-tune the models,
  • Huggingface’s Dr. Lambert says that any company, including his own, that doesn’t have this river of real-world usage data helping it improve its AI is at a huge disadvantage
  • In chatbots, in some autonomous-driving systems, in the unaccountable AIs that decide what we see on social media, and now, in the latest applications of AI, again and again we are the guinea pigs on which tech companies are testing new technology.
  • It may be the case that there is no other way to roll out this latest iteration of AI—which is already showing promise in some areas—at scale. But we should always be asking, at times like these: At what price?
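The "guess the next word" mechanism described in the excerpts above can be sketched with a toy bigram model. A real system trains on a scrape of much of the internet; the tiny corpus and greedy word-at-a-time decoding here are purely illustrative assumptions, not the actual models the article discusses:

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus for the "scrape" a real model trains on (illustrative).
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which: a bigram table.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Guess the most likely next word given the previous one."""
    return following[prev].most_common(1)[0][0]

# Generate text one word at a time, always taking the likeliest continuation.
word, out = "the", ["the"]
for _ in range(4):
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # → the cat sat on the
```

Production models condition on long contexts rather than a single preceding word, but the core loop — predict, emit, repeat — is the same.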
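The "reinforcement learning through human feedback" step the excerpts describe — humans simply saying which of two responses is better — can be sketched as a Bradley-Terry-style update on a toy reward score. The two canned responses, the single-scalar reward model, and the learning rate are all assumptions for illustration, not Microsoft's or OpenAI's actual pipeline:

```python
import math

# Toy reward model: a single score per canned response (illustrative).
reward = {"grounded answer": 0.0, "made-up answer": 0.0}

def update(preferred, rejected, lr=0.5):
    """Nudge scores so the human-preferred response ranks higher.
    This is the gradient step for the Bradley-Terry log-likelihood
    of one pairwise comparison."""
    p_pref = 1.0 / (1.0 + math.exp(reward[rejected] - reward[preferred]))
    reward[preferred] += lr * (1.0 - p_pref)
    reward[rejected] -= lr * (1.0 - p_pref)

# Human labelers repeatedly mark the grounded answer as better.
for _ in range(20):
    update("grounded answer", "made-up answer")

print(reward["grounded answer"] > reward["made-up answer"])  # True
```

This is why the article calls user traffic a "fire hose of data": every preference judgment is one more comparison to feed an update like this.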
Javier E

Heaven Is Real: A Doctor's Experience With the Afterlife - Print View - The Daily Beast - 0 views

  • As a neurosurgeon, I did not believe in the phenomenon of near-death experiences. I grew up in a scientific world, the son of a neurosurgeon. I followed my father’s path and became an academic neurosurgeon, teaching at Harvard Medical School and other universities. I understand what happens to the brain when people are near death, and I had always believed there were good scientific explanations for the heavenly out-of-body journeys described by those who narrowly escaped death.
  • In the fall of 2008, however, after seven days in a coma during which the human part of my brain, the neocortex, was inactivated, I experienced something so profound that it gave me a scientific reason to believe in consciousness after death.
  • All the chief arguments against near-death experiences suggest that these experiences are the results of minimal, transient, or partial malfunctioning of the cortex. My near-death experience, however, took place not while my cortex was malfunctioning, but while it was simply off. This is clear from the severity and duration of my meningitis, and from the global cortical involvement documented by CT scans and neurological examinations. According to current medical understanding of the brain and mind, there is absolutely no way that I could have experienced even a dim and limited consciousness during my time in the coma, much less the hyper-vivid and completely coherent odyssey I underwent.
  • ...2 more annotations...
  • What happened to me demands explanation. Modern physics tells us that the universe is a unity—that it is undivided. Though we seem to live in a world of separation and difference, physics tells us that beneath the surface, every object and event in the universe is completely woven up with every other object and event. There is no true separation. Before my experience these ideas were abstractions. Today they are realities. Not only is the universe defined by unity, it is also—I now know—defined by love. The universe as I experienced it in my coma is—I have come to see with both shock and joy—the same one that both Einstein and Jesus were speaking of in their (very) different ways.
  • Today many believe that the living spiritual truths of religion have lost their power, and that science, not faith, is the road to truth. Before my experience I strongly suspected that this was the case myself. But I now understand that such a view is far too simple. The plain fact is that the materialist picture of the body and brain as the producers, rather than the vehicles, of human consciousness is doomed. In its place a new view of mind and body will emerge, and in fact is emerging already. This view is scientific and spiritual in equal measure and will value what the greatest scientists of history themselves always valued above all: truth.
Javier E

The decline effect and the scientific method : The New Yorker - 3 views

  • The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable.
  • This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.
  • ...39 more annotations...
  • If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe?
  • Schooler demonstrated that subjects shown a face and asked to describe it were much less likely to recognize the face when shown it later than those who had simply looked at it. Schooler called the phenomenon “verbal overshadowing.”
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time.
  • yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance.
  • Jennions admits that his findings are troubling, but expresses a reluctance to talk about them
  • publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for
  • Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments.
  • One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.”
  • Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process.
  • “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • For Simmons, the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals.
  • the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher.
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials.
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong.
  • “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies
  • That’s why Schooler argues that scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,”
  • The current “obsession” with replicability distracts from the real problem, which is faulty design.
  • “Every researcher should have to spell out, in advance, how many subjects they’re going to use, and what exactly they’re testing, and what constitutes a sufficient level of proof. We have the tools to be much more transparent about our experiments.”
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,”
  • scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand.
  • The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected
  • This suggests that the decline effect is actually a decline of illusion. While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that.
  • Many scientific theories continue to be considered true even after failing numerous experimental tests.
  • Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.)
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.)
  • The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe. ♦
Javier E

For Some, 'Tis a Gift to Be Simple - NYTimes.com - 0 views

  • older people often draw as much happiness from ordinary experiences — like a day in the library — as they do from extraordinary ones.
  • If you can cover basic expenses, pursuing inexpensive, everyday things that bring comfort and satisfaction can lead to happiness equal to jetting about on international trips in your 70s and 80s.
  • Ms. Mogilner wanted to know what sort of experiences made people the most happy and why.
  • ...7 more annotations...
  • they conducted eight studies in which they asked participants about their recollections of, planning for or daydreaming about various happiness-making experiences. They also checked to see what sort of things their subjects were posting about on Facebook
  • ordinary experiences happen often and occur in the course of everyday life while extraordinary ones are much more rare.
  • the older people got, the more happiness ordinary experiences delivered. In fact, the happiness-making potential of everyday pursuits eventually grows equal to that of ones that are rarer.
  • Ms. Mogilner explored some of the factors besides frequency that separate ordinary and extraordinary experiences and seized on one in particular: the tendency for extraordinary experiences to be self-defining in some way.
  • One way to think about this is to consider the various adventures younger people pursue to find themselves. “That sort of exploration to see what fits and feels like you may be the process by which you can start to figure out what sort of ordinary life to build,”
  • Once you know yourself, the deliberate pursuit of more ordinary things can then deliver that same level of happiness. It doesn’t hurt, either, that you may appreciate the ordinary much more once you’re more aware of the decreasing number of years you have left to enjoy it.
  • there ought to be much comfort in the evidence that everyday things that cost little or nothing can deliver the same amount of joy. A garden. The elaborate meal that emerges from it and the spare time to invent the recipes. A return to a neglected musical instrument. All-you-can-consume subscriptions to Netflix and Spotify, with watchlists and playlists that stretch on for years.
Javier E

The Dark Knight of the Soul - Tomas Rocha - The Atlantic - 1 views

  • Her investigation of this phenomenon, called "The Dark Night Project," is an effort to document, analyze, and publicize accounts of the adverse effects of contemplative practices.
  • According to a survey by the National Institutes of Health, 10 percent of respondents—representing more than 20 million adult Americans—tried meditating between 2006 and 2007, a 1.8 percent increase from a similar survey in 2002. At that rate, by 2017, there may be more than 27 million American adults with a recent meditation experience.
  • "We're not being thorough or honest in our study of contemplative practice," says Britton, a critique she extends to the entire field of researchers studying meditation, including herself.
  • ...9 more annotations...
  • this widespread assumption—that meditation exists only for stress reduction and labor productivity, "because that's what Americans value"—narrows the scope of the scientific lens. When the time comes to develop hypotheses around the effects of meditation, the only acceptable—and fundable—research questions are the ones that promise to deliver the answers we want to hear.
  • the oscillations of spiritual life parallel the experience of learning to walk, very similar to the metaphor Saint John of the Cross uses in terms of a mother weaning a child … first you are held up by a parent and it is exhilarating and wonderful, and then they take their hands away and it is terrifying and the child feels abandoned."
  • while meditators can better avoid difficult experiences under the guidance of seasoned teachers, there are cases where such experiences are useful signs of progress in contemplative development. Distinguishing between the two, however, remains a challenge.
  • One of her team's preliminary tasks—a sort of archeological literature review—was to pore through the written canons of Theravadin, Tibetan, and Zen Buddhism, as well as texts within Christianity, Judaism, and Sufism. "Not every text makes clear reference to a period of difficulty on the contemplative path," Britton says, "but many did." Related Story What Happens to the Brain During Spiritual Experiences?
  • "Does it promote good relationships? Does it reduce cortisol? Does it help me work harder?" asks Britton, referencing these more lucrative questions. Because studies have shown that meditation does satisfy such interests, the results, she says, are vigorously reported to the public. "But," she cautions, "what about when meditation plays a role in creating an experience that then leads to a breakup, a psychotic break, or an inability to focus at work?"
  • Given the juggernaut—economic and otherwise—behind the mindfulness movement, there is a lot at stake in exploring a shadow side of meditation. Upton Sinclair once observed how difficult it is to get a man to understand something when his salary depends on his not understanding it.
  • Among the nearly 40 dark night subjects her team has formally interviewed over the past few years, she says most were "fairly out of commission, fairly impaired for between six months [and] more than 20 years."
  • The Dark Night Project is young, and still very much in progress. Researchers in the field are just beginning to carefully collect and sort through the narratives of difficult meditation-related experiences. Britton has presented her findings at major Buddhist and scientific conferences, prominent retreat centers, and even to the Dalai Lama at the 24th Mind and Life Dialogue in 2012.
  • "There are parts of me that just want meditation to be all good. I find myself in denial sometimes, where I just want to forget all that I've learned and go back to being happy about mindfulness and promoting it, but then I get another phone call and meet someone who's in distress, and I see the devastation in their eyes, and I can't deny that this is happening. As much as I want to investigate and promote contemplative practices and contribute to the well-being of humanity through that, I feel a deeper commitment to what's actually true."
Javier E

Actually, Some Material Goods Can Make You Happy - The Atlantic - 1 views

  • In many studies, participants are asked to think about material items as purchases made "in order to have," in contrast with experiences—purchases made "in order to do." This, they say, neglects a category of goods: those made in order to have experiences,  such as electronics, musical instruments, and sports and outdoors gear.
  • Do such "experiential goods," as Guevarra and Howell call them, leave our well-being unimproved, as is the case with most goods, or do they contribute positively to our happiness?
  • In a series of experiments, Guevarra and Howell find that the latter is the case: experiential goods made people happier, just like the experiences themselves.
  • What is it about experiences? It's not the fact of having an experience per se but that experiences can "satisf[y] the psychological needs of autonomy, competence, and relatedness." Talking to friends, mastering a skill, expressing oneself through art or writing—all of these provide a measure of fulfillment that merely owning a thing cannot.
  • Experiential goods fit in under this framework because they likewise can satisfy those same psychological needs. A musical instrument, for example, makes possible a sort of human happiness hat trick: Finely tune your skills, get the happiness of mastery (competence); play your heart out, get the happiness of self-expression (autonomy); jam with friends, get the happiness of connecting with others (relatedness).
  • "Spend your money on experiences, not things" remains a good basic rule. But it's possible to tweak it slightly to better reflect the drivers of human happiness: "Spend your money on competence, autonomy, and relatedness." That doesn't quite have the same ring to it, but it'll guide you wisely.
sissij

Study Finds Foreign Experiences Can Cause People To Behave Immorally | Big Think - 1 views

  • Travel experience is valued in globalized society. “Loves to travel” is tacked onto countless dating profiles.
  • Conventional wisdom holds that travel makes us well-rounded people. But what actually are the psychological effects of travel? 
  • New research suggests there's a dark side lurking underneath the well-established benefits of foreign experiences.
  • Past studies show that travel can increase cognitive flexibility, defined as the ability to shift thoughts and adapt behavior in response to changing situational demands.
  • New research, however, suggests the psychological benefits of travel come at a cost.
  • showed that people with more travel experiences were more likely to cheat on tests presented by researchers, behavior they defined as "morally unacceptable to the larger community."
  • The idea is that because travel requires people to break mental rules, it might also encourage them to break moral rules.
  • Broad foreign experiences expose people to many different – and possibly conflicting – moral codes, leading them to view morality as relative. 
  • One important distinction the study authors emphasized was that only breadth of foreign experience, not depth of foreign experience, increased immoral behavior.
  • Notably, social class and age didn't appear to influence results in this study and others.
  •  
    We are often told that traveling can broaden our world view and benefit our mindset. There is also an old saying that traveling a thousand miles is more beneficial than reading a thousand books. The research in this article shows us the other side of the coin. Before reading this article, I had never connected immorality with short-term travel to multiple countries. I think this shows how complicated human social studies are: even two subjects that seem far apart can affect each other. There is indeed a leap of imagination in human social studies. Transportation is very convenient now, so people like to travel to different countries. However, many of those trips are just sight-seeing. In order to really benefit from traveling, we should stay longer and experience local life from the perspective of a resident, not a tourist. --Sissi (3/17/2017)
Javier E

Who Decides What's Racist? - Persuasion - 1 views

  • The implication of Hannah-Jones’s tweet and candidate Biden’s quip seems to be that you can have African ancestry, dark skin, textured hair, and perhaps even some “culturally black” traits regarding tastes in food, music, and ways of moving through the world. But unless you hold the “correct” political beliefs and values, you are not authentically black.
  • In a now-deleted tweet from May 22, 2020, Nikole Hannah-Jones, a Pulitzer Prize-winning reporter for The New York Times, opined, “There is a difference between being politically black and being racially black.”
  • Shelly Eversley’s The Real Negro suggests that in the latter half of the 20th century, the criteria of what constitutes “authentic” black experience moved from perceptible outward signs, like the fact of being restricted to segregated public spaces and speaking in a “black” dialect, to psychological, interior signs. In this new understanding, Eversley writes, “the ‘truth’ about race is felt, not performed, not seen.”
  • This insight goes a long way to explaining the current fetishization of experience, especially if it is (redundantly) “lived.” Black people from all walks of life find themselves deferred to by non-blacks
  • black people certainly don’t all “feel” or “experience” the same things. Nor do they all "experience" the same event in an identical way. Finally, even when their experiences are similar, they don’t all think about or interpret their experiences in the same way.
  • we must begin to attend in a serious way to heterodox black voices
  • This need is especially urgent given the ideological homogeneity of the “antiracist” outlook and efforts of elite institutions, including media, corporations, and an overwhelmingly progressive academia. For the arbiters of what it means to be black that dominate these institutions, there is a fairly narrowly prescribed “authentic” black narrative, black perspective, and black position on every issue that matters.
  • When we hear the demand to “listen to black voices,” what is usually meant is “listen to the right black voices.”
  • Many non-black people have heard a certain construction of “the black voice” so often that they are perplexed by black people who don’t fit the familiar model.
  • Similarly, many activists are not in fact “pro-black”: they are pro a rather specific conception of “blackness” that is not necessarily endorsed by all black people.
  • This is where our new website, Free Black Thought (FBT), seeks to intervene in the national conversation. FBT honors black individuals for their distinctive, diverse, and heterodox perspectives, and offers up for all to hear a polyphony, perhaps even a cacophony, of different and differing black voices.
  • The practical effects of the new antiracism are everywhere to be seen, but in few places more clearly than in our children’s schools
  • one might reasonably question what could be wrong with teaching children “antiracist” precepts. But the details here are full of devils.
  • To take an example that could affect millions of students, the state of California has adopted a statewide Ethnic Studies Model Curriculum (ESMC) that reflects “antiracist” ideas. The ESMC’s content inadvertently confirms that contemporary antiracism is often not so much an extension of the civil rights movement but in certain respects a tacit abandonment of its ideals.
  • It has thus been condemned as a “perversion of history” by Dr. Clarence Jones, MLK’s legal counsel, advisor, speechwriter, and Scholar in Residence at the Martin Luther King, Jr. Institute at Stanford University:
  • Essentialist thinking about race has also gained ground in some schools. For example, in one elite school, students “are pressured to conform their opinions to those broadly associated with their race and gender and to minimize or dismiss individual experiences that don’t match those assumptions.” These students report feeling that “they must never challenge any of the premises of [the school’s] ‘antiracist’ teachings.”
  • In contrast, the non-white students were taught that they were “folx (sic) who do not benefit from their social identities,” and “have little to no privilege and power.”
  • The children with “white” in their identity map were taught that they were part of the “dominant culture” which has been “created and maintained…to hold power and stay in power.” They were also taught that they had “privilege” and that “those with privilege have power over others.
  • Or consider the third-grade students at R.I. Meyerholz Elementary School in Cupertino, California
  • Or take New York City’s public school system, one of the largest educators of non-white children in America. In an effort to root out “implicit bias,” former Schools Chancellor Richard Carranza had his administrators trained in the dangers of “white supremacy culture.”
  • A slide from a training presentation listed “perfectionism,” “individualism,” “objectivity” and “worship of the written word” as white supremacist cultural traits to be “dismantled,”
  • Finally, some schools are adopting antiracist ideas of the sort espoused by Ibram X. Kendi, according to whom, if metrics such as tests and grades reveal disparities in achievement, the project of measuring achievement must itself be racist.
  • Parents are justifiably worried about such innovations. What black parent wants her child to hear that grading or math are “racist” as a substitute for objective assessment and real learning? What black parent wants her child told she shouldn’t worry about working hard, thinking objectively, or taking a deep interest in reading and writing because these things are not authentically black?
  • Clearly, our children’s prospects for success depend on the public being able to have an honest and free-ranging discussion about this new antiracism and its utilization in schools. Even if some black people have adopted its tenets, many more, perhaps most, hold complex perspectives that draw from a constellation of rather different ideologies.
  • So let’s listen to what some heterodox black people have to say about the new antiracism in our schools.
  • Coleman Hughes, a fellow at the Manhattan Institute, points to a self-defeating feature of Kendi-inspired grading and testing reforms: If we reject high academic standards for black children, they are unlikely to rise to “those same rejected standards” and racial disparity is unlikely to decrease
  • Chloé Valdary, the founder of Theory of Enchantment, worries that antiracism may “reinforce a shallow dogma of racial essentialism by describing black and white people in generalizing ways” and discourage “fellowship among peers of different races.”
  • We hope it’s obvious that the point we’re trying to make is not that everyone should accept uncritically everything these heterodox black thinkers say. Our point in composing this essay is that we all desperately need to hear what these thinkers say so we can have a genuine conversation
  • We promote no particular politics or agenda beyond a desire to offer a wide range of alternatives to the predictable fare emanating from elite mainstream outlets. At FBT, Marxists rub shoulders with laissez-faire libertarians. We have no desire to adjudicate who is “authentically black” or whom to prefer.
Javier E

Eric A. Posner Reviews Jim Manzi's "Uncontrolled" | The New Republic - 0 views

  • Most urgent questions of public policy turn on empirical imponderables, and so policymakers fall back on ideological predispositions or muddle through. Is there a better way?
  • The gold standard for empirical research is the randomized field trial (RFT).
  • The RFT works better than most other types of empirical investigation. Most of us use anecdotes or common sense empiricism to make inferences about the future, but psychological biases interfere with the reliability of these methods
  • Serious empiricists frequently use regression analysis.
  • Regression analysis is inferior to RFT because of the difficulty of ruling out confounding factors (for example, that a gene jointly causes baldness and a preference for tight hats) and of establishing causation
  • RFT has its limitations as well. It is enormously expensive because you must (usually) pay a large number of people to participate in an experiment, though one can obtain a discount if one uses prisoners, especially those in a developing country. In addition, one cannot always generalize from RFTs.
  • academic research proceeds in fits and starts, using RFT when it can, but otherwise relying on regression analysis and similar tools, including qualitative case studies,
  • businesses also use RFT whenever they can. A business such as Wal-Mart, with thousands of stores, might try out some innovation like a new display in a random selection of stores, using the remaining stores as a control group
  • Manzi argues that the RFT—or more precisely, the overall approach to empirical investigation that the RFT exemplifies—provides a way of thinking about public policy.
  • the universe is shaky even where, as in the case of physics, “hard science” plays the dominant role. The scientific method cannot establish truths; it can only falsify hypotheses. The hypotheses come from our daily experience, so even when science prunes away intuitions that fail the experimental method, we can never be sure that the theories that remain standing reflect the truth or just haven’t been subject to the right experiment. And even within its domain, the experimental method is not foolproof. When an experiment contradicts received wisdom, it is an open question whether the wisdom is wrong or the experiment was improperly performed.
  • The book is less interested in the RFT than in the limits of empirical knowledge. Given these limits, what attitude should we take toward government?
  • Much of scientific knowledge turns out to depend on norms of scientific behavior, good faith, convention, and other phenomena that in other contexts tend to provide an unreliable basis for knowledge.
  • Under this view of the world, one might be attracted to the cautious conservatism associated with Edmund Burke, the view that we should seek knowledge in traditional norms and customs, which have stood the test of time and presumably some sort of Darwinian competition—a human being is foolish, the species is wise. There are hints of this worldview in Manzi’s book, though he does not explicitly endorse it. He argues, for example, that we should approach social problems with a bias for the status quo; those who seek to change it carry the burden of persuasion. Once a problem is identified, we should try out our ideas on a small scale before implementing them across society
  • Pursuing the theme of federalism, Manzi argues that the federal government should institutionalize policy waivers, so states can opt out from national programs and pursue their own initiatives. A state should be allowed to opt out of federal penalties for drug crimes, for example.
  • It is one thing to say, as he does, that federalism is useful because we can learn as states experiment with different policies. But Manzi takes away much of the force of this observation when he observes, as he must, that the scale of many of our most urgent problems—security, the economy—is at the national level, so policymaking in response to these problems cannot be left to the states. He also worries about social cohesion, which must be maintained at a national level even while states busily experiment. Presumably, this implies national policy of some sort
  • Manzi’s commitment to federalism and his technocratic approach to policy, which relies so heavily on RFT, sit uneasily together. The RFT is a form of planning: the experimenter must design the RFT and then execute it by recruiting subjects, paying them, and measuring and controlling their behavior. By contrast, experimentation by states is not controlled: the critical element of the RFT—randomization—is absent.
  • The right way to go would be for the national government to conduct experiments by implementing policies in different states (or counties or other local units) by randomizing—that is, by ordering some states to be “treatment” states and other states to be “control” states,
  • Manzi’s reasoning reflects the top-down approach to social policy that he is otherwise skeptical of—although, to be sure, he is willing to subject his proposals to RFTs.
Javier E

Eric Kandel's Visions - The Chronicle Review - The Chronicle of Higher Education - 0 views

  • Judith, "barely clothed and fresh from the seduction and slaying of Holofernes, glows in her voluptuousness. Her hair is a dark sky between the golden branches of Assyrian trees, fertility symbols that represent her eroticism. This young, ecstatic, extravagantly made-up woman confronts the viewer through half-closed eyes in what appears to be a reverie of orgasmic rapture," writes Eric Kandel in his new book, The Age of Insight. Wait a minute. Writes who? Eric Kandel, the Nobel-winning neuroscientist who's spent most of his career fixated on the generously sized neurons of sea snails
  • Kandel goes on to speculate, in a bravura paragraph a few hundred pages later, on the exact neurochemical cognitive circuitry of the painting's viewer:
  • "At a base level, the aesthetics of the image's luminous gold surface, the soft rendering of the body, and the overall harmonious combination of colors could activate the pleasure circuits, triggering the release of dopamine. If Judith's smooth skin and exposed breast trigger the release of endorphins, oxytocin, and vasopressin, one might feel sexual excitement. The latent violence of Holofernes's decapitated head, as well as Judith's own sadistic gaze and upturned lip, could cause the release of norepinephrine, resulting in increased heart rate and blood pressure and triggering the fight-or-flight response. In contrast, the soft brushwork and repetitive, almost meditative, patterning may stimulate the release of serotonin. As the beholder takes in the image and its multifaceted emotional content, the release of acetylcholine to the hippocampus contributes to the storing of the image in the viewer's memory. What ultimately makes an image like Klimt's 'Judith' so irresistible and dynamic is its complexity, the way it activates a number of distinct and often conflicting emotional signals in the brain and combines them to produce a staggeringly complex and fascinating swirl of emotions."
  • His key findings on the snail, for which he shared the 2000 Nobel Prize in Physiology or Medicine, showed that learning and memory change not the neuron's basic structure but rather the nature, strength, and number of its synaptic connections. Further, through focus on the molecular biology involved in a learned reflex like Aplysia's gill retraction, Kandel demonstrated that experience alters nerve cells' synapses by changing their pattern of gene expression. In other words, learning doesn't change what neurons are, but rather what they do.
  • In Search of Memory (Norton), Kandel offered what sounded at the time like a vague research agenda for future generations in the budding field of neuroaesthetics, saying that the science of memory storage lay "at the foothills of a great mountain range." Experts grasp the "cellular and molecular mechanisms," he wrote, but need to move to the level of neural circuits to answer the question, "How are internal representations of a face, a scene, a melody, or an experience encoded in the brain?
  • Since giving a talk on the matter in 2001, he has been piecing together his own thoughts in relation to his favorite European artists
  • The field of neuroaesthetics, says one of its founders, Semir Zeki, of University College London, is just 10 to 15 years old. Through brain imaging and other studies, scholars like Zeki have explored the cognitive responses to, say, color contrasts or ambiguities of line or perspective in works by Titian, Michelangelo, Cubists, and Abstract Expressionists. Researchers have also examined the brain's pleasure centers in response to appealing landscapes.
  • it is fundamental to an understanding of human cognition and motivation. Art isn't, as Kandel paraphrases a concept from the late philosopher of art Denis Dutton, "a byproduct of evolution, but rather an evolutionary adaptation—an instinctual trait—that helps us survive because it is crucial to our well-being." The arts encode information, stories, and perspectives that allow us to appraise courses of action and the feelings and motives of others in a palatable, low-risk way.
  • "as far as activity in the brain is concerned, there is a faculty of beauty that is not dependent on the modality through which it is conveyed but which can be activated by at least two sources—musical and visual—and probably by other sources as well." Specifically, in this "brain-based theory of beauty," the paper says, that faculty is associated with activity in the medial orbitofrontal cortex.
  • It also enables Kandel—building on the work of Gombrich and the psychoanalyst and art historian Ernst Kris, among others—to compare the painters' rendering of emotion, the unconscious, and the libido with contemporaneous psychological insights from Freud about latent aggression, pleasure and death instincts, and other primal drives.
  • Kandel views the Expressionists' art through the powerful multiple lenses of turn-of-the-century Vienna's cultural mores and psychological insights. But then he refracts them further, through later discoveries in cognitive science. He seeks to reassure those who fear that the empirical and chemical will diminish the paintings' poetic power. "In art, as in science," he writes, "reductionism does not trivialize our perception—of color, light, and perspective—but allows us to see each of these components in a new way. Indeed, artists, particularly modern artists, have intentionally limited the scope and vocabulary of their expression to convey, as Mark Rothko and Ad Reinhardt do, the most essential, even spiritual ideas of their art."
  • The author of a classic textbook on neuroscience, he seems here to have written a layman's cognition textbook wrapped within a work of art history.
  • "our initial response to the most salient features of the paintings of the Austrian Modernists, like our response to a dangerous animal, is automatic. ... The answer to James's question of how an object simply perceived turns into an object emotionally felt, then, is that the portraits are never objects simply perceived. They are more like the dangerous animal at a distance—both perceived and felt."
  • If imaging is key to gauging therapeutic practices, it will be key to neuroaesthetics as well, Kandel predicts—a broad, intense array of "imaging experiments to see what happens with exaggeration, distorted faces, in the human brain and the monkey brain," viewers' responses to "mixed eroticism and aggression," and the like.
  • while the visual-perception literature might be richer at the moment, there's no reason that neuroaesthetics should restrict its emphasis to the purely visual arts at the expense of music, dance, film, and theater.
  • although Kandel considers The Age of Insight to be more a work of intellectual history than of science, the book summarizes centuries of research on perception. And so you'll find, in those hundreds of pages between Kandel's introduction to Klimt's "Judith" and the neurochemical cadenza about the viewer's response to it, dossiers on vision as information processing; the brain's three-dimensional-space mapping and its interpretations of two-dimensional renderings; face recognition; the mirror neurons that enable us to empathize and physically reflect the affect and intentions we see in others; and many related topics. Kandel elsewhere describes the scientific evidence that creativity is nurtured by spells of relaxation, which foster a connection between conscious and unconscious cognition.
  • Zeki's message to art historians, aesthetic philosophers, and others who chafe at that idea is twofold. The more diplomatic pitch is that neuroaesthetics is different, complementary, and not oppositional to other forms of arts scholarship. But "the stick," as he puts it, is that if arts scholars "want to be taken seriously" by neurobiologists, they need to take advantage of the discoveries of the past half-century. If they don't, he says, "it's a bit like the guys who said to Galileo that we'd rather not look through your telescope."
  • Matthews, a co-author of The Bard on the Brain: Understanding the Mind Through the Art of Shakespeare and the Science of Brain Imaging (Dana Press, 2003), seems open to the elucidations that science and the humanities can cast on each other. The neural pathways of our aesthetic responses are "good explanations," he says. But "does one [type of] explanation supersede all the others? I would argue that they don't, because there's a fundamental disconnection still between ... explanations of neural correlates of conscious experience and conscious experience" itself.
  • There are, Matthews says, "certain kinds of problems that are fundamentally interesting to us as a species: What is love? What motivates us to anger?" Writers put their observations on such matters into idiosyncratic stories, psychologists conceive their observations in a more formalized framework, and neuroscientists like Zeki monitor them at the level of functional changes in the brain. All of those approaches to human experience "intersect," Matthews says, "but no one of them is the explanation."
  • "Conscious experience," he says, "is something we cannot even interrogate in ourselves adequately. What we're always trying to do in effect is capture the conscious experience of the last moment. ... As we think about it, we have no way of capturing more than one part of it."
  • Kandel sees art and art history as "parent disciplines" and psychology and brain science as "antidisciplines," to be drawn together in an E.O. Wilson-like synthesis toward "consilience as an attempt to open a discussion between restricted areas of knowledge." Kandel approvingly cites Stephen Jay Gould's wish for "the sciences and humanities to become the greatest of pals ... but to keep their ineluctably different aims and logics separate as they ply their joint projects and learn from each other."
nolan_delaney

How to be good at stress | ideas.ted.com - 0 views

  • He dedicated his career to identifying what distinguishes people who thrive under stress from those who are defeated by it. The ones who thrive, he concluded, are those who view stress as inevitable, and rather than try to avoid it, they look for ways to engage with it, adapt to it, and learn from it.
  • what is new is how psychology and neuroscience have begun to examine this truism. Research is beginning to reveal not only why stress helps us learn and grow, but also what makes some people more likely to experience these benefits.
  • But the stress response doesn't end when your heart stops pounding. Other stress hormones are released to help you recover from the challenge. These stress-recovery hormones include DHEA and nerve growth factor, both of which increase neuroplasticity. In other words, they help your brain learn from experience
  • DHEA is classified as a neurosteroid; in the same way that steroids help your body grow stronger from physical exercise, DHEA helps your brain grow stronger from psychological challenges. For several hours after you have a strong stress response, the brain is rewiring itself to remember and learn from the experience. Stress leaves an imprint on your brain that prepares you to handle similar stress the next time you encounter it.
  • Psychologists call the process of learning and growing from a difficult experience stress inoculation. Going through the experience gives your brain and body a kind of stress vaccine. This is why putting people through practice stress is a key training technique for NASA astronauts, Navy SEALS, emergency responders and elite athletes, and others who have to thrive under high levels of stress.
  • (This is part of what makes the science of stress so fascinating, and also so puzzling.)
  • Higher levels of cortisol have been associated with worse outcomes, such as impaired immune function and depression. In contrast, higher levels of DHEA—the neurosteroid—have been linked to reduced risk of anxiety, depression, heart disease, neurodegeneration and other diseases we typically think of as stress-related.
  • An important question, then, is: How do you influence your own — or somebody else’s — growth index?
  • This mindset can actually shift your stress physiology toward a state that makes such a positive outcome more likely, for example by increasing your growth index and reducing harmful side effects of stress such as inflammation.
  • Other studies confirm that viewing a stressful situation as an opportunity to improve your skills, knowledge or strengths makes it more likely that you will experience stress inoculation or stress-related growth. Once you appreciate that going through stress makes you better at it, it gets easier to face each new challenge. And the expectation of growth sends a signal to your brain and body: get ready to learn something, because you can handle this.
  • Good timing for an article about stress, considering we are taking exams this week. New physiology studies suggest that your brain releases a growth hormone after a stressful experience (that acts like a steroid for the brain) that temporarily increases your ability to learn. Interesting to think about just how this trait/hormone evolved...
Javier E

Thieves of experience: On the rise of surveillance capitalism - 1 views

  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience. To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
  • Social skills and relationships seem to suffer as well.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Google’s once-patient investors grew restive, demanding that the founders figure out a way to make money, preferably lots of it.
  • Under pressure, Page and Brin authorized the launch of an auction system for selling advertisements tied to search queries. The system was designed so that the company would get paid by an advertiser only when a user clicked on an ad. This feature gave Google a huge financial incentive to make accurate predictions about how users would respond to ads and other online content. Even tiny increases in click rates would bring big gains in income. And so the company began deploying its stores of behavioral data not for the benefit of users but to aid advertisers — and to juice its own profits. Surveillance capitalism had arrived.
  • Google’s business now hinged on what Zuboff calls “the extraction imperative.” To improve its predictions, it had to mine as much information as possible from web users. It aggressively expanded its online services to widen the scope of its surveillance.
  • Through Gmail, it secured access to the contents of people’s emails and address books. Through Google Maps, it gained a bead on people’s whereabouts and movements. Through Google Calendar, it learned what people were doing at different moments during the day and whom they were doing it with. Through Google News, it got a readout of people’s interests and political leanings. Through Google Shopping, it opened a window onto people’s wish lists,
  • The company gave all these services away for free to ensure they’d be used by as many people as possible. It knew the money lay in the data.
  • the organization grew insular and secretive. Seeking to keep the true nature of its work from the public, it adopted what its CEO at the time, Eric Schmidt, called a “hiding strategy” — a kind of corporate omerta backed up by stringent nondisclosure agreements.
  • Page and Brin further shielded themselves from outside oversight by establishing a stock structure that guaranteed their power could never be challenged, neither by investors nor by directors.
  • What’s most remarkable about the birth of surveillance capitalism is the speed and audacity with which Google overturned social conventions and norms about data and privacy. Without permission, without compensation, and with little in the way of resistance, the company seized and declared ownership over everyone’s information
  • The companies that followed Google presumed that they too had an unfettered right to collect, parse, and sell personal data in pretty much any way they pleased. In the smart homes being built today, it’s understood that any and all data will be beamed up to corporate clouds.
  • Google conducted its great data heist under the cover of novelty. The web was an exciting frontier — something new in the world — and few people understood or cared about what they were revealing as they searched and surfed. In those innocent days, data was there for the taking, and Google took it
  • Google also benefited from decisions made by lawmakers, regulators, and judges — decisions that granted internet companies free use of a vast taxpayer-funded communication infrastructure, relieved them of legal and ethical responsibility for the information and messages they distributed, and gave them carte blanche to collect and exploit user data.
  • Consider the terms-of-service agreements that govern the division of rights and the delegation of ownership online. Non-negotiable, subject to emendation and extension at the company’s whim, and requiring only a casual click to bind the user, TOS agreements are parodies of contracts, yet they have been granted legal legitimacy by the courts
  • Law professors, writes Zuboff, “call these ‘contracts of adhesion’ because they impose take-it-or-leave-it conditions on users that stick to them whether they like it or not.” Fundamentally undemocratic, the ubiquitous agreements helped Google and other firms commandeer personal data as if by fiat.
  • In the choices we make as consumers and private citizens, we have always traded some of our autonomy to gain other rewards. Many people, it seems clear, experience surveillance capitalism less as a prison, where their agency is restricted in a noxious way, than as an all-inclusive resort, where their agency is restricted in a pleasing way
  • Zuboff makes a convincing case that this is a short-sighted and dangerous view — that the bargain we’ve struck with the internet giants is a Faustian one
  • but her case would have been stronger still had she more fully addressed the benefits side of the ledger.
  • there’s a piece missing. While Zuboff’s assessment of the costs that people incur under surveillance capitalism is exhaustive, she largely ignores the benefits people receive in return — convenience, customization, savings, entertainment, social connection, and so on
  • What the industries of the future will seek to manufacture is the self.
  • Behavior modification is the thread that ties today’s search engines, social networks, and smartphone trackers to tomorrow’s facial-recognition systems, emotion-detection sensors, and artificial-intelligence bots.
  • All of Facebook’s information wrangling and algorithmic fine-tuning, she writes, “is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later.”
  • “The goal of everything we do is to change people’s actual behavior at scale,” a top Silicon Valley data scientist told her in an interview. “We can test how actionable our cues are for them and how profitable certain behaviors are for us.”
  • This goal, she suggests, is not limited to Facebook. It is coming to guide much of the economy, as financial and social power shifts to the surveillance capitalists
  • Combining rich information on individuals’ behavioral triggers with the ability to deliver precisely tailored and timed messages turns out to be a recipe for behavior modification on an unprecedented scale.
  • it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its “social graph” to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react.
  • spying on the populace is not the end game. The real prize lies in figuring out ways to use the data to shape how people think and act. “The best way to predict the future is to invent it,” the computer scientist Alan Kay once observed. And the best way to predict behavior is to script it.
  • competition for personal data intensified. It was no longer enough to monitor people online; making better predictions required that surveillance be extended into homes, stores, schools, workplaces, and the public squares of cities and towns. Much of the recent innovation in the tech industry has entailed the creation of products and services designed to vacuum up data from every corner of our lives
  • “The typical complaint is that privacy is eroded, but that is misleading,” Zuboff writes. “In the larger societal pattern, privacy is not eroded but redistributed . . . . Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism.” The transfer of decision rights is also a transfer of autonomy and agency, from the citizen to the corporation.
  • What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not
  • Other possible ways of organizing online markets, such as through paid subscriptions for apps and services, never even got a chance to be tested.
  • Online surveillance came to be viewed as normal and even necessary by politicians, government bureaucrats, and the general public
  • Google and other Silicon Valley companies benefited directly from the government’s new stress on digital surveillance. They earned millions through contracts to share their data collection and analysis techniques with the National Security Agency
  • As much as the dot-com crash, the horrors of 9/11 set the stage for the rise of surveillance capitalism. Zuboff notes that, in 2000, members of the Federal Trade Commission, frustrated by internet companies’ lack of progress in adopting privacy protections, began formulating legislation to secure people’s control over their online information and severely restrict the companies’ ability to collect and store it. It seemed obvious to the regulators that ownership of personal data should by default lie in the hands of private citizens, not corporations.
  • The 9/11 attacks changed the calculus. The centralized collection and analysis of online data, on a vast scale, came to be seen as essential to national security. “The privacy provisions debated just months earlier vanished from the conversation more or less overnight,”