
Javier E

Korean philosophy is built upon daily practice of good habits | Aeon Essays

  • ‘We are unknown, we knowers, ourselves to ourselves,’ wrote Friedrich Nietzsche at the beginning of On the Genealogy of Morals (1887).
  • This seeking after ourselves, however, is not something that is lacking in Buddhist and Confucian traditions – especially not in the case of Korean philosophy. Self-cultivation, central to the tradition, underscores that the onus is on the individual to develop oneself, without recourse to the divine or the supernatural
  • Korean philosophy is practical, while remaining agnostic to a large degree: recognising the spirit realm but highlighting that we ourselves take charge of our lives by taking charge of our minds
  • ...36 more annotations...
  • The word for ‘philosophy’ in Korean is 철학, pronounced ch’ŏrhak. It literally means the ‘study of wisdom’ or, perhaps better, ‘how to become wise’, which reflects its more dynamic and proactive implications
  • At night, in the darkness of the cave, he drank water from a perfectly useful ‘bowl’. But when he could see properly, he found that there was no ‘bowl’ at all, only a disgusting human skull.
  • Our lives and minds are affected by others (and their actions), as others (and their minds) are affected by our actions. This is particularly true in the Korean application of Confucian and Buddhist ideas.
  • Wŏnhyo understood that how we think about things shapes their very existence – and in turn our own existence, which is constructed according to our thoughts.
  • In the Korean tradition of philosophy, human beings are social beings, therefore knowing how to interact with others is an essential part of living a good life – indeed, living well with others is our real contribution to human life
  • he realised that there isn’t a difference between the ‘bowl’ and the skull: the only difference lies with us and our perceptions. We interpret our lives through a continual stream of thoughts, and so we become what we think, or rather how we think
  • As our daily lives are shaped by our thoughts, so our experience of this reality is good or bad – depending on our thoughts – which make things ‘appear’ good or bad because, in ‘reality’, things in and of themselves are devoid of their own independent nature
  • We can take from Wŏnhyo the idea that, if you change the patterns that have become engrained in how you think, you will begin to live differently. To do this, you need to change your mental habits, which is why meditation and mindful awareness can help. And this needs to be practised every day
  • Wŏnhyo’s most important work is titled Awaken your Mind and Practice (in Korean, Palsim suhaeng-jang). It is an explicit call to younger adherents to put Buddhist ideas into practice, and an indirect warning not to get lost in contemplation or in the study of texts
  • While Wŏnhyo had emphasised the mind and the need to ‘practise’ Buddhism, a later Korean monk, Chinul (1158-1210), spearheaded Sŏn, the meditational tradition in Korea that espoused the idea of ‘sudden enlightenment’ that alerts the mind, accompanied by ‘gradual cultivation’
  • we still need to practise meditation, for if not we can easily fall into our old ways even if our minds have been awakened
  • his greatest contribution to Sŏn is Secrets on Cultivating the Mind (Susim kyŏl). This text outlines in detail his teachings on sudden awakening followed by the need for gradual cultivation
  • Chinul’s approach recognises the mind as the ‘essence’ of one’s Buddha nature (contained in the mind, which is inherently good), while continual practice and cultivation aids in refining its ‘function’ – this is the origin of the ‘essence-function’ concept that has since become central to Korean philosophy.
  • These ideas also influenced the reformed view of Confucianism that became linked with the mind and other metaphysical ideas, finally becoming known as Neo-Confucianism.
  • During the Chosŏn dynasty (1392-1910), the longest lasting in East Asian history, Neo-Confucianism became integrated into society at all levels through rituals for marriage, funerals and ancestors
  • Neo-Confucianism recognises that we as individuals exist through plural relationships with responsibilities to others (as a child, brother/sister, lover, husband/wife, parent, teacher/student and so on), an idea nicely captured in 2000 by the French philosopher Jean-Luc Nancy when he described our ‘being’ as ‘singular plural’
  • Corrupt interpretations of Confucianism by heteronormative men have historically championed these ideas in terms of vertical relationships rather than as a reciprocal set of benevolent social interactions, meaning that women have suffered greatly as a result.
  • Setting aside these sexist and self-serving interpretations, Confucianism emphasises that society works as an interconnected set of complementary reciprocal relationships that should be beneficial to all parties within a social system
  • Confucian relationships have the potential to offer us an example of effective citizenship, similar to that outlined by Cicero, where the good of the republic or state is at the centre of being a good citizen
  • There is a general consensus in Korean philosophy that we have an innate sociability and therefore should have a sense of duty to each other and to practise virtue.
  • The main virtue of Confucianism is the idea of ‘humanity’, coming from the Chinese character 仁, often left untranslated and written as ren and pronounced in Korean as in.
  • It is a combination of the character for a human being and the number two. In other words, it signifies what (inter)connects two people, or rather how they should interact in a humane or benevolent manner to each other. This character therefore highlights the link between people while emphasising that the most basic thing that makes us ‘human’ is our interaction with others.
  • Neo-Confucianism adopted a turn towards a more mind-centred view in the writings of the Korean scholar Yi Hwang, known by his pen name T’oegye (1501-70), who appears on the 1,000-won note. He greatly influenced Neo-Confucianism in Japan through his formidable text, Ten Diagrams on Sage Learning (Sŏnghak sipto), composed in 1568, which was one of the most-reproduced texts of the entire Chosŏn dynasty and represents the synthesis of Neo-Confucian thought in Korea
  • with commentaries that elucidate the moral principles of Confucianism, related to the cardinal relationships and education. It also embodies T’oegye’s own development of moral psychology through his focus on the mind, and illuminates the importance of teaching and the practice of self-cultivation.
  • He writes that we ourselves can transform the unrestrained mind and its desires, and achieve sagehood, if we take the arduous, gradual path of self-cultivation centred on the mind.
  • Confucians had generally accepted the Mencian idea that human nature was embodied in the unaroused state of the mind, before it was shaped by its environment. The mind in its unaroused state was taken to be theoretically good. However, this inborn tendency for goodness is always in danger of being reduced to passivity, unless you cultivate yourself as a person of ‘humanity’ (in the Confucian sense mentioned above).
  • You should constantly try to activate your humanity to allow the unhampered operation of the original mind to manifest itself through socially responsible and moral character in action
  • Humanity is the realisation of what I describe as our ‘optimum level of perfection’ that exists in an inherent stage of potentiality due to our innate good nature
  • This, in a sense, is like the Buddha nature of the Buddhists, which suggests we are already enlightened and just need to recover our innate mental state. Both philosophies are hopeful: humans are born good with the potential to correct their own flaws and failures
  • this could hardly contrast any more greatly with the Christian doctrine of original sin
  • The seventh diagram in T’oegye’s text is entitled ‘The Diagram of the Explanation of Humanity’ (Insŏl-to). Here he warns how one’s good inborn nature may become impaired, hampering the operation of the original mind and negatively impacting our character in action. Humanity embodies the gradual realisation of our optimum level of perfection that already exists in our mind but that depends on how we think about things and how we relate that to others in a social context
  • For T’oegye, the key to maintaining our capacity to remain level-headed, and to control our impulses and emotions, was kyŏng. This term is often translated as ‘seriousness’, occasionally ‘mindfulness’, and it identifies the serious need for constant effort to control one’s mind in order to go about one’s life in a healthy manner
  • For T’oegye, mindfulness is as serious as meditation is for the Buddhists. In fact, the Neo-Confucians had their own meditational practice of ‘quiet-sitting’ (chŏngjwa), which focused on recovering the calm, unagitated ‘original mind’ before putting our daily plans into action
  • These diagrams reinforce this need for a daily practice of Confucian mindfulness, because practice leads to the ‘good habit’ of creating (and maintaining) routines. There is no short-cut provided, no weekend intro to this practice: it is life-long, and that is what makes it transformative, leading us to become better versions of who we were in the beginning. This is the consolation of Korean philosophy.
  • Seeing the world as it is can steer us away from making unnecessary mistakes, while highlighting what is good and how to maintain that good while also reducing anxiety from an agitated mind and harmful desires. This is why Korean philosophy can provide us with consolation; it recognises the bad, but prioritises the good, providing several moral pathways that are referred to in the East Asian traditions (Confucianism, Buddhism and Daoism) as modes of ‘self-cultivation’
  • As social beings, we penetrate the consciousness of others, and so humans are linked externally through conduct but also internally through thought. Humanity is a unifying approach that holds the potential to solve human problems, internally and externally, as well as help people realise the perfection that is innately theirs
Javier E

At the Existentialist Café: Freedom, Being, and Apricot Cocktails with Jean-P...

  • The phenomenologists’ leading thinker, Edmund Husserl, provided a rallying cry, ‘To the things themselves!’ It meant: don’t waste time on the interpretations that accrue upon things, and especially don’t waste time wondering whether the things are real. Just look at this that’s presenting itself to you, whatever this may be, and describe it as precisely as possible.
  • You might think you have defined me by some label, but you are wrong, for I am always a work in progress. I create myself constantly through action, and this is so fundamental to my human condition that, for Sartre, it is the human condition, from the moment of first consciousness to the moment when death wipes it out. I am my own freedom: no more, no less.
  • Sartre wrote like a novelist — not surprisingly, since he was one. In his novels, short stories and plays as well as in his philosophical treatises, he wrote about the physical sensations of the world and the structures and moods of human life. Above all, he wrote about one big subject: what it meant to be free. Freedom, for him, lay at the heart of all human experience, and this set humans apart from all other kinds of object.
  • ...97 more annotations...
  • Sartre listened to his problem and said simply, ‘You are free, therefore choose — that is to say, invent.’ No signs are vouchsafed in this world, he said. None of the old authorities can relieve you of the burden of freedom. You can weigh up moral or practical considerations as carefully as you like, but ultimately you must take the plunge and do something, and it’s up to you what that something is.
  • Even if the situation is unbearable — perhaps you are facing execution, or sitting in a Gestapo prison, or about to fall off a cliff — you are still free to decide what to make of it in mind and deed. Starting from where you are now, you choose. And in choosing, you also choose who you will be.
  • The war had made people realise that they and their fellow humans were capable of departing entirely from civilised norms; no wonder the idea of a fixed human nature seemed questionable.
  • If this sounds difficult and unnerving, it’s because it is. Sartre does not deny that the need to keep making decisions brings constant anxiety. He heightens this anxiety by pointing out that what you do really matters. You should make your choices as though you were choosing on behalf of the whole of humanity, taking the entire burden of responsibility for how the human race behaves. If you avoid this responsibility by fooling yourself that you are the victim of circumstance or of someone else’s bad advice, you are failing to meet the demands of human life and choosing a fake existence, cut off from your own ‘authenticity’.
  • Along with the terrifying side of this comes a great promise: Sartre’s existentialism implies that it is possible to be authentic and free, as long as you keep up the effort.
  • almost all agreed that it was, as an article in Les nouvelles littéraires phrased it, a ‘sickening mixture of philosophic pretentiousness, equivocal dreams, physiological technicalities, morbid tastes and hesitant eroticism … an introspective embryo that one would take distinct pleasure in crushing’.
  • he offered a philosophy designed for a species that had just scared the hell out of itself, but that finally felt ready to grow up and take responsibility.
  • In this rebellious world, just as with the Parisian bohemians and Dadaists in earlier generations, everything that was dangerous and provocative was good, and everything that was nice or bourgeois was bad.
  • Such interweaving of ideas and life had a long pedigree, although the existentialists gave it a new twist. Stoic and Epicurean thinkers in the classical world had practised philosophy as a means of living well, rather than of seeking knowledge or wisdom for their own sake. By reflecting on life’s vagaries in philosophical ways, they believed they could become more resilient, more able to rise above circumstances, and better equipped to manage grief, fear, anger, disappointment or anxiety.
  • In the tradition they passed on, philosophy is neither a pure intellectual pursuit nor a collection of cheap self-help tricks, but a discipline for flourishing and living a fully human, responsible life.
  • For Kierkegaard, Descartes had things back to front. In his own view, human existence comes first: it is the starting point for everything we do, not the result of a logical deduction. My existence is active: I live it and choose it, and this precedes any statement I can make about myself.
  • Studying our own moral genealogy cannot help us to escape or transcend ourselves. But it can enable us to see our illusions more clearly and lead a more vital, assertive existence.
  • What was needed, he felt, was not high moral or theological ideals, but a deeply critical form of cultural history or ‘genealogy’ that would uncover the reasons why we humans are as we are, and how we came to be that way. For him, all philosophy could even be redefined as a form of psychology, or history.
  • For those oppressed on grounds of race or class, or for those fighting against colonialism, existentialism offered a change of perspective — literally, as Sartre proposed that all situations be judged according to how they appeared in the eyes of those most oppressed, or those whose suffering was greatest.
  • She observed that we need not expect moral philosophers to ‘live by’ their ideas in a simplistic way, as if they were following a set of rules. But we can expect them to show how their ideas are lived in. We should be able to look in through the windows of a philosophy, as it were, and see how people occupy it, how they move about and how they conduct themselves.
  • the existentialists inhabited their historical and personal world, as they inhabited their ideas. This notion of ‘inhabited philosophy’ is one I’ve borrowed from the English philosopher and novelist Iris Murdoch, who wrote the first full-length book on Sartre and was an early adopter of existentialism
  • What is existentialism anyway?
  • An existentialist who is also phenomenological provides no easy rules for dealing with this condition, but instead concentrates on describing lived experience as it presents itself. — By describing experience well, he or she hopes to understand this existence and awaken us to ways of living more authentic lives.
  • Existentialists concern themselves with individual, concrete human existence. — They consider human existence different from the kind of being other things have. Other entities are what they are, but as a human I am whatever I choose to make of myself at every moment. I am free — and therefore I’m responsible for everything I do, a dizzying fact which causes — an anxiety inseparable from human existence itself.
  • On the other hand, I am only free within situations, which can include factors in my own biology and psychology as well as physical, historical and social variables of the world into which I have been thrown. — Despite the limitations, I always want more: I am passionately involved in personal projects of all kinds. — Human existence is thus ambiguous: at once boxed in by borders and yet transcendent and exhilarating. —
  • The first part of this is straightforward: a phenomenologist’s job is to describe. This is the activity that Husserl kept reminding his students to do. It meant stripping away distractions, habits, clichés of thought, presumptions and received ideas, in order to return our attention to what he called the ‘things themselves’. We must fix our beady gaze on them and capture them exactly as they appear, rather than as we think they are supposed to be.
  • Husserl therefore says that, to phenomenologically describe a cup of coffee, I should set aside both the abstract suppositions and any intrusive emotional associations. Then I can concentrate on the dark, fragrant, rich phenomenon in front of me now. This ‘setting aside’ or ‘bracketing out’ of speculative add-ons Husserl called epoché — a term borrowed from the ancient Sceptics,
  • The point about rigour is crucial; it brings us back to the first half of the command to describe phenomena. A phenomenologist cannot get away with listening to a piece of music and saying, ‘How lovely!’ He or she must ask: is it plaintive? is it dignified? is it colossal and sublime? The point is to keep coming back to the ‘things themselves’ — phenomena stripped of their conceptual baggage — so as to bail out weak or extraneous material and get to the heart of the experience.
  • Husserlian ‘bracketing out’ or epoché allows the phenomenologist to temporarily ignore the question ‘But is it real?’, in order to ask how a person experiences his or her world. Phenomenology gives a formal mode of access to human experience. It lets philosophers talk about life more or less as non-philosophers do, while still being able to tell themselves they are being methodical and rigorous.
  • Besides claiming to transform the way we think about reality, phenomenologists promised to change how we think about ourselves. They believed that we should not try to find out what the human mind is, as if it were some kind of substance. Instead, we should consider what it does, and how it grasps its experiences.
  • For Brentano, this reaching towards objects is what our minds do all the time. Our thoughts are invariably of or about something, he wrote: in love, something is loved, in hatred, something is hated, in judgement, something is affirmed or denied. Even when I imagine an object that isn’t there, my mental structure is still one of ‘about-ness’ or ‘of-ness’.
  • Except in deepest sleep, my mind is always engaged in this aboutness: it has ‘intentionality’. Having taken the germ of this from Brentano, Husserl made it central to his whole philosophy.
  • Husserl saw in the idea of intentionality a way to sidestep two great unsolved puzzles of philosophical history: the question of what objects ‘really’ are, and the question of what the mind ‘really’ is. By doing the epoché and bracketing out all consideration of reality from both topics, one is freed to concentrate on the relationship in the middle. One can apply one’s descriptive energies to the endless dance of intentionality that takes place in our lives: the whirl of our minds as they seize their intended phenomena one after the other and whisk them around the floor,
  • Understood in this way, the mind hardly is anything at all: it is its aboutness. This makes the human mind (and possibly some animal minds) different from any other naturally occurring entity. Nothing else can be as thoroughly about or of things as the mind is:
  • Some Eastern meditation techniques aim to still this scurrying creature, but the extreme difficulty of this shows how unnatural it is to be mentally inert. Left to itself, the mind reaches out in all directions as long as it is awake — and even carries on doing it in the dreaming phase of its sleep.
  • a mind that is experiencing nothing, imagining nothing, or speculating about nothing can hardly be said to be a mind at all.
  • Three simple ideas — description, phenomenon, intentionality — provided enough inspiration to keep roomfuls of Husserlian assistants busy in Freiburg for decades. With all of human existence awaiting their attention, how could they ever run out of things to do?
  • For Sartre, this gives the mind an immense freedom. If we are nothing but what we think about, then no predefined ‘inner nature’ can hold us back. We are protean.
  • Real, not real; inside, outside; what difference did it make? Reflecting on this, Husserl began turning his phenomenology into a branch of ‘idealism’ — the philosophical tradition which denied external reality and defined everything as a kind of private hallucination.
  • For Sartre, if we try to shut ourselves up inside our own minds, ‘in a nice warm room with the shutters closed’, we cease to exist. We have no cosy home: being out on the dusty road is the very definition of what we are.
  • One might think that, if Heidegger had anything worth saying, he could have communicated it in ordinary language. The fact is that he does not want to be ordinary, and he may not even want to communicate in the usual sense. He wants to make the familiar obscure, and to vex us. George Steiner thought that Heidegger’s purpose was less to be understood than to be experienced through a ‘felt strangeness’.
  • He takes Dasein in its most ordinary moments, then talks about it in the most innovative way he can. For Heidegger, Dasein’s everyday Being is right here: it is Being-in-the-world, or In-der-Welt-sein. The main feature of Dasein’s everyday Being-in-the-world right here is that it is usually busy doing something.
  • Thus, for Heidegger, all Being-in-the-world is also a ‘Being-with’ or Mitsein. We cohabit with others in a ‘with-world’, or Mitwelt. The old philosophical problem of how we prove the existence of other minds has now vanished. Dasein swims in the with-world long before it wonders about other minds.
  • Sometimes the best-educated people were those least inclined to take the Nazis seriously, dismissing them as too absurd to last. Karl Jaspers was one of those who made this mistake, as he later recalled, and Beauvoir observed similar dismissive attitudes among the French students in Berlin.
  • In any case, most of those who disagreed with Hitler’s ideology soon learned to keep their view to themselves. If a Nazi parade passed on the street, they would either slip out of view or give the obligatory salute like everyone else, telling themselves that the gesture meant nothing if they did not believe in it. As the psychologist Bruno Bettelheim later wrote of this period, few people will risk their life for such a small thing as raising an arm — yet that is how one’s powers of resistance are eroded away, and eventually one’s responsibility and integrity go with them.
  • for Arendt, if you do not respond adequately when the times demand it, you show a lack of imagination and attention that is as dangerous as deliberately committing an abuse. It amounts to disobeying the one command she had absorbed from Heidegger in those Marburg days: Think!
  • ‘Everything takes place under a kind of anaesthesia. Objectively dreadful events produce a thin, puny emotional response. Murders are committed like schoolboy pranks. Humiliation and moral decay are accepted like minor incidents.’ Haffner thought modernity itself was partly to blame: people had become yoked to their habits and to mass media, forgetting to stop and think, or to disrupt their routines long enough to question what was going on.
  • Heidegger’s former lover and student Hannah Arendt would argue, in her 1951 study The Origins of Totalitarianism, that totalitarian movements thrived at least partly because of this fragmentation in modern lives, which made people more vulnerable to being swept away by demagogues. Elsewhere, she coined the phrase ‘the banality of evil’ to describe the most extreme failures of personal moral awareness.
  • His communicative ideal fed into a whole theory of history: he traced all civilisation to an ‘Axial Period’ in the fifth century BC, during which philosophy and culture exploded simultaneously in Europe, the Middle East and Asia, as though a great bubble of minds had erupted from the earth’s surface. ‘True philosophy needs communion to come into existence,’ he wrote, and added, ‘Uncommunicativeness in a philosopher is virtually a criterion of the untruth of his thinking.’
  • The idea of being called to authenticity became a major theme in later existentialism, the call being interpreted as saying something like ‘Be yourself!’, as opposed to being phony. For Heidegger, the call is more fundamental than that. It is a call to take up a self that you didn’t know you had: to wake up to your Being. Moreover, it is a call to action. It requires you to do something: to take a decision of some sort.
  • Being and Time contained at least one big idea that should have been of use in resisting totalitarianism. Dasein, Heidegger wrote there, tends to fall under the sway of something called das Man or ‘the they’ — an impersonal entity that robs us of the freedom to think for ourselves. To live authentically requires resisting or outwitting this influence, but this is not easy because das Man is so nebulous. Man in German does not mean ‘man’ as in English (that’s der Mann), but a neutral abstraction, something like ‘one’ in the English phrase ‘one doesn’t do that’,
  • for Heidegger, das Man is me. It is everywhere and nowhere; it is nothing definite, but each of us is it. As with Being, it is so ubiquitous that it is difficult to see. If I am not careful, however, das Man takes over the important decisions that should be my own. It drains away my responsibility or ‘answerability’. As Arendt might put it, we slip into banality, failing to think.
  • Jaspers focused on what he called Grenzsituationen — border situations, or limit situations. These are the moments when one finds oneself constrained or boxed in by what is happening, but at the same time pushed by these events towards the limits or outer edge of normal experience. For example, you might have to make a life-or-death choice, or something might remind you suddenly of your mortality,
  • Jaspers’ interest in border situations probably had much to do with his own early confrontation with mortality. From childhood, he had suffered from a heart condition so severe that he always expected to die at any moment. He also had emphysema, which forced him to speak slowly, taking long pauses to catch his breath. Both illnesses meant that he had to budget his energies with care in order to get his work done without endangering his life.
  • If I am to resist das Man, I must become answerable to the call of my ‘voice of conscience’. This call does not come from God, as a traditional Christian definition of the voice of conscience might suppose. It comes from a truly existentialist source: my own authentic self. Alas, this voice is one I do not recognise and may not hear, because it is not the voice of my habitual ‘they-self’. It is an alien or uncanny version of my usual voice. I am familiar with my they-self, but not with my unalienated voice — so, in a weird twist, my real voice is the one that sounds strangest to me.
  • Marcel developed a strongly theological branch of existentialism. His faith distanced him from both Sartre and Heidegger, but he shared a sense of how history makes demands on individuals. In his essay ‘On the Ontological Mystery’, written in 1932 and published in the fateful year of 1933, Marcel wrote of the human tendency to become stuck in habits, received ideas, and a narrow-minded attachment to possessions and familiar scenes. Instead, he urged his readers to develop a capacity for remaining ‘available’ to situations as they arise. Similar ideas of disponibilité or availability had been explored by other writers,
  • Marcel made it his central existential imperative. He was aware of how rare and difficult it was. Most people fall into what he calls ‘crispation’: a tensed, encrusted shape in life — ‘as though each one of us secreted a kind of shell which gradually hardened and imprisoned him’.
  • Bettelheim later observed that, under Nazism, only a few people realised at once that life could not continue unaltered: these were the ones who got away quickly. Bettelheim himself was not among them. Caught in Austria when Hitler annexed it, he was sent first to Dachau and then to Buchenwald, but was then released in a mass amnesty to celebrate Hitler’s birthday in 1939 — an extraordinary reprieve, after which he left at once for America.
  • we are used to reading philosophy as offering a universal message for all times and places — or at least as aiming to do so. But Heidegger disliked the notion of universal truths or universal humanity, which he considered a fantasy. For him, Dasein is not defined by shared faculties of reason and understanding, as the Enlightenment philosophers thought. Still less is it defined by any kind of transcendent eternal soul, as in religious tradition. We do not exist on a higher, eternal plane at all. Dasein’s Being is local: it has a historical situation, and is constituted in time and place.
  • For Marcel, learning to stay open to reality in this way is the philosopher’s prime job. Everyone can do it, but the philosopher is the one who is called on above all to stay awake, so as to be the first to sound the alarm if something seems wrong.
  • Second, it also means understanding that we are historical beings, and grasping the demands our particular historical situation is making on us. In what Heidegger calls ‘anticipatory resoluteness’, Dasein discovers ‘that its uttermost possibility lies in giving itself up’. At that moment, through Being-towards-death and resoluteness in facing up to one’s time, one is freed from the they-self and attains one’s true, authentic self.
  • If we are temporal beings by our very nature, then authentic existence means accepting, first, that we are finite and mortal. We will die: this all-important realisation is what Heidegger calls authentic ‘Being-towards-Death’, and it is fundamental to his philosophy.
  • Hannah Arendt, instead, left early on: she had the benefit of a powerful warning. Just after the Nazi takeover, in spring 1933, she had been arrested while researching materials on anti-Semitism for the German Zionist Organisation at Berlin’s Prussian State Library. Her apartment was searched; both she and her mother were locked up briefly, then released. They fled, without stopping to arrange travel documents. They crossed to Czechoslovakia (then still safe) by a method that sounds almost too fabulous to be true: a sympathetic German family on the border had a house with its front door in Germany and its back door in Czechoslovakia. The family would invite people for dinner, then let them leave through the back door at night.
  • As Sartre argued in his 1943 review of The Stranger, basic phenomenological principles show that experience comes to us already charged with significance. A piano sonata is a melancholy evocation of longing. If I watch a soccer match, I see it as a soccer match, not as a meaningless scene in which a number of people run around taking turns to apply their lower limbs to a spherical object. If the latter is what I’m seeing, then I am not watching some more essential, truer version of soccer; I am failing to watch it properly as soccer at all.
  • Much as they liked Camus personally, neither Sartre nor Beauvoir accepted his vision of absurdity. For them, life is not absurd, even when viewed on a cosmic scale, and nothing can be gained by saying it is. Life for them is full of real meaning, although that meaning emerges differently for each of us.
  • For Sartre, we show bad faith whenever we portray ourselves as passive creations of our race, class, job, history, nation, family, heredity, childhood influences, events, or even hidden drives in our subconscious which we claim are out of our control. It is not that such factors are unimportant: class and race, in particular, he acknowledged as powerful forces in people’s lives, and Simone de Beauvoir would soon add gender to that list.
  • Sartre takes his argument to an extreme point by asserting that even war, imprisonment or the prospect of imminent death cannot take away my existential freedom. They form part of my ‘situation’, and this may be an extreme and intolerable situation, but it still provides only a context for whatever I choose to do next. If I am about to die, I can decide how to face that death. Sartre here resurrects the ancient Stoic idea that I may not choose what happens to me, but I can choose what to make of it, spiritually speaking.
  • But the Stoics cultivated indifference in the face of terrible events, whereas Sartre thought we should remain passionately, even furiously engaged with what happens to us and with what we can achieve. We should not expect freedom to be anything less than fiendishly difficult.
  • Freedom does not mean entirely unconstrained movement, and it certainly does not mean acting randomly. We often mistake the very things that enable us to be free — context, meaning, facticity, situation, a general direction in our lives — for things that define us and take away our freedom. It is only with all of these that we can be free in a real sense.
  • Nor did he mean that privileged groups have the right to pontificate to the poor and downtrodden about the need to ‘take responsibility’ for themselves. That would be a grotesque misreading of Sartre’s point, since his sympathy in any encounter always lay with the more oppressed side. But for each of us — for me — to be in good faith means not making excuses for myself.
  • Camus’ novel gives us a deliberately understated vision of heroism and decisive action compared to those of Sartre and Beauvoir. One can only do so much. It can look like defeatism, but it shows a more realistic perception of what it takes to actually accomplish difficult tasks like liberating one’s country.
  • Camus just kept returning to his core principle: no torture, no killing — at least not with state approval. Beauvoir and Sartre believed they were taking a more subtle and more realistic view. If asked why a couple of innocuous philosophers had suddenly become so harsh, they would have said it was because the war had changed them in profound ways. It had shown them that one’s duties to humanity could be more complicated than they seemed. ‘The war really divided my life in two,’ Sartre said later.
  • Poets and artists ‘let things be’, but they also let things come out and show themselves. They help to ease things into ‘unconcealment’ (Unverborgenheit), which is Heidegger’s rendition of the Greek term alētheia, usually translated as ‘truth’. This is a deeper kind of truth than the mere correspondence of a statement to reality, as when we say ‘The cat is on the mat’ and point to a mat with a cat on it. Long before we can do this, both cat and mat must ‘stand forth out of concealedness’. They must un-hide themselves.
  • Heidegger does not use the word ‘consciousness’ here because — as with his earlier work — he is trying to make us think in a radically different way about ourselves. We are not to think of the mind as an empty cavern, or as a container filled with representations of things. We are not even supposed to think of it as firing off arrows of intentional ‘aboutness’, as in the earlier phenomenology of Brentano. Instead, Heidegger draws us into the depths of his Schwarzwald, and asks us to imagine a gap with sunlight filtering in. We remain in the forest, but we provide a relatively open spot where other beings can bask for a moment. If we did not do this, everything would remain in the thickets, hidden even to itself.
  • The astronomer Carl Sagan began his 1980 television series Cosmos by saying that human beings, though made of the same stuff as the stars, are conscious and are therefore ‘a way for the cosmos to know itself’. Merleau-Ponty similarly quoted his favourite painter Cézanne as saying, ‘The landscape thinks itself in me, and I am its consciousness.’ This is something like what Heidegger thinks humanity contributes to the earth. We are not made of spiritual nothingness; we are part of Being, but we also bring something unique with us. It is not much: a little open space, perhaps with a path and a bench like the one the young Heidegger used to sit on to do his homework. But through us, the miracle occurs.
  • Beauty aside, Heidegger’s late writing can also be troubling, with its increasingly mystical notion of what it is to be human. If one speaks of a human being mainly as an open space or a clearing, or a means of ‘letting beings be’ and dwelling poetically on the earth, then one doesn’t seem to be talking about any recognisable person. The old Dasein has become less human than ever. It is now a forestry feature.
  • Even today, Jaspers, the dedicated communicator, is far less widely read than Heidegger, who has influenced architects, social theorists, critics, psychologists, artists, film-makers, environmental activists, and innumerable students and enthusiasts — including the later deconstructionist and post-structuralist schools, which took their starting point from his late thinking. Having spent the late 1940s as an outsider and then been rehabilitated, Heidegger became the overwhelming presence in university philosophy all over the European continent from then on.
  • As Levinas reflected on this experience, it helped to lead him to a philosophy that was essentially ethical, rather than ontological like Heidegger’s. He developed his ideas from the work of Jewish theologian Martin Buber, whose I and Thou in 1923 had distinguished between my relationship with an impersonal ‘it’ or ‘them’, and the direct personal encounter I have with a ‘you’. Levinas took it further: when I encounter you, we normally meet face-to-face, and it is through your face that you, as another person, can make ethical demands on me. This is very different from Heidegger’s Mitsein or Being-with, which suggests a group of people standing alongside one another, shoulder to shoulder as if in solidarity — perhaps as a unified nation or Volk.
  • For Levinas, we literally face each other, one individual at a time, and that relationship becomes one of communication and moral expectation. We do not merge; we respond to one another. Instead of being co-opted into playing some role in my personal drama of authenticity, you look me in the eyes — and you remain Other. You remain you.
  • This relationship is more fundamental than the self, more fundamental than consciousness, more fundamental even than Being — and it brings an unavoidable ethical obligation. Ever since Husserl, phenomenologists and existentialists had been trying to stretch the definition of existence to incorporate our social lives and relationships. Levinas did more: he turned philosophy around entirely so that these relationships were the foundation of our existence, not an extension of it.
  • Her last work, The Need for Roots, argues, among other things, that none of us has rights, but each one of us has a near-infinite degree of duty and obligation to the other. Whatever the underlying cause of her death — and anorexia nervosa seems to have been involved — no one could deny that she lived out her philosophy with total commitment. Of all the lives touched on in this book, hers is surely the most profound and challenging application of Iris Murdoch’s notion that a philosophy can be ‘inhabited’.
  • Other thinkers took radical ethical turns during the war years. The most extreme was Simone Weil, who actually tried to live by the principle of putting other people’s ethical demands first. Having returned to France after her travels through Germany in 1932, she had worked in a factory so as to experience the degrading nature of such work for herself. When France fell in 1940, her family fled to Marseilles (against her protests), and later to the US and to Britain. Even in exile, Weil made extraordinary sacrifices. If there were people in the world who could not sleep in a bed, she would not do so either, so she slept on the floor.
  • The mystery tradition had roots in Kierkegaard’s ‘leap of faith’. It owed much to the other great nineteenth-century mystic of the impossible, Dostoevsky, and to older theological notions. But it also grew from the protracted trauma that was the first half of the twentieth century. Since 1914, and especially since 1939, people in Europe and elsewhere had come to the realisation that we cannot fully know or trust ourselves; that we have no excuses or explanations for what we do — and yet that we must ground our existence and relationships on something firm, because otherwise we cannot survive.
  • One striking link between these radical ethical thinkers, all on the fringes of our main story, is that they had religious faith. They also granted a special role to the notion of ‘mystery’ — that which cannot be known, calculated or understood, especially when it concerns our relationships with each other. Heidegger was different from them, since he rejected the religion he grew up with and had no real interest in ethics — probably as a consequence of his having no real interest in the human.
  • Meanwhile, the Christian existentialist Gabriel Marcel was also still arguing, as he had since the 1930s, that ethics trumps everything else in philosophy and that our duty to each other is so great as to play the role of a transcendent ‘mystery’. He too had been led to this position partly by a wartime experience: during the First World War he had worked for the Red Cross’ Information Service, with the unenviable job of answering relatives’ inquiries about missing soldiers. Whenever news came, he passed it on, and usually it was not good. As Marcel later said, this task permanently inoculated him against warmongering rhetoric of any kind, and it made him aware of the power of what is unknown in our lives.
  • As the play’s much-quoted and frequently misunderstood final line has it: ‘Hell is other people.’ Sartre later explained that he did not mean to say that other people were hellish in general. He meant that after death we become frozen in their view, unable any longer to fend off their interpretation. In life, we can still do something to manage the impression we make; in death, this freedom goes and we are left entombed in other people’s memories and perceptions.
  • We have to do two near-impossible things at once: understand ourselves as limited by circumstances, and yet continue to pursue our projects as though we are truly in control. In Beauvoir’s view, existentialism is the philosophy that best enables us to do this, because it concerns itself so deeply with both freedom and contingency. It acknowledges the radical and terrifying scope of our freedom in life, but also the concrete influences that other philosophies tend to ignore: history, the body, social relationships and the environment.
  • The aspects of our existence that limit us, Merleau-Ponty says, are the very same ones that bind us to the world and give us scope for action and perception. They make us what we are. Sartre acknowledged the need for this trade-off, but he found it more painful to accept. Everything in him longed to be free of bonds, of impediments and limitations
  • Of course we have to learn this skill of interpreting and anticipating the world, and this happens in early childhood, which is why Merleau-Ponty thought child psychology was essential to philosophy. This is an extraordinary insight. Apart from Rousseau, very few philosophers before him had taken childhood seriously; most wrote as though all human experience were that of a fully conscious, rational, verbal adult who has been dropped into this world from the sky — perhaps by a stork.
  • For Merleau-Ponty, we cannot understand our experience if we don’t think of ourselves in part as overgrown babies. We fall for optical illusions because we once learned to see the world in terms of shapes, objects and things relevant to our own interests. Our first perceptions came to us in tandem with our first active experiments in observing the world and reaching out to explore it, and are still linked with those experiences.
  • Another factor in all of this, for Merleau-Ponty, is our social existence: we cannot thrive without others, or not for long, and we need this especially in early life. This makes solipsistic speculation about the reality of others ridiculous; we could never engage in such speculation if we hadn’t already been formed by them.
  • As Descartes could have said (but didn’t), ‘I think, therefore other people exist.’ We grow up with people playing with us, pointing things out, talking, listening, and getting us used to reading emotions and movements; this is how we become capable, reflective, smoothly integrated beings.
  • In general, Merleau-Ponty thinks human experience only makes sense if we abandon philosophy’s time-honoured habit of starting with a solitary, capsule-like, immobile adult self, isolated from its body and world, which must then be connected up again — adding each element around it as though adding clothing to a doll. Instead, for him, we slide from the womb to the birth canal to an equally close and total immersion in the world. That immersion continues as long as we live, although we may also cultivate the art of partially withdrawing from time to time when we want to think or daydream.
  • When he looks for his own metaphor to describe how he sees consciousness, he comes up with a beautiful one: consciousness, he suggests, is like a ‘fold’ in the world, as though someone had crumpled a piece of cloth to make a little nest or hollow. It stays for a while, before eventually being unfolded and smoothed away. There is something seductive, even erotic, in this idea of my conscious self as an improvised pouch in the cloth of the world. I still have my privacy — my withdrawing room. But I am part of the world’s fabric, and I remain formed out of it for as long as I am here.
  • By the time of these works, Merleau-Ponty is taking his desire to describe experience to the outer limits of what language can convey. Just as with the late Husserl or Heidegger, or Sartre in his Flaubert book, we see a philosopher venturing so far from shore that we can barely follow. Emmanuel Levinas would head out to the fringes too, eventually becoming incomprehensible to all but his most patient initiates.
  • Sartre once remarked — speaking of a disagreement they had about Husserl in 1941 — that ‘we discovered, astounded, that our conflicts had, at times, stemmed from our childhood, or went back to the elementary differences of our two organisms’. Merleau-Ponty also said in an interview that Sartre’s work seemed strange to him, not because of philosophical differences, but because of a certain ‘register of feeling’, especially in Nausea, that he could not share. Their difference was one of temperament and of the whole way the world presented itself to them.
  • The two also differed in their purpose. When Sartre writes about the body or other aspects of experience, he generally does it in order to make a different point. He expertly evokes the grace of his café waiter, gliding between the tables, bending at an angle just so, steering the drink-laden tray through the air on the tips of his fingers — but he does it all in order to illustrate his ideas about bad faith. When Merleau-Ponty writes about skilled and graceful movement, the movement itself is his point. This is the thing he wants to understand.
  • We can never move definitively from ignorance to certainty, for the thread of the inquiry will constantly lead us back to ignorance again. This is the most attractive description of philosophy I’ve ever read, and the best argument for why it is worth doing, even (or especially) when it takes us no distance at all from our starting point.
  • By prioritising perception, the body, social life and childhood development, Merleau-Ponty gathered up philosophy’s far-flung outsider subjects and brought them in to occupy the centre of his thought.
  • In his inaugural lecture at the Collège de France on 15 January 1953, published as In Praise of Philosophy, he said that philosophers should concern themselves above all with whatever is ambiguous in our experience. At the same time, they should think clearly about these ambiguities, using reason and science. Thus, he said, ‘The philosopher is marked by the distinguishing trait that he possesses inseparably the taste for evidence and the feeling for ambiguity.’ A constant movement is required between these two
  • As Sartre wrote in response to Hiroshima, humanity had now gained the power to wipe itself out, and must decide every single day that it wanted to live. Camus also wrote that humanity faced the task of choosing between collective suicide and a more intelligent use of its technology — ‘between hell and reason’. After 1945, there seemed little reason to trust in humanity’s ability to choose well.
  • Merleau-Ponty observed in a lecture of 1951 that, more than any previous century, the twentieth century had reminded people how ‘contingent’ their lives were — how at the mercy of historical events and other changes that they could not control. This feeling went on long after the war ended. After the A-bombs were dropped on Hiroshima and Nagasaki, many feared that a Third World War would not be long in coming, this time between the Soviet Union and the United States.
Javier E

The psychology of hate: How we deny human beings their humanity - Salon.com

  • The cross-cultural psychologist Gustav Jahoda catalogued how Europeans since the time of the ancient Greeks viewed those living in relatively primitive cultures as lacking a mind in one of two ways: either lacking self-control and emotions, like an animal, or lacking reason and intellect, like a child. So foreign in appearance, language, and manner, “they” did not simply become other people, they became lesser people. More specifically, they were seen as having lesser minds, diminished capacities to either reason or feel.
  • In the early 1990s, California State Police commonly referred to crimes involving young black men as NHI—No Humans Involved.
  • The essence of dehumanization is, therefore, failing to recognize the fully human mind of another person. Those who fight against dehumanization typically deal with extreme cases that can make it seem like a relatively rare phenomenon. It is not. Subtle versions are all around us.
  • ...15 more annotations...
  • Even doctors—those whose business is to treat others humanely— can remain disengaged from the minds of their patients, particularly when those patients are easily seen as different from the doctors themselves. Until the early 1990s, for instance, it was routine practice for infants to undergo surgery without anesthesia. Why? Because at the time, doctors did not believe that infants were able to experience pain, a fundamental capacity of the human mind.
  • Your sixth sense functions only when you engage it. When you do not, you may fail to recognize a fully human mind that is right before your eyes.
  • Although it is indeed true that the ability to read the minds of others exists along a spectrum with stable individual differences, I believe that the more useful knowledge comes from understanding the moment-to-moment, situational influences that can lead even the most social person—yes, even you and me—to treat others as mindless animals or objects.
  • None of the cases described in this chapter so far involve people with chronic and stable personality disorders. Instead, they all come from predictable contexts in which people’s sixth sense remained disengaged for one fundamental reason: distance.
  • This three-part chain—sharing attention, imitating action, and imitation creating experience—shows one way in which your sixth sense works through your physical senses. More important, it also shows how your sixth sense could remain disengaged, leaving you disconnected from the minds of others. Close your eyes, look away, plug your ears, stand too far away to see or hear, or simply focus your attention elsewhere, and your sixth sense may not be triggered.
  • Distance keeps your sixth sense disengaged for at least two reasons. First, your ability to understand the minds of others can be triggered by your physical senses. When you’re too far away in physical space, those triggers do not get pulled. Second, your ability to understand the minds of others is also engaged by your cognitive inferences. Too far away in psychological space—too different, too foreign, too other—and those triggers, again, do not get pulled
  • For psychologists, distance is not just physical space. It is also psychological space, the degree to which you feel closely connected to someone else. You are describing psychological distance when you say that you feel “distant” from your spouse, “out of touch” with your kids’ lives, “worlds apart” from a neighbor’s politics, or “separated” from your employees. You don’t mean that you are physically distant from other people; you mean that you feel psychologically distant from them in some way
  • Interviews with U.S. soldiers in World War II found that only 15 to 20 percent were able to discharge their weapons at the enemy in close firefights. Even when they did shoot, soldiers found it hard to hit their human targets. In the U.S. Civil War, muskets were capable of hitting a pie plate at 70 yards and soldiers could typically reload anywhere from 4 to 5 times per minute. Theoretically, a regiment of 200 soldiers firing at a wall of enemy soldiers 100 feet wide should be able to kill 120 on the first volley. And yet the kill rate during the Civil War was closer to 1 to 2 men per minute, with the average distance of engagement being only 30 yards.
  • Modern armies now know that they have to overcome these empathic urges, so soldiers undergo relentless training that desensitizes them to close combat, so that they can do their jobs. Modern technology also allows armies to kill more easily because it enables killing at such a great physical distance. Much of the killing by U.S. soldiers now comes through the hands of drone pilots watching a screen from a trailer in Nevada, with their sixth sense almost completely disengaged.
  • Other people obviously do not need to be standing right in front of you for you to imagine what they are thinking or feeling or planning. You can simply close your eyes and imagine it.
  • The MPFC and a handful of other brain regions undergird the inferential component of your sixth sense. When this network of brain regions is engaged, you are thinking about others’ minds. Failing to engage this region when thinking about other people is then a solid indication that you’re overlooking their minds.
  • Research confirms that the MPFC is engaged more when you’re thinking about yourself, your close friends and family, and others who have beliefs similar to your own. It is activated when you care enough about others to care what they are thinking, and not when you are indifferent to others
  • As people become more and more different from us, or more distant from our immediate social networks, they become less and less likely to engage our MPFC. When we don’t engage this region, others appear relatively mindless, something less than fully human.
  • The mistake that can arise when you fail to engage with the minds of others is that you may come to think of them as relatively mindless. That is, you may come to think that these others have less going on between their ears than, say, you do.
  • It’s not only free will that other minds might seem to lack. This lesser minds effect has many manifestations, including what appears to be a universal tendency to assume that others’ minds are less sophisticated and more superficial than one’s own. Members of distant out-groups, ranging from terrorists to poor hurricane victims to political opponents, are also rated as less able than close members of one’s own group to experience complicated emotions such as shame, pride, embarrassment, and guilt.
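The volley arithmetic quoted above is easy to reconstruct. A minimal sketch, assuming a per-shot hit probability of 0.6 against a massed line at close range; that probability is an assumption chosen to reproduce the passage's figure, not a number from the source:

```python
# Back-of-the-envelope reconstruction of the Civil War volley arithmetic.
# ASSUMPTION (not from the source): pie-plate accuracy at 70 yards implies
# roughly a 60% chance of hitting someone in a dense line at 30 yards.
soldiers = 200
hit_probability = 0.6
print(soldiers * hit_probability)  # 120.0 -- the passage's "theoretical" kills

# Contrast with the observed rate: ~1-2 kills per minute for the regiment,
# despite each soldier firing 4-5 rounds per minute.
shots_per_minute = soldiers * 4.5          # 900 rounds per minute in total
observed_kills_per_minute = 1.5
print(observed_kills_per_minute / shots_per_minute)  # ~0.0017 effective hit rate
```

The gap between the 0.6 the weapon allows and the ~0.0017 actually observed is the chapter's point: at close range the sixth sense engages and suppresses the willingness to kill.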
julia rhodes

The Mind-Body Illusion | Psychology Today - 0 views

  • We have minds and we have bodies and never the twain shall meet. 
  • As much as I wish I weren’t, I’m a dualist too.  Not intellectually, mind you, but in my everyday life.  Just this morning I dragged myself out of bed.  Who did the dragging?  My mind.  Who was dragged?  My tired body.  The truth is that we have no easy alternative way to make sense of ourselves except as minds and bodies.
  • The problem is that the mind and body obviously do work together.  I have a thought and if it’s a good day, my body puts that thought into action. 
  • ...5 more annotations...
  • the notion that minds and bodies exist in separate realms (i.e. Cartesian Dualism) is entirely untenable. 
  • Our bodies have an address in the first realm, whereas our minds and everything psychological about minds make their home in the second realm.  According to Descartes, these realms are forever separate such that the mind and body cannot influence one another.
  • We are amazed to learn that when the mind is stressed over time, this can lead the body to break down, producing a variety of physical ailments. This is amazing to us because we believe stress is of the mind, and it is difficult to imagine something of the mind affecting our bodies. It’s hard to believe because it violates our intuitive dualistic understanding of the world.
  • Are you smart?  Are you kind?  Are you funny?  Are you afraid of being alone?  These two different kinds of questions, focused on body and mind, respectively, rely on different networks within your brain.  When you think about your body and the actions of your body, you recruit a prefrontal and parietal region on the outer surface of your right hemisphere (these are called ‘lateral’ regions).  When you think about your mind you instead recruit different prefrontal and parietal regions in the middle of the brain, where the two hemispheres touch each other (these are called ‘medial’ regions). 
  • Knowing that our sense of dualism is just an illusion is very important.  Thomas Mussweiler’s lab recently conducted a study showing that those who have a stronger belief in dualism tend to be less healthy.  Why?  Dualists believe their body is just a shell that is far less important than the mind. 
caelengrubb

History Is About Stories. Here's Why We Get Them Wrong | Time - 1 views

  • Science comes hard to most of us because it can’t really take that form. Instead it’s equations, models, theories and the data that support them. But ironically, science offers an explanation of why we love stories.
  • It starts with a challenge posed in human evolution — but the more we come to understand about that subject, the more we see that our storytelling instinct can lead us astray, especially when it comes to how most of us understand history.
  • Many animals have a highly developed mind-reading instinct, a sort of tracking-device technique shared with creatures that have no language, not even a language of thought.
  • ...14 more annotations...
  • It’s what they use to track prey and avoid predation.
  • The theory of mind is so obvious it’s nearly invisible: it tells us that behavior is the result of the joint operation of pairs of beliefs and desires.
  • The desires are about the ways we want things to turn out in the future. The beliefs are about the way things are now.
  • The theory of mind turns what people do into a story with a plot by pairing up the content of beliefs and desires, what they are about.
  • Psycholinguistics has shown that the theory of mind is necessary for learning language and almost anything else our parents teach us.
  • Imitating others requires using the theory to figure out what they want us to do and in what order. Without it, you can’t learn much beyond what other higher primates can.
  • The theory of mind makes us construct stories obsessively, and thus encourages us to see the past as a set of them.
  • When popular historians seek to know why Hitler declared war on the U.S. (when he didn’t have to), they put the theory of mind to work: What did he believe and what was it that he wanted that made him do such a foolish thing? (A deliberately naive sketch of this belief-desire pairing appears after this list.)
  • The trouble is that the theory of mind is completely wrong about the way the mind, i.e. the brain, actually works. We can’t help but use it to guess what is going on in other people’s minds, and historians rely on it, but the evidence from neuroscience shows that in fact what’s “going on” in anyone’s mind is not a decision about what to do in the light of beliefs and desires, but rather a series of neural circuitry firings.
  • The wrongness of the theory of mind is so profound it makes false all the stories we know and love, in narrative history (and in historical novels).
  • Neuroscience reveals that the brain is not organized even remotely to work the way the theory of mind says it does. The fact that narrative histories give persistently different answers to questions historians have been asking for centuries should be evidence that storytelling is not where the real answers can be found.
  • Crucially, they discovered that while different parts of the brain control different things, the neurons’ electrical signals don’t differ in “content”; they are not about different subjects. They are not about anything at all. Each neuron is just in a different part of the mid-brain, doing its job in exactly the same way all other neurons do, sending the same electrochemical oscillations.
  • There is nothing in our brains to vindicate the theory’s description of how anyone ever makes up his or her mind. And that explains a lot about how bad the theory of mind is at predicting anything much about the future, or explaining anything much about the past.
  • If we really want historical knowledge we’ll need to use the same tools scientists use — models and theories we can quantify and test. Guessing what was going through Hitler’s mind, and weaving it into a story is no substitute for empirical science.
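Purely as a reading aid, here is a deliberately naive sketch of the belief-desire pairing described above, using the article's own example of Hitler's declaration of war. Every string and the single rule are illustrative assumptions; the article's argument is precisely that brains do not actually work this way:

```python
# Folk "theory of mind" as a toy program: pair a belief (how things are now)
# with a desire (how the agent wants things to turn out) and emit a story-like
# prediction of action. All strings are illustrative, not from the article.
from dataclasses import dataclass

@dataclass
class Agent:
    belief: str   # about the way things are now
    desire: str   # about the way the agent wants things to turn out

def predict_action(agent: Agent) -> str:
    """The folk-psychological 'plot': the act that, given the belief, serves the desire."""
    if "supplying Britain" in agent.belief and "win the war" in agent.desire:
        return "declare war on the U.S."
    return "no story available without a matching belief/desire pair"

hitler = Agent(belief="the US is supplying Britain",
               desire="win the war in Europe")
print(predict_action(hitler))  # the kind of narrative explanation historians build
```

Note the structure: a prediction exists only once a belief and a desire are paired, which is what lets the theory turn behavior into plot.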
Javier E

Breathing In vs. Spacing Out - NYTimes.com - 0 views

  • Although pioneers like Jon Kabat-Zinn, now emeritus professor at the University of Massachusetts Medical Center, began teaching mindfulness meditation as a means of reducing stress as far back as the 1970s, all but a dozen or so of the nearly 100 randomized clinical trials have been published since 2005.
  • Michael Posner, of the University of Oregon, and Yi-Yuan Tang, of Texas Tech University, used functional M.R.I.’s before and after participants spent a combined 11 hours over two weeks practicing a form of mindfulness meditation developed by Tang. They found that it enhanced the integrity and efficiency of the brain’s white matter, the tissue that connects and protects neurons emanating from the anterior cingulate cortex, a region of particular importance for rational decision-making and effortful problem-solving.
  • Perhaps that is why mindfulness has proved beneficial to prospective graduate students. In May, the journal Psychological Science published the results of a randomized trial showing that undergraduates instructed to spend a mere 10 minutes a day for two weeks practicing mindfulness made significant improvement on the verbal portion of the Graduate Record Exam — a gain of 16 percentile points. They also significantly increased their working memory capacity, the ability to maintain and manipulate multiple items of attention.
  • ...7 more annotations...
  • By emphasizing a focus on the here and now, it trains the mind to stay on task and avoid distraction.
  • “Your ability to recognize what your mind is engaging with, and control that, is really a core strength,” said Peter Malinowski, a psychologist and neuroscientist at Liverpool John Moores University in England. “For some people who begin mindfulness training, it’s the first time in their life where they realize that a thought or emotion is not their only reality, that they have the ability to stay focused on something else, for instance their breathing, and let that emotion or thought just pass by.”
  • the higher adults scored on a measurement of mindfulness, the worse they performed on tests of implicit learning — the kind that underlies all sorts of acquired skills and habits but that occurs without conscious awareness.
  • he found that having participants spend a brief period of time on an undemanding task that maximizes mind wandering improved their subsequent performance on a test of creativity. In a follow-up study, he reported that physicists and writers alike came up with their most insightful ideas while spacing out.
  • The trick is knowing when mindfulness is called for and when it’s not.
  • one of the most surprising findings of recent mindfulness studies is that it could have unwanted side effects. Raising roadblocks to the mind’s peregrinations could, after all, prevent the very sort of mental vacations that lead to epiphanies.
  • “There’s so much our brain is doing when we’re not aware of it,” said the study’s leader, Chelsea Stillman, a doctoral candidate. “We know that being mindful is really good for a lot of explicit cognitive functions. But it might not be so useful when you want to form new habits.” Learning to ride a bicycle, speak grammatically or interpret the meaning of people’s facial expressions are three examples of knowledge we acquire through implicit learning
Javier E

How the Internet Gets Inside Us : The New Yorker - 0 views

  • It isn’t just that we’ve lived one technological revolution among many; it’s that our technological revolution is the big social revolution that we live with
  • The idea, for instance, that the printing press rapidly gave birth to a new order of information, democratic and bottom-up, is a cruel cartoon of the truth. If the printing press did propel the Reformation, one of the biggest ideas it propelled was Luther’s newly invented absolutist anti-Semitism. And what followed the Reformation wasn’t the Enlightenment, a new era of openness and freely disseminated knowledge. What followed the Reformation was, actually, the Counter-Reformation, which used the same means—i.e., printed books—to spread ideas about what jerks the reformers were, and unleashed a hundred years of religious warfare.
  • Robert K. Logan’s “The Sixth Language,” begins with the claim that cognition is not a little processing program that takes place inside your head, Robby the Robot style. It is a constant flow of information, memory, plans, and physical movements, in which as much thinking goes on out there as in here. If television produced the global village, the Internet produces the global psyche: everyone keyed in like a neuron, so that to the eyes of a watching Martian we are really part of a single planetary brain. Contraptions don’t change consciousness; contraptions are part of consciousness.
  • ...14 more annotations...
  • In a practical, immediate way, one sees the limits of the so-called “extended mind” clearly in the mob-made Wikipedia, the perfect product of that new vast, supersized cognition: when there’s easy agreement, it’s fine, and when there’s widespread disagreement on values or facts, as with, say, the origins of capitalism, it’s fine, too; you get both sides. The trouble comes when one side is right and the other side is wrong and doesn’t know it. The Shakespeare authorship page and the Shroud of Turin page are scenes of constant conflict and are packed with unreliable information. Creationists crowd cyberspace every bit as effectively as evolutionists, and extend their minds just as fully. Our trouble is not the over-all absence of smartness but the intractable power of pure stupidity, and no machine, or mind, seems extended enough to cure that.
  • “The medium does matter,” Carr has written. “As a technology, a book focuses our attention, isolates us from the myriad distractions that fill our everyday lives. A networked computer does precisely the opposite. It is designed to scatter our attention. . . . Knowing that the depth of our thought is tied directly to the intensity of our attentiveness, it’s hard not to conclude that as we adapt to the intellectual environment of the Net our thinking becomes shallower.”
  • when people struggle to describe the state that the Internet puts them in they arrive at a remarkably familiar picture of disassociation and fragmentation. Life was once whole, continuous, stable; now it is fragmented, multi-part, shimmering around us, unstable and impossible to fix.
  • The odd thing is that this complaint, though deeply felt by our contemporary Better-Nevers, is identical to Baudelaire’s perception about modern Paris in 1855, or Walter Benjamin’s about Berlin in 1930, or Marshall McLuhan’s in the face of three-channel television (and Canadian television, at that) in 1965.
  • If all you have is a hammer, the saying goes, everything looks like a nail; and, if you think the world is broken, every machine looks like the hammer that broke it.
  • Blair argues that the sense of “information overload” was not the consequence of Gutenberg but already in place before printing began.
  • Anyway, the crucial revolution was not of print but of paper: “During the later Middle Ages a staggering growth in the production of manuscripts, facilitated by the use of paper, accompanied a great expansion of readers outside the monastic and scholastic contexts.” For that matter, our minds were altered less by books than by index slips. Activities that seem quite twenty-first century, she shows, began when people cut and pasted from one manuscript to another; made aggregated news in compendiums; passed around précis. “Early modern finding devices” were forced into existence: lists of authorities, lists of headings.
  • The book index was the search engine of its era, and needed to be explained at length to puzzled researchers—as, for that matter, did the Hermione-like idea of “looking things up.” That uniquely evil and necessary thing, the comprehensive review of many different books on a related subject, with the necessary oversimplification of their ideas that it demanded, was already around in 1500, and already being accused of missing all the points.
  • at any given moment, our most complicated machine will be taken as a model of human intelligence, and whatever media kids favor will be identified as the cause of our stupidity. When there were automatic looms, the mind was like an automatic loom; and, since young people in the loom period liked novels, it was the cheap novel that was degrading our minds. When there were telephone exchanges, the mind was like a telephone exchange, and, in the same period, since the nickelodeon reigned, moving pictures were making us dumb. When mainframe computers arrived and television was what kids liked, the mind was like a mainframe and television was the engine of our idiocy. Some machine is always showing us Mind; some entertainment derived from the machine is always showing us Non-Mind.
  • What we live in is not the age of the extended mind but the age of the inverted self. The things that have usually lived in the darker recesses or mad corners of our mind—sexual obsessions and conspiracy theories, paranoid fixations and fetishes—are now out there: you click once and you can read about the Kennedy autopsy or the Nazi salute or hog-tied Swedish flight attendants. But things that were once external and subject to the social rules of caution and embarrassment—above all, our interactions with other people—are now easily internalized, made to feel like mere workings of the id left on its own.
  • A social network is crucially different from a social circle, since the function of a social circle is to curb our appetites and of a network to extend them.
  • And so the peacefulness, the serenity that we feel away from the Internet, and which all the Better-Nevers rightly testify to, has less to do with being no longer harried by others than with being less oppressed by the force of your own inner life. Shut off your computer, and your self stops raging quite as much or quite as loud.
  • Now television is the harmless little fireplace over in the corner, where the family gathers to watch “Entourage.” TV isn’t just docile; it’s positively benevolent. This makes you think that what made television so evil back when it was evil was not its essence but its omnipresence. Once it is not everything, it can be merely something. The real demon in the machine is the tirelessness of the user.
  • the Internet screen has always been like the palantír in Tolkien’s “Lord of the Rings”—the “seeing stone” that lets the wizards see the entire world. Its gift is great; the wizard can see it all. Its risk is real: evil things will register more vividly than the great mass of dull good. The peril isn’t that users lose their knowledge of the world. It’s that they can lose all sense of proportion. You can come to think that the armies of Mordor are not just vast and scary, which they are, but limitless and undefeatable, which they aren’t.
katherineharron

Mindfulness: How it could help you be happier, healthier and more successful - CNN - 0 views

  • "Change in humanity must start from individuals," the Dalai Lama told the mayors. "We created this violence, so we can reduce this violence."
  • Paying attention to the matters at hand may sound simple, but most Americans aren't doing it, studies show. Though the experts say there's a lot more research to be done, the number of scientific studies has grown exponentially over the past decade. They show that mindfulness is more than a passing fad; there's early evidence it can help your health.
  • In their 2010 study, they created a computer program that sent questions at random moments to people by iPhone. The program asked, "How are you feeling right now?" "What are you doing right now?" and "Are you thinking about something other than what you're currently doing?"
  • ...8 more annotations...
  • Of the 2,250 adults who answered the pings, 46.9% were not thinking about the task they were doing at the moment. Mind-wandering occurred during at least 30% of their activities, with one exception: sex. That, apparently, had their full attention. (A minimal sketch of this sampling method follows this list.)
  • To remain mindful, the Dalai Lama said, he sleeps a lot: about nine hours a night. He also gets up at 3 a.m. to meditate. He has another session in the afternoon and one more right before bed.
  • Scientists had Buddhist monks meditate while being scanned by an MRI machine. While strapped to a board and put in the huge, noisy machine, the monks calmed their minds, reduced distractions and paid attention to life moment-by-moment.
  • The participants were then subjected to a stressful day-long training exercise. Both groups had similar spikes in blood pressure and breathing rates during the test, but when it was over, the mindfully trained Marines' heart rate and breathing recovered much faster, as did their nervous systems.
  • "The data on stress reduction is pretty good," said Richard J. Davidson, founder of the Center for Healthy Minds at the University of Wisconsin-Madison. He has published hundreds of scientific papers about the impact of emotion on the brain and did some of the first MRIs of meditating Buddhist monks.
  • Several workplace studies found that employees who get mindfulness training become more productive and stable. They demonstrate more self-control and efficiency. Employees with mindfulness training also seem to pick up on things faster and can read group dynamics better.
  • Davidson suggests that the data are "much weaker and less convincing" as mindfulness relates to curing a specific disease. It can't cure cancer or chronic pain, but the practice can help manage some of the symptoms. For instance, if you have chronic lower back pain, mindfulness may be as helpful as medication at easing that pain.
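A minimal sketch of the experience-sampling method the 2010 study above used. The three prompts are quoted from the passage; the waking window and the number of daily pings are assumptions, not parameters from the study:

```python
# Experience sampling in miniature: prompt people at random moments and log
# the answers. Question wording is from the study; scheduling is assumed.
import random

QUESTIONS = [
    "How are you feeling right now?",
    "What are you doing right now?",
    "Are you thinking about something other than what you're currently doing?",
]

def schedule_pings(waking_minutes: int = 16 * 60, pings: int = 3) -> list[int]:
    """Pick random moments (minutes after waking) at which to prompt the user."""
    return sorted(random.sample(range(waking_minutes), pings))

for minute in schedule_pings():
    print(f"t+{minute} min:")
    for question in QUESTIONS:
        print(" ", question)
```

Randomizing the moments is what makes the method work: it samples experience as it is lived rather than as it is remembered.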
Javier E

Jonathan Haidt and the Moral Matrix: Breaking Out of Our Righteous Minds | Guest Blog, ... - 2 views

  • What did satisfy Haidt’s natural thirst for understanding human beings was social psychology.
  • Haidt initially found moral psychology “really dull.” He described it to me as “really missing the heart of the matter and too cerebral.” This changed in his second year after he took a course from the anthropologist Allen Fiske and got interested in moral emotions.
  • “The Emotional Dog and Its Rational Tail,” which he describes as “the most important article I’ve ever written.”
  • ...13 more annotations...
  • it helped shift moral psychology away from rationalist models that dominated in the 1980s and 1990s. In its place Haidt offered an understanding of morality from an intuitive and automatic level. As Haidt says on his website, “we are just not very good at thinking open-mindedly about moral issues, so rationalist models end up being poor descriptions of actual moral psychology.”
  • “the mind is divided into parts that sometimes conflict. Like a rider on the back of an elephant, the conscious, reasoning part of the mind has only limited control of what the elephant does.”
  • In the last few decades psychology began to understand the unconscious mind not as dark and suppressed as Freud did, but as intuitive, highly intelligent and necessary for good conscious reasoning. “Elephants,” he reminded me, “are really smart, much smarter than horses.”
  • we are 90 percent chimp and 10 percent bee. That is to say, though we are inherently selfish, human nature is also about being what he terms “groupish.” He explained it to me like this:
  • they developed the idea that humans possess six universal moral modules, or moral “foundations,” that get built upon to varying degrees across culture and time. They are: Care/harm, Fairness/cheating, Loyalty/betrayal, Authority/subversion, Sanctity/degradation, and Liberty/oppression. Haidt describes these six modules like a “tongue with six taste receptors.” “In this analogy,” he explains in the book, “the moral matrix of a culture is something like its cuisine: it’s a cultural construction, influenced by accidents of environment and history, but it’s not so flexible that anything goes. You can’t have a cuisine based on grass and tree bark, or even one based primarily on bitter tastes. Cuisines vary, but they all must please tongues equipped with the same five taste receptors. Moral matrices vary, but they all must please righteous minds equipped with the same six social receptors.”
  • The questionnaire eventually manifested itself into the website www.YourMorals.org, and it has since gathered over two hundred thousand data points. Here is what they found:
  • This is the crux of the disagreement between liberals and conservatives. As the graph illustrates, liberals value Care and Fairness much more than the other three moral foundations, whereas conservatives endorse all five more or less equally. This shouldn’t sound too surprising: liberals tend to value universal rights and reject the idea of the United States being superior, while conservatives tend to be less concerned about the latest United Nations declaration and more partial to the United States as a superior nation. (A toy version of these score profiles is sketched after this list.)
  • Haidt began reading political psychology. Karen Stenner’s The Authoritarian Dynamic, “conveyed some key insights about protecting the group that were particularly insightful,” he said. The work of the French sociologist Emile Durkheim was also vital. In contrast to John Stuart Mill, a Durkheimian society, as Haidt explains in an essay for edge.org, “would value self-control over self-expression, duty over rights, and loyalty to one’s groups over concerns for out-groups.”
  • He was motivated to write The Righteous Mind after Kerry lost the 2004 election: “I thought he did a terrible job of making moral appeals so I began thinking about how I could apply moral psychology to understand political divisions. I started studying the politics of culture and realized how liberals and conservatives lived in their own closed worlds.” Each of these worlds, as Haidt explains in the book, “provides a complete, unified, and emotionally compelling worldview, easily justified by observable evidence and nearly impregnable to attack by arguments from outsiders.” He describes them as “moral matrices,” and thinks that moral psychology can help him understand them.
  • “When I say that human nature is selfish, I mean that our minds contain a variety of mental mechanisms that make us adept at promoting our own interests, in competition with our peers. When I say that human nature is also groupish, I mean that our minds contain a variety of mental mechanisms that make us adept at promoting our group’s interests, in competition with other groups. We are not saints, but we are sometimes good team players.” This is what people who had studied morality had not realized, “that we evolved not just so I can treat you well or compete with you, but at the same time we can compete with them.”
  • At first, Haidt reminds us that we are all trapped in a moral matrix where our “elephants” only look for what confirms their moral intuitions while our “riders” play the role of the lawyer; we team up with people who share similar matrices and become close-minded; and we forget that morality is diverse. But on the other hand, Haidt is offering us a choice: take the blue pill and remain happily delusional about your worldview, or take the red pill, and, as he said in his 2008 TED talk, “learn some moral psychology and step outside your moral matrix.”
  • The great Asian religions, Haidt reminded the crowd at TED, swallowed their pride and took the red pill millennia ago. And by stepping out of their moral matrices they realized that societies flourish when they value all of the moral foundations to some degree. This is why Yin and Yang aren’t enemies, “they are both necessary, like night and day, for the functioning of the world.” Or, similarly, why two of the high Gods in Hinduism, Vishnu the preserver (who stands for conservative principles) and Shiva the destroyer (who stands for liberal principles), work together.
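The liberal/conservative pattern described above can be pictured as score profiles over the moral foundations. In this toy sketch every number is invented to mirror the shape the text describes; none of it is data from YourMorals.org:

```python
# Toy "moral matrix" profiles: scores over Haidt's foundations. The numbers
# are invented to mirror the described pattern, not YourMorals.org data.
FOUNDATIONS = ["care", "fairness", "loyalty", "authority", "sanctity"]

liberal      = {"care": 3.7, "fairness": 3.8, "loyalty": 2.1,
                "authority": 2.0, "sanctity": 1.6}
conservative = {"care": 3.0, "fairness": 3.1, "loyalty": 3.1,
                "authority": 3.2, "sanctity": 3.0}

# Liberals weight Care/Fairness far above the rest; conservatives endorse
# all foundations roughly equally -- the crux described in the text.
for f in FOUNDATIONS:
    print(f"{f:9s}  liberal={liberal[f]:.1f}  conservative={conservative[f]:.1f}")
```

Five foundations are used here to match the graph the passage refers to; Liberty/oppression was added to the theory later.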
Javier E

You Think With the World, Not Just Your Brain - The Atlantic - 2 views

  • embodied or extended cognition: broadly, the theory that what we think of as brain processes can take place outside of the brain.
  • The octopus, for instance, has a bizarre and miraculous mind, sometimes inside its brain, sometimes extending beyond it in sucker-tipped trails. Neurons are spread throughout its body; the creature has more of them in its arms than in its brain itself. It’s possible that each arm might be, to some extent, an independently thinking creature, all of which are collapsed into an octopean superconsciousness in times of danger
  • Embodied cognition, though, tells us that we’re all more octopus-like than we realize. Our minds are not like the floating conceptual “I” imagined by Descartes. We’re always thinking with, and inseparable from, our bodies.
  • ...8 more annotations...
  • The body codes how the brain works, more than the brain controls the body. When we walk—whether taking a pleasant afternoon stroll, or storming off in tears, or trying to sneak into a stranger’s house late at night, with intentions that seem to have exploded into our minds from some distant elsewhere—the brain might be choosing where each foot lands, but the way in which it does so is always constrained by the shape of our legs
  • The way in which the brain approaches the task of walking is already coded by the physical layout of the body—and as such, wouldn’t it make sense to think of the body as being part of our decision-making apparatus? The mind is not simply the brain, as a generation of biological reductionists, clearing out the old wreckage of what had once been the soul, once insisted. It’s not a kind of software being run on the logical-processing unit of the brain. It’s bigger, and richer, and grosser, in every sense. It has joints and sinews. The rarefied rational mind sweats and shits; this body, this mound of eventually rotting flesh, is really you.
  • That’s embodied cognition.
  • Extended cognition is stranger.
  • The mind, they argue, has no reason to stop at the edges of the body, hemmed in by skin, flapping open and closed with mouths and anuses.
  • When we jot something down—a shopping list, maybe—on a piece of paper, aren’t we in effect remembering it outside our heads? Most of all, isn’t language itself something that’s always external to the individual mind?
  • Language sits hazy in the world, a symbolic and intersubjective ether, but at the same time it forms the substance of our thought and the structure of our understanding. Isn’t language thinking for us?
  • Writing, for Plato, is a pharmakon, a “remedy” for forgetfulness, but if taken in too strong a dose it becomes a poison: A person no longer remembers things for themselves; it’s the text that remembers, with an unholy autonomy. The same criticisms are now commonly made of smartphones. Not much changes.
Javier E

Coronavirus - Mindfulness is useless in a pandemic | 1843 magazine | The Economist - 1 views

  • These days there are mindful guides to everything from anger to recruitment. There are even mindfulness advent calendars (who needs chocolate when you can feed your soul?). Like selling sand to the Sahara, these all pitch to us the ability to live in the “now”
  • It may be profitable but it flies in the face of thousands of years of evolution. Animals are hardwired to react to the future
  • Expectation is integral to survival and is seen in even the most underwhelming creatures.
  • ...8 more annotations...
  • In a small way babies are learning to predict and anticipate the future.
  • You can see similar responses throughout the animal kingdom. Give a chimp a raisin and its reward neurons fire. Teach a chimp that pressing a button will bring a raisin, and the chimp’s brain starts to react to the button as if that were the reward. “The process of getting the reward itself becomes rewarding,”
  • Planning is key to our physical survival. It’s also central to our emotional wellbeing.
  • Daydreaming, or mind-wandering, as the wonks call it, is part of universal human experience. In 2008 one Harvard study found that people spent nearly half of their waking hours mind-wandering – often about good things.
  • Imagining a positive outcome is a popular technique to build resilience and confidence in everything from sport to job interviews. Teachers may tell pupils off for daydreaming in lessons but studies show a link between daydreaming and creative thought.
  • That’s not the point. It’s our dreams that feed us. We are hardwired to anticipate the future and, with all due respect to the philosophers, to thrill to it.
  • The pandemic has reminded us that the joy we take in planning is as valid as the event itself.
  • When the present is crushing – when lives and economies are being ruined – our imagination offers us a welcome escape. The mind, as Milton put it, is its own place: it can make a hell of heaven, or a heaven of hell. Perhaps we should let it
runlai_jiang

8 Infinity Facts That Will Blow Your Mind - 0 views

  • Infinity has its own special symbol: ∞. The symbol, sometimes called the lemniscate, was introduced by clergyman and mathematician John Wallis in 1655. The word "lemniscate" comes from the Latin word lemniscus, which means "ribbon," while the word "infinity" comes from the Latin word infinitas, which means "boundless."
  • Of all Zeno's paradoxes, the most famous is his paradox of the Tortoise and Achilles. In the paradox, a tortoise challenges the Greek hero Achilles to a race, provided the tortoise is given a small head start. The tortoise argues he will win the race because as Achilles catches up to him, the tortoise will have gone a bit further, adding to the distance. (The series behind this argument is worked out after this list.)
  • Pi as an Example of Infinity: Another good example of infinity is the number π, or pi. Mathematicians use a symbol for pi because it's impossible to write the number down. Pi consists of an infinite number of digits. It's often rounded to 3.14 or even 3.14159, yet no matter how many digits you write, it's impossible to get to the end.
  • ...2 more annotations...
  • Fractals and Infinity: A fractal is an abstract mathematical object, used in art and to simulate natural phenomena. Written as a mathematical equation, most fractals are nowhere differentiable. When viewing an image of a fractal, this means you could zoom in and see new detail. In other words, a fractal is infinitely magnifiable. The Koch snowflake is an interesting example of a fractal. The snowflake starts as an equilateral triangle. For each iteration of the fractal: each line segment is divided into three equal segments, an equilateral triangle is drawn outward on the middle segment, and the middle segment itself is then removed. (The resulting perimeter growth is worked out after this list.)
  • Cosmology and Infinity: Cosmologists study the universe and ponder infinity. Does space go on and on without end? This remains an open question. Even if the physical universe as we know it has a boundary, there is still the multiverse theory to consider. Our universe may be but one in an infinite number of them.
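Both the Zeno and the Koch items above come down to short calculations. A sketch in LaTeX; the 100 m head start and the 10:1 speed ratio are the classic illustrative numbers, assumed here rather than taken from the article:

```latex
% Zeno: the infinitely many catch-up stages form a geometric series with a
% finite sum, so Achilles passes the tortoise at a definite point.
\[
  d = 100 + 10 + 1 + \tfrac{1}{10} + \cdots
    = \sum_{n=0}^{\infty} 100 \, (1/10)^{n}
    = \frac{100}{1 - 1/10}
    = 111.\overline{1} \ \text{m}
\]
% Koch snowflake: each iteration replaces every segment with 4 segments of
% 1/3 the length, so the perimeter diverges while the enclosed area converges.
\[
  P_k = P_0 \left(\tfrac{4}{3}\right)^{k} \longrightarrow \infty ,
  \qquad
  A_\infty = \tfrac{8}{5} \, A_0
\]
```

Infinitely many stages, finite total distance: that is the standard resolution of the paradox. The Koch boundary shows the reverse trick, an infinite perimeter enclosing a finite area.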
Emily Freilich

The Man Who Would Teach Machines to Think - James Somers - The Atlantic - 1 views

  • Douglas Hofstadter, the Pulitzer Prize–winning author of Gödel, Escher, Bach, thinks we've lost sight of what artificial intelligence really means. His stubborn quest to replicate the human mind.
  • “If somebody meant by artificial intelligence the attempt to understand the mind, or to create something human-like, they might say—maybe they wouldn’t go this far—but they might say this is some of the only good work that’s ever been done
  • Their operating premise is simple: the mind is a very unusual piece of software, and the best way to understand how a piece of software works is to write it yourself.
  • ...43 more annotations...
  • “It depends on what you mean by artificial intelligence.”
  • Computers are flexible enough to model the strange evolved convolutions of our thought, and yet responsive only to precise instructions. So if the endeavor succeeds, it will be a double victory: we will finally come to know the exact mechanics of our selves—and we’ll have made intelligent machines.
  • Ever since he was about 14, when he found out that his youngest sister, Molly, couldn’t understand language, because she “had something deeply wrong with her brain” (her neurological condition probably dated from birth, and was never diagnosed), he had been quietly obsessed by the relation of mind to matter.
  • How could consciousness be physical? How could a few pounds of gray gelatin give rise to our very thoughts and selves?
  • Consciousness, Hofstadter wanted to say, emerged via just the same kind of “level-crossing feedback loop.”
  • In 1931, the Austrian-born logician Kurt Gödel had famously shown how a mathematical system could make statements not just about numbers but about the system itself.
  • But then AI changed, and Hofstadter didn’t change with it, and for that he all but disappeared.
  • By the early 1980s, the pressure was great enough that AI, which had begun as an endeavor to answer yes to Alan Turing’s famous question, “Can machines think?,” started to mature—or mutate, depending on your point of view—into a subfield of software engineering, driven by applications.
  • Take Deep Blue, the IBM supercomputer that bested the chess grandmaster Garry Kasparov. Deep Blue won by brute force.
  • Hofstadter wanted to ask: Why conquer a task if there’s no insight to be had from the victory? “Okay,” he says, “Deep Blue plays very good chess—so what? Does that tell you something about how we play chess? No. Does it tell you about how Kasparov envisions, understands a chessboard?”
  • AI started working when it ditched humans as a model, because it ditched them. That’s the thrust of the analogy: Airplanes don’t flap their wings; why should computers think?
  • It’s a compelling point. But it loses some bite when you consider what we want: a Google that knows, in the way a human would know, what you really mean when you search for something
  • How do you make a search engine that understands if you don’t know how you understand?
  • “Cognition is recognition,” he likes to say. He describes “seeing as” as the essential cognitive act: you see some lines as “an A,” you see a hunk of wood as “a table,” you see a meeting as “an emperor-has-no-clothes situation” and a friend’s pouting as “sour grapes”
  • That’s what it means to understand. But how does understanding work?
  • analogy is “the fuel and fire of thinking,” the bread and butter of our daily mental lives.
  • there’s an analogy, a mental leap so stunningly complex that it’s a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its “skeletal essence,” and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates.
  • in Hofstadter’s telling, the story goes like this: when everybody else in AI started building products, he and his team, as his friend, the philosopher Daniel Dennett, wrote, “patiently, systematically, brilliantly,” way out of the light of day, chipped away at the real problem. “Very few people are interested in how human intelligence works,”
  • For more than 30 years, Hofstadter has worked as a professor at Indiana University at Bloomington
  • The quick unconscious chaos of a mind can be slowed down on the computer, or rewound, paused, even edited
  • A project out of IBM called Candide. The idea behind Candide, a machine-translation system, was to start by admitting that the rules-based approach requires too deep an understanding of how language is produced; how semantics, syntax, and morphology work; and how words commingle in sentences and combine into paragraphs—to say nothing of understanding the ideas for which those words are merely conduits.
  • Hofstadter directs the Fluid Analogies Research Group, affectionately known as FARG.
  • Parts of a program can be selectively isolated to see how it functions without them; parameters can be changed to see how performance improves or degrades. When the computer surprises you—whether by being especially creative or especially dim-witted—you can see exactly why.
  • When you read Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, which describes in detail this architecture and the logic and mechanics of the programs that use it, you wonder whether maybe Hofstadter got famous for the wrong book.
  • But very few people, even admirers of GEB, know about the book or the programs it describes. And maybe that’s because FARG’s programs are almost ostentatiously impractical. Because they operate in tiny, seemingly childish “microdomains.” Because there is no task they perform better than a human.
  • “The entire effort of artificial intelligence is essentially a fight against computers’ rigidity.”
  • “Nobody is a very reliable guide concerning activities in their mind that are, by definition, subconscious,” he once wrote. “This is what makes vast collections of errors so important. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for (and against) particular mechanisms.
  • So IBM threw that approach out the window. What the developers did instead was brilliant, but so straightforward,
  • The technique is called “machine learning.” The goal is to make a device that takes an English sentence as input and spits out a French sentence
  • What you do is feed the machine English sentences whose French translations you already know. (Candide, for example, used 2.2 million pairs of sentences, mostly from the bilingual proceedings of Canadian parliamentary debates.)
  • By repeating this process with millions of pairs of sentences, you will gradually calibrate your machine, to the point where you’ll be able to enter a sentence whose translation you don’t know and get a reasonable result. (A toy sketch of this counting idea appears at the end of this list.)
  • The Google Translate team can be made up of people who don’t speak most of the languages their application translates. “It’s a bang-for-your-buck argument,” Estelle says. “You probably want to hire more engineers instead” of native speakers.
  • But the need to serve 1 billion customers has a way of forcing the company to trade understanding for expediency. You don’t have to push Google Translate very far to see the compromises its developers have made for coverage, and speed, and ease of engineering. Although Google Translate captures, in its way, the products of human intelligence, it isn’t intelligent itself.
  • “Did we sit down when we built Watson and try to model human cognition?” Dave Ferrucci, who led the Watson team at IBM, pauses for emphasis. “Absolutely not. We just tried to create a machine that could win at Jeopardy.”
  • For Ferrucci, the definition of intelligence is simple: it’s what a program can do. Deep Blue was intelligent because it could beat Garry Kasparov at chess. Watson was intelligent because it could beat Ken Jennings at Jeopardy.
  • “There’s a limited number of things you can do as an individual, and I think when you dedicate your life to something, you’ve got to ask yourself the question: To what end? And I think at some point I asked myself that question, and what it came out to was, I’m fascinated by how the human mind works, it would be fantastic to understand cognition, I love to read books on it, I love to get a grip on it”—he called Hofstadter’s work inspiring—“but where am I going to go with it? Really what I want to do is build computer systems that do something.
  • Peter Norvig, one of Google’s directors of research, echoes Ferrucci almost exactly. “I thought he was tackling a really hard problem,” he told me about Hofstadter’s work. “And I guess I wanted to do an easier problem.”
  • Of course, the folly of being above the fray is that you’re also not a part of it
  • As our machines get faster and ingest more data, we allow ourselves to be dumber. Instead of wrestling with our hardest problems in earnest, we can just plug in billions of examples of them.
  • Hofstadter hasn’t been to an artificial-intelligence conference in 30 years. “There’s no communication between me and these people,” he says of his AI peers. “None. Zero. I don’t want to talk to colleagues that I find very, very intransigent and hard to convince of anything
  • Everything from plate tectonics to evolution—all those ideas, someone had to fight for them, because people didn’t agree with those ideas.
  • Academia is not an environment where you just sit in your bath and have ideas and expect everyone to run around getting excited. It’s possible that in 50 years’ time we’ll say, ‘We really should have listened more to Doug Hofstadter.’ But it’s incumbent on every scientist to at least think about what is needed to get people to understand the ideas.”
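For the Candide passage above, here is a toy sketch of learning word translations from paired sentences. Candide itself used probabilistic alignment models; the Dice-style co-occurrence score below is a simplified stand-in, and the three sentence pairs stand in for its 2.2 million:

```python
# Learn rough word translations from sentence pairs whose translations are
# known -- the counting idea behind statistical MT, radically simplified.
from collections import Counter, defaultdict

pairs = [
    ("the house is red", "la maison est rouge"),
    ("the house is big", "la maison est grande"),
    ("the car is red",   "la voiture est rouge"),
]

en_count, fr_count = Counter(), Counter()
cooc = defaultdict(Counter)
for en, fr in pairs:
    en_words, fr_words = en.split(), fr.split()
    en_count.update(en_words)
    fr_count.update(fr_words)
    for e in en_words:
        for f in fr_words:
            cooc[e][f] += 1  # how often e and f appear in aligned sentences

def translate_word(e: str) -> str:
    # Dice-style association score keeps common function words from dominating.
    return max(cooc[e], key=lambda f: 2 * cooc[e][f] / (en_count[e] + fr_count[f]))

print(translate_word("house"))  # -> maison
print(translate_word("red"))    # -> rouge
```

No grammar, no semantics, no understanding: just calibration from examples, which is exactly the trade the passage describes.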
Javier E

Think Less, Think Better - The New York Times - 1 views

  • the capacity for original and creative thinking is markedly stymied by stray thoughts, obsessive ruminations and other forms of “mental load.”
  • Many psychologists assume that the mind, left to its own devices, is inclined to follow a well-worn path of familiar associations. But our findings suggest that innovative thinking, not routine ideation, is our default cognitive mode when our minds are clear.
  • We found that a high mental load consistently diminished the originality and creativity of the response: Participants with seven digits to recall resorted to the most statistically common responses (e.g., white/black), whereas participants with two digits gave less typical, more varied pairings (e.g., white/cloud).
  • ...8 more annotations...
  • In another experiment, we found that longer response times were correlated with less diverse responses, ruling out the possibility that participants with low mental loads simply took more time to generate an interesting response.
  • it seems that with a high mental load, you need more time to generate even a conventional thought. These experiments suggest that the mind’s natural tendency is to explore and to favor novelty, but when occupied it looks for the most familiar and inevitably least interesting solution.
  • In general, there is a tension in our brains between exploration and exploitation. When we are exploratory, we attend to things with a wide scope, curious and desiring to learn. Other times, we rely on, or “exploit,” what we already know, leaning on our expectations, trusting the comfort of a predictable environment
  • Much of our lives are spent somewhere between those extremes. There are functional benefits to both modes: If we were not exploratory, we would never have ventured out of the caves; if we did not exploit the certainty of the familiar, we would have taken too many risks and gone extinct. But there needs to be a healthy balance
  • All these loads can consume mental capacity, leading to dull thought and anhedonia — a flattened ability to experience pleasure.
  • ancient meditative practice helps free the mind to have richer experiences of the present
  • your life leaves too much room for your mind to wander. As a result, only a small fraction of your mental capacity remains engaged in what is before it, and mind-wandering and ruminations become a tax on the quality of your life
  • Honing an ability to unburden the load on your mind, be it through meditation or some other practice, can bring with it a wonderfully magnified experience of the world — and, as our study suggests, of your own mind.
Javier E

The Mind of a Flip-Flopper - NYTimes.com - 0 views

  • Moral attitudes are especially difficult to change, Haidt said, because the emotions attached to those preferences largely define who we are. “Certain beliefs are so important for a society or group that they become part of how you prove your identity,” he said. “It’s as though we circle around these ideas. It’s how we become one.”
  • We tend to side with people who share our identity — even when the facts disagree — and calling someone a flip-flopper is a way of calling them morally suspect
  • People change their minds all the time, even about very important matters. It’s just hard to do when the stakes are high. That’s why marshaling data and making rational arguments won’t work. Whether you’re changing your own mind or someone else’s, the key is emotional, persuasive storytelling.
  • ...6 more annotations...
  • Stories are more powerful than data, Wilson says, because they allow individuals to identify emotionally with ideas and people they might otherwise see as “outsiders.”
  • Once you care about a character, Wilson says, you can find a way to fit them into your identity.
  • Our identities, of course, are also stories we tell ourselves about ourselves. In some cases — if we want to think of ourselves as thoughtful and open-minded — we can adopt identities that actually encourage flip-flopping.
  • Simply having to articulate why you believe what you do can also end up changing your attitude
  • Even when we do change our minds, we often convince ourselves that we haven’t.
  • understanding the power of stories could go a long ways toward bridging gaps that only get bigger when we expect those who disagree to rationally accept data and evidence. “We fight it out by throwing arguments at each other and are upset when they have no effect,” Haidt says. “It makes us accuse our opponents of bad faith and ulterior motives. But the truth is that our minds just aren’t set up to be changed by mere evidence and argument presented by a ‘stranger.’ ”
Javier E

Book Review: The Moral Lives of Animals - WSJ.com - 0 views

  • have elucidated very real differences between human and nonhuman minds in the realm of conceptual reasoning, particularly with respect to what has been termed "theory of mind." This is the uniquely human ability to have thoughts about thoughts and to perceive that other minds exist and that they can hold ideas and beliefs different from one's own. While human and animal minds share a broadly similar ability to learn from experience, formulate intentions and store memories, careful experiments have repeatedly come up empty when attempting to establish the existence of a theory of mind in nonhumans.
  • A "theory of mind" is what makes it even possible to formulate abstract notions, to imagine the future, to try out ideas before acting upon them, to reflect about our own conduct and to see things from another's viewpoint. Charles Darwin observed that such a capacity is indeed the sine qua non of moral thought: "A moral being is one who is capable of reflecting on his past actions and their motives—of approving some and disapproving of others," he wrote in "The Descent of Man."
Javier E

Raymond Tallis Takes Out the 'Neurotrash' - The Chronicle Review - The Chronicle of Hig... - 0 views

  • Tallis informs 60 people gathered in a Kent lecture hall that his talk will demolish two "pillars of unwisdom." The first, "neuromania," is the notion that to understand people you must peer into the "intracranial darkness" of their skulls with brain-scanning technology. The second, "Darwinitis," is the idea that Charles Darwin's evolutionary theory can explain not just the origin of the human species—a claim Tallis enthusiastically accepts—but also the nature of human behavior and institutions.
  • Aping Mankind argues that neuroscientific approaches to things like love, wisdom, and beauty are flawed because you can't reduce the mind to brain activity alone.
  • Stephen Cave, a Berlin-based philosopher and writer who has called Aping Mankind "an important work," points out that most philosophers and scientists do in fact believe "that mind is just the product of certain brain activity, even if we do not currently know quite how." Tallis "does both the reader and these thinkers an injustice" by declaring that view "obviously" wrong,
  • ...5 more annotations...
  • Geraint Rees, director of University College London's Institute of Cognitive Neuroscience, complains that reading Tallis is "a bit like trying to nail jelly to the wall." He "rubbishes every current theory of the relationship between mind and brain, whether philosophical or neuroscientific," while offering "little or no alternative,"
  • cultural memes. The Darwinesque concept originates in Dawkins's 1976 book, The Selfish Gene. Memes are analogous to genes, Dennett has said, "replicating units of culture" that spread from mind to mind like a virus. Religion, chess, songs, clothing, tolerance for free speech—all have been described as memes. Tallis considers it absurd to talk of a noun-phrase like "tolerance for free speech" as a discrete entity. But Dennett argues that Tallis's objections are based on "a simplistic idea of what one might mean by a unit." Memes aren't units? Well, in that spirit, says Dennett, organisms aren't units of biology, nor are species—they're too complex, with too much variation. "He's got to allow theory to talk about entities which are not simple building blocks," Dennett says.
  • How is it that he perceives the glass of water on the table? How is it that he feels a sense of self over time? How is it that he can remember a patient he saw in 1973, and then cast his mind forward to his impending visit to the zoo? There are serious problems with trying to reduce such things to impulses in the brain, he argues. We can explain "how the light gets in," he says, but not "how the gaze looks out." And isn't it astonishing, he adds, that much neural activity seems to have no link to consciousness? Instead, it's associated with things like controlling automatic movements and regulating blood pressure. Sure, we need the brain for consciousness: "Chop my head off, and my IQ descends." But it's not the whole story. There is more to perceptions, memories, and beliefs than neural impulses can explain. The human sphere encompasses a "community of minds," Tallis has written, "woven out of a trillion cognitive handshakes of shared attention, within which our freedom operates and our narrated lives are led." Those views on perception and memory anchor his attack on "neurobollocks." Because if you can't get the basics right, he says, then it's premature to look to neuroscience for clues to complex things like love.
  • Yes, many unanswered questions persist. But these are early days, and neuroscience remains immature, says Churchland, a professor emerita of philosophy at University of California at San Diego and author of the subfield-spawning 1986 book Neurophilosophy. In the 19th century, she points out, people thought we'd never understand light. "Well, by gosh," she says, "by the time the 20th century rolls around, it turns out that light is electromagnetic radiation. ... So the fact that at a certain point in time something seems like a smooth-walled mystery that we can't get a grip on, doesn't tell us anything about whether some real smart graduate student is going to sort it out in the next 10 years or not."
  • Dennett claims he's got much of it sorted out already. He wrote a landmark book on the topic in 1991, Consciousness Explained. (The title "should have landed him in court, charged with breach of the Trade Descriptions Act," writes Tallis.) Dennett uses the vocabulary of computer science to explain how consciousness emerges from the huge volume of things happening in the brain all at once. We're not aware of everything, he tells me, only a "limited window." He describes that stream of consciousness as "the activities of a virtual machine which is running on the parallel hardware of the brain." "You—the fruits of all your experience, not just your genetic background, but everything you've learned and done and all your memories—what ties those all together? What makes a self?" Dennett asks. "The answer is, and has to be, the self is like a software program that organizes the activities of the brain."
Javier E

How Meditation Changes the Brain and Body - The New York Times - 0 views

  • a study published in Biological Psychiatry brings scientific thoroughness to mindfulness meditation and for the first time shows that, unlike a placebo, it can change the brains of ordinary people and potentially improve their health.
  • One difficulty of investigating meditation has been the placebo problem. In rigorous studies, some participants receive treatment while others get a placebo: They believe they are getting the same treatment when they are not. But people can usually tell if they are meditating. Dr. Creswell, working with scientists from a number of other universities, managed to fake mindfulness.
  • Half the subjects were then taught formal mindfulness meditation at a residential retreat center; the rest completed a kind of sham mindfulness meditation that was focused on relaxation and distracting oneself from worries and stress.
  • ...3 more annotations...
  • Dr. Creswell and his colleagues believe that the changes in the brain contributed to the subsequent reduction in inflammation, although precisely how remains unknown.
  • follow-up brain scans showed differences in only those who underwent mindfulness meditation. There was more activity, or communication, among the portions of their brains that process stress-related reactions and other areas related to focus and calm. Four months later, those who had practiced mindfulness showed much lower levels in their blood of a marker of unhealthy inflammation than the relaxation group, even though few were still meditating.
  • When it comes to how much mindfulness is needed to improve health, Dr. Creswell says, “we still have no idea about the ideal dose.”
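The design described above is a two-arm randomized comparison: real mindfulness training against a sham-relaxation control, scored months later on a blood marker of inflammation. A minimal sketch of that comparison, with entirely invented marker values and group sizes, follows; only the direction of the difference matches the reported finding.

```python
import random
import statistics

# Toy two-arm sketch: mindfulness vs. sham relaxation, compared on an
# inflammation marker months later. All numbers are invented; only the
# direction (lower marker in the mindfulness arm) mirrors the finding.
random.seed(2)

ARM_MEANS = {"mindfulness": 1.0, "sham relaxation": 1.4}  # assumed levels

levels = {arm: [random.gauss(mu, 0.3) for _ in range(18)]
          for arm, mu in ARM_MEANS.items()}
for arm, xs in levels.items():
    print(f"{arm:>15}: mean marker {statistics.mean(xs):.2f} (arbitrary units)")
```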
Javier E

ROUGH TYPE | Nicholas Carr's blog - 0 views

  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • ...39 more annotations...
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Social skills and relationships seem to suffer as well.
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
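The phone-location experiment quoted above has a design that is easy to state in code: randomize subjects across three phone positions, give each a cognitive test, and compare group means. In the hedged simulation below, the penalties and the 100-point score scale are assumptions invented for the sketch; only the direction of the effect (closer phone, lower score) matches the reported result.

```python
import random
import statistics

# Toy simulation of the three-condition phone-location design. Effect
# sizes and scale are invented; only the ordering of the conditions
# follows the reported result.
random.seed(0)

PENALTY = {"on desk": -6.0, "in bag": -3.0, "other room": 0.0}  # assumed

def working_memory_score(condition):
    return random.gauss(100 + PENALTY[condition], 5)  # arbitrary scale

groups = {c: [working_memory_score(c) for _ in range(173)] for c in PENALTY}
for condition, scores in groups.items():
    print(f"{condition:>10}: mean score {statistics.mean(scores):.1f}")
```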
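Likewise, the save-versus-erase experiment behind the “Google effect” reduces to a two-condition recall comparison over the forty statements. In this sketch the recall probabilities are invented, not the study’s data; only the direction of the gap mirrors the published finding.

```python
import random

# Toy simulation of the "Google effect" design: subjects told the
# computer saved their typing recall less than subjects told it was
# erased. Probabilities below are assumptions, not the study's numbers.
random.seed(1)

N_STATEMENTS = 40
P_RECALL = {"told saved": 0.19, "told erased": 0.31}  # assumed

def recalled(condition):
    return sum(random.random() < P_RECALL[condition] for _ in range(N_STATEMENTS))

for condition in P_RECALL:
    trials = [recalled(condition) for _ in range(100)]
    print(f"{condition}: ~{sum(trials) / len(trials):.1f} of {N_STATEMENTS} statements recalled")
```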
Javier E

Thieves of experience: On the rise of surveillance capitalism - 1 views

  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • ...72 more annotations...
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience.To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
  • Google’s once-patient investors grew restive, demanding that the founders figure out a way to make money, preferably lots of it.
  • Under pressure, Page and Brin authorized the launch of an auction system for selling advertisements tied to search queries. The system was designed so that the company would get paid by an advertiser only when a user clicked on an ad. This feature gave Google a huge financial incentive to make accurate predictions about how users would respond to ads and other online content. Even tiny increases in click rates would bring big gains in income. And so the company began deploying its stores of behavioral data not for the benefit of users but to aid advertisers — and to juice its own profits. Surveillance capitalism had arrived.
  • Google’s business now hinged on what Zuboff calls “the extraction imperative.” To improve its predictions, it had to mine as much information as possible from web users. It aggressively expanded its online services to widen the scope of its surveillance.
  • Through Gmail, it secured access to the contents of people’s emails and address books. Through Google Maps, it gained a bead on people’s whereabouts and movements. Through Google Calendar, it learned what people were doing at different moments during the day and whom they were doing it with. Through Google News, it got a readout of people’s interests and political leanings. Through Google Shopping, it opened a window onto people’s wish lists.
  • The company gave all these services away for free to ensure they’d be used by as many people as possible. It knew the money lay in the data.
  • the organization grew insular and secretive. Seeking to keep the true nature of its work from the public, it adopted what its CEO at the time, Eric Schmidt, called a “hiding strategy” — a kind of corporate omerta backed up by stringent nondisclosure agreements.
  • Page and Brin further shielded themselves from outside oversight by establishing a stock structure that guaranteed their power could never be challenged, neither by investors nor by directors.
  • What’s most remarkable about the birth of surveillance capitalism is the speed and audacity with which Google overturned social conventions and norms about data and privacy. Without permission, without compensation, and with little in the way of resistance, the company seized and declared ownership over everyone’s information
  • The companies that followed Google presumed that they too had an unfettered right to collect, parse, and sell personal data in pretty much any way they pleased. In the smart homes being built today, it’s understood that any and all data will be beamed up to corporate clouds.
  • Google conducted its great data heist under the cover of novelty. The web was an exciting frontier — something new in the world — and few people understood or cared about what they were revealing as they searched and surfed. In those innocent days, data was there for the taking, and Google took it
  • Google also benefited from decisions made by lawmakers, regulators, and judges — decisions that granted internet companies free use of a vast taxpayer-funded communication infrastructure, relieved them of legal and ethical responsibility for the information and messages they distributed, and gave them carte blanche to collect and exploit user data.
  • Consider the terms-of-service agreements that govern the division of rights and the delegation of ownership online. Non-negotiable, subject to emendation and extension at the company’s whim, and requiring only a casual click to bind the user, TOS agreements are parodies of contracts, yet they have been granted legal legitimacy by the courts.
  • Law professors, writes Zuboff, “call these ‘contracts of adhesion’ because they impose take-it-or-leave-it conditions on users that stick to them whether they like it or not.” Fundamentally undemocratic, the ubiquitous agreements helped Google and other firms commandeer personal data as if by fiat.
  • In the choices we make as consumers and private citizens, we have always traded some of our autonomy to gain other rewards. Many people, it seems clear, experience surveillance capitalism less as a prison, where their agency is restricted in a noxious way, than as an all-inclusive resort, where their agency is restricted in a pleasing way.
  • Zuboff makes a convincing case that this is a short-sighted and dangerous view — that the bargain we’ve struck with the internet giants is a Faustian one
  • but her case would have been stronger still had she more fully addressed the benefits side of the ledger.
  • there’s a piece missing. While Zuboff’s assessment of the costs that people incur under surveillance capitalism is exhaustive, she largely ignores the benefits people receive in return — convenience, customization, savings, entertainment, social connection, and so on
  • What the industries of the future will seek to manufacture is the self.
  • Behavior modification is the thread that ties today’s search engines, social networks, and smartphone trackers to tomorrow’s facial-recognition systems, emotion-detection sensors, and artificial-intelligence bots.
  • All of Facebook’s information wrangling and algorithmic fine-tuning, she writes, “is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later.”
  • “The goal of everything we do is to change people’s actual behavior at scale,” a top Silicon Valley data scientist told her in an interview. “We can test how actionable our cues are for them and how profitable certain behaviors are for us.”
  • This goal, she suggests, is not limited to Facebook. It is coming to guide much of the economy, as financial and social power shifts to the surveillance capitalists
  • Combining rich information on individuals’ behavioral triggers with the ability to deliver precisely tailored and timed messages turns out to be a recipe for behavior modification on an unprecedented scale.
  • it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its “social graph” to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react.
  • spying on the populace is not the end game. The real prize lies in figuring out ways to use the data to shape how people think and act. “The best way to predict the future is to invent it,” the computer scientist Alan Kay once observed. And the best way to predict behavior is to script it.
  • competition for personal data intensified. It was no longer enough to monitor people online; making better predictions required that surveillance be extended into homes, stores, schools, workplaces, and the public squares of cities and towns. Much of the recent innovation in the tech industry has entailed the creation of products and services designed to vacuum up data from every corner of our lives
  • “The typical complaint is that privacy is eroded, but that is misleading,” Zuboff writes. “In the larger societal pattern, privacy is not eroded but redistributed . . . . Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism.” The transfer of decision rights is also a transfer of autonomy and agency, from the citizen to the corporation.
  • What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not
  • Other possible ways of organizing online markets, such as through paid subscriptions for apps and services, never even got a chance to be tested.
  • Online surveillance came to be viewed as normal and even necessary by politicians, government bureaucrats, and the general public
  • Google and other Silicon Valley companies benefited directly from the government’s new stress on digital surveillance. They earned millions through contracts to share their data collection and analysis techniques with the National Security Agency.
  • As much as the dot-com crash, the horrors of 9/11 set the stage for the rise of surveillance capitalism. Zuboff notes that, in 2000, members of the Federal Trade Commission, frustrated by internet companies’ lack of progress in adopting privacy protections, began formulating legislation to secure people’s control over their online information and severely restrict the companies’ ability to collect and store it. It seemed obvious to the regulators that ownership of personal data should by default lie in the hands of private citizens, not corporations.
  • The 9/11 attacks changed the calculus. The centralized collection and analysis of online data, on a vast scale, came to be seen as essential to national security. “The privacy provisions debated just months earlier vanished from the conversation more or less overnight,”
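One mechanism in the annotations above is concrete enough to sketch: under pay-per-click pricing, expected revenue per impression is bid times predicted click-through rate, so even small gains in prediction accuracy translate directly into income. The ranking rule below is a generic textbook heuristic with invented figures, not Google’s actual auction.

```python
# Generic expected-revenue ranking under pay-per-click pricing:
# expected revenue per impression = bid * predicted CTR. Figures are
# invented; this is not Google's actual auction logic.
ads = [
    {"advertiser": "A", "bid": 2.50, "predicted_ctr": 0.010},
    {"advertiser": "B", "bid": 1.00, "predicted_ctr": 0.040},
    {"advertiser": "C", "bid": 4.00, "predicted_ctr": 0.005},
]

for ad in sorted(ads, key=lambda a: a["bid"] * a["predicted_ctr"], reverse=True):
    ev = ad["bid"] * ad["predicted_ctr"]
    print(f"{ad['advertiser']}: expected revenue per impression = ${ev:.4f}")
```

Note that the lowest bidder wins here because its predicted click rate is highest; prediction quality, not bid alone, drives the ranking, which is why behavioral data became the business asset.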