TOK Friends / Group items tagged "description"

katherineharron

FBI arrests spotlight lessons learned after Charlottesville (opinion) - CNN - 0 views

  • On Thursday, the FBI arrested three men, Patrik J. Mathews, 27, Brian M. Lemley Jr., 33, and William G. Bilbrough IV, 19, on firearms charges, and they had plans, an official said, to attend a Virginia pro-gun rally. This followed Virginia Gov. Ralph Northam's declaration of a temporary state of emergency after authorities learned that extremists hoped to use the anti-gun control rally planned next Monday -- Martin Luther King, Jr. Day -- to incite a violent clash.
  • These arrests add to mounting evidence that a decades-old and violent white-power movement is alive and well, perhaps even gaining strength. White power is a social movement that has united neo-Nazis, Klansmen, skinheads, and militiamen around a shared fear of racial annihilation and cultural change. Since 1983, when movement leaders declared war on the federal government, members of such groups have worked together to bring about a race war.
  • ...4 more annotations...
  • Silver linings aside, it will take many, many more instances of coordinated response to stop a movement generations in the making. In more than a decade of studying the earlier white power movement, I have become familiar with the themes of underground activity that today's activists are clearly drawing from the earlier movement. In the absence of decisive action across multiple institutions, a rich record of criminal activity and violence will continue to provide these activists with a playbook for further chaos.
  • The Base, furthermore, is what experts call "accelerationist," meaning that its members hope to provoke what they see as an inevitable race war. They have conducted paramilitary training in the Pacific Northwest. Both of these strategies date back to the 1980s, when the Order trained in those forests with hopes of provoking the same race war.
  • One of the men arrested Thursday was formerly a reservist in the Canadian Army, where he received training in explosives and demolition, according to the New York Times. This kind of preparation, too, is common among extremists like these. To take just a few representative examples, in the 1960s, Bobby Frank Cherry, a former Marine trained in demolition, helped fellow members of the United Klans of America to bomb the 16th Street Birmingham Baptist Church, killing four black girls.
  • This news out of Virginia shows that there is a real social benefit when people direct their attention to these events -- and sustain the public conversation about the presence of a renewed white-power movement and what it means for our society.
Javier E

There Is No Theory of Everything - The New York Times - 3 views

  • offered by science and the kinds of humanistic description we find, say, in the novels of Dickens or Dostoevsky, or in the sociological writings of Erving Goffman and David Riesman. His quest was to try to clarify the occasions when a scientific explanation was appropriate and when it was not, and when we need instead a humanistic remark.
  • His conviction was that our confusions about science and the humanities had wide-ranging and malign societal consequences.
  • However efficacious the blue pill might be, in this instance the doctor’s causal diagnosis is the wrong one. What is required is for you to be able to talk, to feel that someone understands your problems and perhaps can offer some insight or even suggestions on how you might move forward in your life. This, one imagines, is why people go into therapy.
  • ...13 more annotations...
  • One huge problem with scientism is that it invites, as an almost allergic reaction, the total rejection of science. As we know to our cost, we witness this every day with climate change deniers, flat-earthers and religious fundamentalists
  • Frank’s point is that our society is deeply confused about when a blue pill is required and when it is not, or when we need a causal explanation and when we need a further description, clarification or elucidation. We tend to get muddled and imagine that one kind of explanation (usually the causal one) is appropriate on all occasions when it is not.
  • What is in play here is the classical distinction made by Max Weber between explanation and clarification, between causal or causal-sounding hypotheses and interpretation. Weber’s idea is that natural phenomena require causal explanation, of the kind given by physics, say, whereas social phenomena require elucidation — richer, more expressive descriptions.
  • In Frank’s view, one major task of philosophy is to help us get clear on this distinction and to provide the right response at the right time. This, of course, requires judgment, which is no easy thing to teach.
  • Frank’s point, which is still hugely important, is that there is no theory of everything, nor should there be. There is a gap between nature and society. The mistake, for which scientism is the name, is the belief that this gap can or should be filled.
  • On a ferry you want a blue pill that is going to alleviate the symptoms of seasickness and make you feel better.
  • in order to confront the challenge of obscurantism, we do not simply need to run into the arms of scientism. What is needed is a clearer overview of the occasions when a scientific remark is appropriate and when we need something else, the kind of elucidation we find in stories, poetry or indeed when we watch a movie
  • People often wonder why there appears to be no progress in philosophy, unlike in natural science, and why it is that after some three millenniums of philosophical activity no dramatic changes seem to have been made to the questions philosophers ask. The reason is that people keep asking the same questions and remain perplexed by the same difficulties
  • Wittgenstein puts the point rather directly: “Philosophy hasn’t made any progress? If somebody scratches the spot where he has an itch, do we have to see some progress?”
  • Philosophy scratches at the various itches we have, not in order that we might find some cure for what ails us, but in order to scratch in the right place and begin to understand why we engage in such apparently irritating activity.
  • This is one way of approaching the question of life’s meaning. Human beings have been asking the same kinds of questions for millenniums and this is not an error. It testifies to the fact that human beings are rightly perplexed by their lives. The mistake is to believe that there is an answer to the question of life’s meaning
  • The point, then, is not to seek an answer to the meaning of life, but to continue to ask the question.
  • We don’t need an answer to the question of life’s meaning, just as we don’t need a theory of everything. What we need are multifarious descriptions of many things, further descriptions of phenomena that change the aspect under which they are seen, that light them up and let us see them anew
sissij

Scientists Are Attempting to Unlock the Secret Potential of the Human Brain | Big Think - 1 views

  • Sometimes, it occurs when a person suffers a nearly fatal accident or life-threatening situation. In others, they are born with a developmental disorder, such as autism.
  • This is known as savant syndrome. Of course, it’s exceedingly rare.
  • Upon entering the bathroom and turning on the faucet, he saw “lines emanating out perpendicularly from the flow.” He couldn’t believe it.
  • ...7 more annotations...
  • “At first, I was startled, and worried for myself, but it was so beautiful that I just stood in my slippers and stared.” It was like, “watching a slow-motion film.”
  • Before, he never rose beyond pre-Algebra.
  • savant syndrome
  • Padgett is one of the few people on earth who can draw fractals accurately, freehand.
  • There are two ways for it to occur, either through an injury that causes brain damage or through a disorder, such as autism.
  • It’s estimated that around 50% of those with savant syndrome are autistic.
  • “The most common ability to emerge is art, followed by music,” Treffert told The Guardian. “But I’ve had cases where brain damage makes people suddenly interested in dance, or in Pinball Wizard.”
  • This article reminds me of the scientific myth that people use only 10% of their brains. That description is not accurate: when we use our brains, every part is active; there is no vacant region. The brain simply has more potential than it seems, which is why the 10% figure is misleading. I found savant syndrome very interesting; this hidden talent of the human brain is amazing. So is it possible that our cognitive biases exist because some of the brain's potential has not been activated? --Sissi (4/17/2017)
katherineharron

What is dementia? - CNN Video - 0 views

  • 47.5 million people around the world suffer from dementia, which leads to a loss of memory and intellectual abilities. Watch this video to learn more. Source: CNN
Javier E

Opinion | Noam Chomsky: The False Promise of ChatGPT - The New York Times - 0 views

  • we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge.
  • OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney are marvels of machine learning. Roughly speaking, they take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs — such as seemingly humanlike language and thought [a toy sketch of this pattern-matching idea follows these annotations]
  • if machine learning programs like ChatGPT continue to dominate the field of A.I
  • ...22 more annotations...
  • we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.
  • It is at once comic and tragic, as Borges might have noted, that so much money and attention should be concentrated on so little a thing — something so trivial when contrasted with the human mind, which by dint of language, in the words of Wilhelm von Humboldt, can make “infinite use of finite means,” creating ideas and theories with universal reach.
  • The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question
  • the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations
  • such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case
  • Those are the ingredients of explanation, the mark of true intelligence.
  • Here’s an example. Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. A prediction might have been the statement “The apple will fall if I open my hand.”
  • an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like “Any such object would fall,” plus the additional clause “because of the force of gravity” or “because of the curvature of space-time” or whatever. That is a causal explanation: “The apple would not have fallen but for the force of gravity.” That is thinking.
  • The crux of machine learning is description and prediction; it does not posit any causal mechanisms or physical laws
  • any human-style explanation is not necessarily correct; we are fallible. But this is part of what it means to think: To be right, it must be possible to be wrong. Intelligence consists not only of creative conjectures but also of creative criticism. Human-style thought is based on possible explanations and error correction, a process that gradually limits what possibilities can be rationally considered.
  • ChatGPT and similar programs are, by design, unlimited in what they can “learn” (which is to say, memorize); they are incapable of distinguishing the possible from the impossible.
  • Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.
  • For this reason, the predictions of machine learning systems will always be superficial and dubious.
  • some machine learning enthusiasts seem to be proud that their creations can generate correct “scientific” predictions (say, about the motion of physical bodies) without making use of explanations (involving, say, Newton’s laws of motion and universal gravitation). But this kind of prediction, even when successful, is pseudoscience
  • While scientists certainly seek theories that have a high degree of empirical corroboration, as the philosopher Karl Popper noted, “we do not seek highly probable theories but explanations; that is to say, powerful and highly improbable theories.”
  • The theory that apples fall to earth because mass bends space-time (Einstein’s view) is highly improbable, but it actually tells you why they fall. True intelligence is demonstrated in the ability to think and express improbable but insightful things.
  • This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought and ought not to be (and of course subjecting those principles themselves to creative criticism)
  • True intelligence is also capable of moral thinking
  • To be useful, ChatGPT must be empowered to generate novel-looking output; to be acceptable to most of its users, it must steer clear of morally objectionable content
  • In 2016, for example, Microsoft’s Tay chatbot (a precursor to ChatGPT) flooded the internet with misogynistic and racist content, having been polluted by online trolls who filled it with offensive training data. How to solve the problem in the future? In the absence of a capacity to reason from moral principles, ChatGPT was crudely restricted by its programmers from contributing anything novel to controversial — that is, important — discussions. It sacrificed creativity for a kind of amorality.
  • Here, ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a “just following orders” defense, shifting responsibility to its creators.
  • In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.
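
A toy illustration of the "statistically probable outputs" idea flagged in the annotations above: a minimal bigram model that always picks the most frequent next word from its training text. This is a deliberate caricature for intuition only, not how ChatGPT is actually built; the training sentence and all variable names are invented for the sketch.

```python
# Toy bigram model: choose each next word by how often it followed the
# current word in the training text. Real systems are neural networks
# trained on terabytes, but the "statistically probable output" idea is
# the same: echo the patterns in the data.
from collections import Counter, defaultdict

text = "the apple falls because the apple is heavy and the apple falls".split()

bigrams = defaultdict(Counter)
for w1, w2 in zip(text, text[1:]):
    bigrams[w1][w2] += 1              # count which word follows which

word, output = "the", ["the"]
for _ in range(5):
    word = bigrams[word].most_common(1)[0][0]   # most probable continuation
    output.append(word)

print(" ".join(output))               # -> "the apple falls because the apple"
```
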
Javier E

At the Existentialist Café: Freedom, Being, and Apricot Cocktails with Jean-P... - 0 views

  • The phenomenologists’ leading thinker, Edmund Husserl, provided a rallying cry, ‘To the things themselves!’ It meant: don’t waste time on the interpretations that accrue upon things, and especially don’t waste time wondering whether the things are real. Just look at this that’s presenting itself to you, whatever this may be, and describe it as precisely as possible.
  • You might think you have defined me by some label, but you are wrong, for I am always a work in progress. I create myself constantly through action, and this is so fundamental to my human condition that, for Sartre, it is the human condition, from the moment of first consciousness to the moment when death wipes it out. I am my own freedom: no more, no less.
  • Sartre wrote like a novelist — not surprisingly, since he was one. In his novels, short stories and plays as well as in his philosophical treatises, he wrote about the physical sensations of the world and the structures and moods of human life. Above all, he wrote about one big subject: what it meant to be free. Freedom, for him, lay at the heart of all human experience, and this set humans apart from all other kinds of object.
  • ...97 more annotations...
  • Sartre listened to his problem and said simply, ‘You are free, therefore choose — that is to say, invent.’ No signs are vouchsafed in this world, he said. None of the old authorities can relieve you of the burden of freedom. You can weigh up moral or practical considerations as carefully as you like, but ultimately you must take the plunge and do something, and it’s up to you what that something is.
  • Even if the situation is unbearable — perhaps you are facing execution, or sitting in a Gestapo prison, or about to fall off a cliff — you are still free to decide what to make of it in mind and deed. Starting from where you are now, you choose. And in choosing, you also choose who you will be.
  • The war had made people realise that they and their fellow humans were capable of departing entirely from civilised norms; no wonder the idea of a fixed human nature seemed questionable.
  • If this sounds difficult and unnerving, it’s because it is. Sartre does not deny that the need to keep making decisions brings constant anxiety. He heightens this anxiety by pointing out that what you do really matters. You should make your choices as though you were choosing on behalf of the whole of humanity, taking the entire burden of responsibility for how the human race behaves. If you avoid this responsibility by fooling yourself that you are the victim of circumstance or of someone else’s bad advice, you are failing to meet the demands of human life and choosing a fake existence, cut off from your own ‘authenticity’.
  • Along with the terrifying side of this comes a great promise: Sartre’s existentialism implies that it is possible to be authentic and free, as long as you keep up the effort.
  • almost all agreed that it was, as an article in Les nouvelles littéraires phrased it, a ‘sickening mixture of philosophic pretentiousness, equivocal dreams, physiological technicalities, morbid tastes and hesitant eroticism … an introspective embryo that one would take distinct pleasure in crushing’.
  • he offered a philosophy designed for a species that had just scared the hell out of itself, but that finally felt ready to grow up and take responsibility.
  • In this rebellious world, just as with the Parisian bohemians and Dadaists in earlier generations, everything that was dangerous and provocative was good, and everything that was nice or bourgeois was bad.
  • Such interweaving of ideas and life had a long pedigree, although the existentialists gave it a new twist. Stoic and Epicurean thinkers in the classical world had practised philosophy as a means of living well, rather than of seeking knowledge or wisdom for their own sake. By reflecting on life’s vagaries in philosophical ways, they believed they could become more resilient, more able to rise above circumstances, and better equipped to manage grief, fear, anger, disappointment or anxiety.
  • In the tradition they passed on, philosophy is neither a pure intellectual pursuit nor a collection of cheap self-help tricks, but a discipline for flourishing and living a fully human, responsible life.
  • For Kierkegaard, Descartes had things back to front. In his own view, human existence comes first: it is the starting point for everything we do, not the result of a logical deduction. My existence is active: I live it and choose it, and this precedes any statement I can make about myself.
  • Studying our own moral genealogy cannot help us to escape or transcend ourselves. But it can enable us to see our illusions more clearly and lead a more vital, assertive existence.
  • What was needed, he felt, was not high moral or theological ideals, but a deeply critical form of cultural history or ‘genealogy’ that would uncover the reasons why we humans are as we are, and how we came to be that way. For him, all philosophy could even be redefined as a form of psychology, or history.
  • For those oppressed on grounds of race or class, or for those fighting against colonialism, existentialism offered a change of perspective — literally, as Sartre proposed that all situations be judged according to how they appeared in the eyes of those most oppressed, or those whose suffering was greatest.
  • She observed that we need not expect moral philosophers to ‘live by’ their ideas in a simplistic way, as if they were following a set of rules. But we can expect them to show how their ideas are lived in. We should be able to look in through the windows of a philosophy, as it were, and see how people occupy it, how they move about and how they conduct themselves.
  • the existentialists inhabited their historical and personal world, as they inhabited their ideas. This notion of ‘inhabited philosophy’ is one I’ve borrowed from the English philosopher and novelist Iris Murdoch, who wrote the first full-length book on Sartre and was an early adopter of existentialism
  • What is existentialism anyway?
  • An existentialist who is also phenomenological provides no easy rules for dealing with this condition, but instead concentrates on describing lived experience as it presents itself. — By describing experience well, he or she hopes to understand this existence and awaken us to ways of living more authentic lives.
  • Existentialists concern themselves with individual, concrete human existence. — They consider human existence different from the kind of being other things have. Other entities are what they are, but as a human I am whatever I choose to make of myself at every moment. I am free — — and therefore I’m responsible for everything I do, a dizzying fact which causes — an anxiety inseparable from human existence itself.
  • On the other hand, I am only free within situations, which can include factors in my own biology and psychology as well as physical, historical and social variables of the world into which I have been thrown. — Despite the limitations, I always want more: I am passionately involved in personal projects of all kinds. — Human existence is thus ambiguous: at once boxed in by borders and yet transcendent and exhilarating. —
  • The first part of this is straightforward: a phenomenologist’s job is to describe. This is the activity that Husserl kept reminding his students to do. It meant stripping away distractions, habits, clichés of thought, presumptions and received ideas, in order to return our attention to what he called the ‘things themselves’. We must fix our beady gaze on them and capture them exactly as they appear, rather than as we think they are supposed to be.
  • Husserl therefore says that, to phenomenologically describe a cup of coffee, I should set aside both the abstract suppositions and any intrusive emotional associations. Then I can concentrate on the dark, fragrant, rich phenomenon in front of me now. This ‘setting aside’ or ‘bracketing out’ of speculative add-ons Husserl called epoché — a term borrowed from the ancient Sceptics,
  • The point about rigour is crucial; it brings us back to the first half of the command to describe phenomena. A phenomenologist cannot get away with listening to a piece of music and saying, ‘How lovely!’ He or she must ask: is it plaintive? is it dignified? is it colossal and sublime? The point is to keep coming back to the ‘things themselves’ — phenomena stripped of their conceptual baggage — so as to bail out weak or extraneous material and get to the heart of the experience.
  • Husserlian ‘bracketing out’ or epoché allows the phenomenologist to temporarily ignore the question ‘But is it real?’, in order to ask how a person experiences his or her world. Phenomenology gives a formal mode of access to human experience. It lets philosophers talk about life more or less as non-philosophers do, while still being able to tell themselves they are being methodical and rigorous.
  • Besides claiming to transform the way we think about reality, phenomenologists promised to change how we think about ourselves. They believed that we should not try to find out what the human mind is, as if it were some kind of substance. Instead, we should consider what it does, and how it grasps its experiences.
  • For Brentano, this reaching towards objects is what our minds do all the time. Our thoughts are invariably of or about something, he wrote: in love, something is loved, in hatred, something is hated, in judgement, something is affirmed or denied. Even when I imagine an object that isn’t there, my mental structure is still one of ‘about-ness’ or ‘of-ness’.
  • Except in deepest sleep, my mind is always engaged in this aboutness: it has ‘intentionality’. Having taken the germ of this from Brentano, Husserl made it central to his whole philosophy.
  • Husserl saw in the idea of intentionality a way to sidestep two great unsolved puzzles of philosophical history: the question of what objects ‘really’ are, and the question of what the mind ‘really’ is. By doing the epoché and bracketing out all consideration of reality from both topics, one is freed to concentrate on the relationship in the middle. One can apply one’s descriptive energies to the endless dance of intentionality that takes place in our lives: the whirl of our minds as they seize their intended phenomena one after the other and whisk them around the floor,
  • Understood in this way, the mind hardly is anything at all: it is its aboutness. This makes the human mind (and possibly some animal minds) different from any other naturally occurring entity. Nothing else can be as thoroughly about or of things as the mind is:
  • Some Eastern meditation techniques aim to still this scurrying creature, but the extreme difficulty of this shows how unnatural it is to be mentally inert. Left to itself, the mind reaches out in all directions as long as it is awake — and even carries on doing it in the dreaming phase of its sleep.
  • a mind that is experiencing nothing, imagining nothing, or speculating about nothing can hardly be said to be a mind at all.
  • Three simple ideas — description, phenomenon, intentionality — provided enough inspiration to keep roomfuls of Husserlian assistants busy in Freiburg for decades. With all of human existence awaiting their attention, how could they ever run out of things to do?
  • For Sartre, this gives the mind an immense freedom. If we are nothing but what we think about, then no predefined ‘inner nature’ can hold us back. We are protean.
  • Real, not real; inside, outside; what difference did it make? Reflecting on this, Husserl began turning his phenomenology into a branch of ‘idealism’ — the philosophical tradition which denied external reality and defined everything as a kind of private hallucination.
  • For Sartre, if we try to shut ourselves up inside our own minds, ‘in a nice warm room with the shutters closed’, we cease to exist. We have no cosy home: being out on the dusty road is the very definition of what we are.
  • One might think that, if Heidegger had anything worth saying, he could have communicated it in ordinary language. The fact is that he does not want to be ordinary, and he may not even want to communicate in the usual sense. He wants to make the familiar obscure, and to vex us. George Steiner thought that Heidegger’s purpose was less to be understood than to be experienced through a ‘felt strangeness’.
  • He takes Dasein in its most ordinary moments, then talks about it in the most innovative way he can. For Heidegger, Dasein’s everyday Being is right here: it is Being-in-the-world, or In-der-Welt-sein. The main feature of Dasein’s everyday Being-in-the-world right here is that it is usually busy doing something.
  • Thus, for Heidegger, all Being-in-the-world is also a ‘Being-with’ or Mitsein. We cohabit with others in a ‘with-world’, or Mitwelt. The old philosophical problem of how we prove the existence of other minds has now vanished. Dasein swims in the with-world long before it wonders about other minds.
  • Sometimes the best-educated people were those least inclined to take the Nazis seriously, dismissing them as too absurd to last. Karl Jaspers was one of those who made this mistake, as he later recalled, and Beauvoir observed similar dismissive attitudes among the French students in Berlin.
  • In any case, most of those who disagreed with Hitler’s ideology soon learned to keep their view to themselves. If a Nazi parade passed on the street, they would either slip out of view or give the obligatory salute like everyone else, telling themselves that the gesture meant nothing if they did not believe in it. As the psychologist Bruno Bettelheim later wrote of this period, few people will risk their life for such a small thing as raising an arm — yet that is how one’s powers of resistance are eroded away, and eventually one’s responsibility and integrity go with them.
  • for Arendt, if you do not respond adequately when the times demand it, you show a lack of imagination and attention that is as dangerous as deliberately committing an abuse. It amounts to disobeying the one command she had absorbed from Heidegger in those Marburg days: Think!
  • ‘Everything takes place under a kind of anaesthesia. Objectively dreadful events produce a thin, puny emotional response. Murders are committed like schoolboy pranks. Humiliation and moral decay are accepted like minor incidents.’ Haffner thought modernity itself was partly to blame: people had become yoked to their habits and to mass media, forgetting to stop and think, or to disrupt their routines long enough to question what was going on.
  • Heidegger’s former lover and student Hannah Arendt would argue, in her 1951 study The Origins of Totalitarianism, that totalitarian movements thrived at least partly because of this fragmentation in modern lives, which made people more vulnerable to being swept away by demagogues. Elsewhere, she coined the phrase ‘the banality of evil’ to describe the most extreme failures of personal moral awareness.
  • His communicative ideal fed into a whole theory of history: he traced all civilisation to an ‘Axial Period’ in the fifth century BC, during which philosophy and culture exploded simultaneously in Europe, the Middle East and Asia, as though a great bubble of minds had erupted from the earth’s surface. ‘True philosophy needs communion to come into existence,’ he wrote, and added, ‘Uncommunicativeness in a philosopher is virtually a criterion of the untruth of his thinking.’
  • The idea of being called to authenticity became a major theme in later existentialism, the call being interpreted as saying something like ‘Be yourself!’, as opposed to being phony. For Heidegger, the call is more fundamental than that. It is a call to take up a self that you didn’t know you had: to wake up to your Being. Moreover, it is a call to action. It requires you to do something: to take a decision of some sort.
  • Being and Time contained at least one big idea that should have been of use in resisting totalitarianism. Dasein, Heidegger wrote there, tends to fall under the sway of something called das Man or ‘the they’ — an impersonal entity that robs us of the freedom to think for ourselves. To live authentically requires resisting or outwitting this influence, but this is not easy because das Man is so nebulous. Man in German does not mean ‘man’ as in English (that’s der Mann), but a neutral abstraction, something like ‘one’ in the English phrase ‘one doesn’t do that’,
  • for Heidegger, das Man is me. It is everywhere and nowhere; it is nothing definite, but each of us is it. As with Being, it is so ubiquitous that it is difficult to see. If I am not careful, however, das Man takes over the important decisions that should be my own. It drains away my responsibility or ‘answerability’. As Arendt might put it, we slip into banality, failing to think.
  • Jaspers focused on what he called Grenzsituationen — border situations, or limit situations. These are the moments when one finds oneself constrained or boxed in by what is happening, but at the same time pushed by these events towards the limits or outer edge of normal experience. For example, you might have to make a life-or-death choice, or something might remind you suddenly of your mortality,
  • Jaspers’ interest in border situations probably had much to do with his own early confrontation with mortality. From childhood, he had suffered from a heart condition so severe that he always expected to die at any moment. He also had emphysema, which forced him to speak slowly, taking long pauses to catch his breath. Both illnesses meant that he had to budget his energies with care in order to get his work done without endangering his life.
  • If I am to resist das Man, I must become answerable to the call of my ‘voice of conscience’. This call does not come from God, as a traditional Christian definition of the voice of conscience might suppose. It comes from a truly existentialist source: my own authentic self. Alas, this voice is one I do not recognise and may not hear, because it is not the voice of my habitual ‘they-self’. It is an alien or uncanny version of my usual voice. I am familiar with my they-self, but not with my unalienated voice — so, in a weird twist, my real voice is the one that sounds strangest to me.
  • Marcel developed a strongly theological branch of existentialism. His faith distanced him from both Sartre and Heidegger, but he shared a sense of how history makes demands on individuals. In his essay ‘On the Ontological Mystery’, written in 1932 and published in the fateful year of 1933, Marcel wrote of the human tendency to become stuck in habits, received ideas, and a narrow-minded attachment to possessions and familiar scenes. Instead, he urged his readers to develop a capacity for remaining ‘available’ to situations as they arise. Similar ideas of disponibilité or availability had been explored by other writers,
  • Marcel made it his central existential imperative. He was aware of how rare and difficult it was. Most people fall into what he calls ‘crispation’: a tensed, encrusted shape in life — ‘as though each one of us secreted a kind of shell which gradually hardened and imprisoned him’.
  • Bettelheim later observed that, under Nazism, only a few people realised at once that life could not continue unaltered: these were the ones who got away quickly. Bettelheim himself was not among them. Caught in Austria when Hitler annexed it, he was sent first to Dachau and then to Buchenwald, but was then released in a mass amnesty to celebrate Hitler’s birthday in 1939 — an extraordinary reprieve, after which he left at once for America.
  • we are used to reading philosophy as offering a universal message for all times and places — or at least as aiming to do so. But Heidegger disliked the notion of universal truths or universal humanity, which he considered a fantasy. For him, Dasein is not defined by shared faculties of reason and understanding, as the Enlightenment philosophers thought. Still less is it defined by any kind of transcendent eternal soul, as in religious tradition. We do not exist on a higher, eternal plane at all. Dasein’s Being is local: it has a historical situation, and is constituted in time and place.
  • For Marcel, learning to stay open to reality in this way is the philosopher’s prime job. Everyone can do it, but the philosopher is the one who is called on above all to stay awake, so as to be the first to sound the alarm if something seems wrong.
  • Second, it also means understanding that we are historical beings, and grasping the demands our particular historical situation is making on us. In what Heidegger calls ‘anticipatory resoluteness’, Dasein discovers ‘that its uttermost possibility lies in giving itself up’. At that moment, through Being-towards-death and resoluteness in facing up to one’s time, one is freed from the they-self and attains one’s true, authentic self.
  • If we are temporal beings by our very nature, then authentic existence means accepting, first, that we are finite and mortal. We will die: this all-important realisation is what Heidegger calls authentic ‘Being-towards-Death’, and it is fundamental to his philosophy.
  • Hannah Arendt, instead, left early on: she had the benefit of a powerful warning. Just after the Nazi takeover, in spring 1933, she had been arrested while researching materials on anti-Semitism for the German Zionist Organisation at Berlin’s Prussian State Library. Her apartment was searched; both she and her mother were locked up briefly, then released. They fled, without stopping to arrange travel documents. They crossed to Czechoslovakia (then still safe) by a method that sounds almost too fabulous to be true: a sympathetic German family on the border had a house with its front door in Germany and its back door in Czechoslovakia. The family would invite people for dinner, then let them leave through the back door at night.
  • As Sartre argued in his 1943 review of The Stranger, basic phenomenological principles show that experience comes to us already charged with significance. A piano sonata is a melancholy evocation of longing. If I watch a soccer match, I see it as a soccer match, not as a meaningless scene in which a number of people run around taking turns to apply their lower limbs to a spherical object. If the latter is what I’m seeing, then I am not watching some more essential, truer version of soccer; I am failing to watch it properly as soccer at all.
  • Much as they liked Camus personally, neither Sartre nor Beauvoir accepted his vision of absurdity. For them, life is not absurd, even when viewed on a cosmic scale, and nothing can be gained by saying it is. Life for them is full of real meaning, although that meaning emerges differently for each of us.
  • For Sartre, we show bad faith whenever we portray ourselves as passive creations of our race, class, job, history, nation, family, heredity, childhood influences, events, or even hidden drives in our subconscious which we claim are out of our control. It is not that such factors are unimportant: class and race, in particular, he acknowledged as powerful forces in people’s lives, and Simone de Beauvoir would soon add gender to that list.
  • Sartre takes his argument to an extreme point by asserting that even war, imprisonment or the prospect of imminent death cannot take away my existential freedom. They form part of my ‘situation’, and this may be an extreme and intolerable situation, but it still provides only a context for whatever I choose to do next. If I am about to die, I can decide how to face that death. Sartre here resurrects the ancient Stoic idea that I may not choose what happens to me, but I can choose what to make of it, spiritually speaking.
  • But the Stoics cultivated indifference in the face of terrible events, whereas Sartre thought we should remain passionately, even furiously engaged with what happens to us and with what we can achieve. We should not expect freedom to be anything less than fiendishly difficult.
  • Freedom does not mean entirely unconstrained movement, and it certainly does not mean acting randomly. We often mistake the very things that enable us to be free — context, meaning, facticity, situation, a general direction in our lives — for things that define us and take away our freedom. It is only with all of these that we can be free in a real sense.
  • Nor did he mean that privileged groups have the right to pontificate to the poor and downtrodden about the need to ‘take responsibility’ for themselves. That would be a grotesque misreading of Sartre’s point, since his sympathy in any encounter always lay with the more oppressed side. But for each of us — for me — to be in good faith means not making excuses for myself.
  • Camus’ novel gives us a deliberately understated vision of heroism and decisive action compared to those of Sartre and Beauvoir. One can only do so much. It can look like defeatism, but it shows a more realistic perception of what it takes to actually accomplish difficult tasks like liberating one’s country.
  • Camus just kept returning to his core principle: no torture, no killing — at least not with state approval. Beauvoir and Sartre believed they were taking a more subtle and more realistic view. If asked why a couple of innocuous philosophers had suddenly become so harsh, they would have said it was because the war had changed them in profound ways. It had shown them that one’s duties to humanity could be more complicated than they seemed. ‘The war really divided my life in two,’ Sartre said later.
  • Poets and artists ‘let things be’, but they also let things come out and show themselves. They help to ease things into ‘unconcealment’ (Unverborgenheit), which is Heidegger’s rendition of the Greek term alētheia, usually translated as ‘truth’. This is a deeper kind of truth than the mere correspondence of a statement to reality, as when we say ‘The cat is on the mat’ and point to a mat with a cat on it. Long before we can do this, both cat and mat must ‘stand forth out of concealedness’. They must un-hide themselves.
  • Heidegger does not use the word ‘consciousness’ here because — as with his earlier work — he is trying to make us think in a radically different way about ourselves. We are not to think of the mind as an empty cavern, or as a container filled with representations of things. We are not even supposed to think of it as firing off arrows of intentional ‘aboutness’, as in the earlier phenomenology of Brentano. Instead, Heidegger draws us into the depths of his Schwarzwald, and asks us to imagine a gap with sunlight filtering in. We remain in the forest, but we provide a relatively open spot where other beings can bask for a moment. If we did not do this, everything would remain in the thickets, hidden even to itself.
  • The astronomer Carl Sagan began his 1980 television series Cosmos by saying that human beings, though made of the same stuff as the stars, are conscious and are therefore ‘a way for the cosmos to know itself’. Merleau-Ponty similarly quoted his favourite painter Cézanne as saying, ‘The landscape thinks itself in me, and I am its consciousness.’ This is something like what Heidegger thinks humanity contributes to the earth. We are not made of spiritual nothingness; we are part of Being, but we also bring something unique with us. It is not much: a little open space, perhaps with a path and a bench like the one the young Heidegger used to sit on to do his homework. But through us, the miracle occurs.
  • Beauty aside, Heidegger’s late writing can also be troubling, with its increasingly mystical notion of what it is to be human. If one speaks of a human being mainly as an open space or a clearing, or a means of ‘letting beings be’ and dwelling poetically on the earth, then one doesn’t seem to be talking about any recognisable person. The old Dasein has become less human than ever. It is now a forestry feature.
  • Even today, Jaspers, the dedicated communicator, is far less widely read than Heidegger, who has influenced architects, social theorists, critics, psychologists, artists, film-makers, environmental activists, and innumerable students and enthusiasts — including the later deconstructionist and post-structuralist schools, which took their starting point from his late thinking. Having spent the late 1940s as an outsider and then been rehabilitated, Heidegger became the overwhelming presence in university philosophy all over the European continent from then on.
  • As Levinas reflected on this experience, it helped to lead him to a philosophy that was essentially ethical, rather than ontological like Heidegger’s. He developed his ideas from the work of Jewish theologian Martin Buber, whose I and Thou in 1923 had distinguished between my relationship with an impersonal ‘it’ or ‘them’, and the direct personal encounter I have with a ‘you’. Levinas took it further: when I encounter you, we normally meet face-to-face, and it is through your face that you, as another person, can make ethical demands on me. This is very different from Heidegger’s Mitsein or Being-with, which suggests a group of people standing alongside one another, shoulder to shoulder as if in solidarity — perhaps as a unified nation or Volk.
  • For Levinas, we literally face each other, one individual at a time, and that relationship becomes one of communication and moral expectation. We do not merge; we respond to one another. Instead of being co-opted into playing some role in my personal drama of authenticity, you look me in the eyes — and you remain Other. You remain you.
  • This relationship is more fundamental than the self, more fundamental than consciousness, more fundamental even than Being — and it brings an unavoidable ethical obligation. Ever since Husserl, phenomenologists and existentialists had been trying to stretch the definition of existence to incorporate our social lives and relationships. Levinas did more: he turned philosophy around entirely so that these relationships were the foundation of our existence, not an extension of it.
  • Her last work, The Need for Roots, argues, among other things, that none of us has rights, but each one of us has a near-infinite degree of duty and obligation to the other. Whatever the underlying cause of her death — and anorexia nervosa seems to have been involved — no one could deny that she lived out her philosophy with total commitment. Of all the lives touched on in this book, hers is surely the most profound and challenging application of Iris Murdoch’s notion that a philosophy can be ‘inhabited’.
  • Other thinkers took radical ethical turns during the war years. The most extreme was Simone Weil, who actually tried to live by the principle of putting other people’s ethical demands first. Having returned to France after her travels through Germany in 1932, she had worked in a factory so as to experience the degrading nature of such work for herself. When France fell in 1940, her family fled to Marseilles (against her protests), and later to the US and to Britain. Even in exile, Weil made extraordinary sacrifices. If there were people in the world who could not sleep in a bed, she would not do so either, so she slept on the floor.
  • The mystery tradition had roots in Kierkegaard’s ‘leap of faith’. It owed much to the other great nineteenth-century mystic of the impossible, Dostoevsky, and to older theological notions. But it also grew from the protracted trauma that was the first half of the twentieth century. Since 1914, and especially since 1939, people in Europe and elsewhere had come to the realisation that we cannot fully know or trust ourselves; that we have no excuses or explanations for what we do — and yet that we must ground our existence and relationships on something firm, because otherwise we cannot survive.
  • One striking link between these radical ethical thinkers, all on the fringes of our main story, is that they had religious faith. They also granted a special role to the notion of ‘mystery’ — that which cannot be known, calculated or understood, especially when it concerns our relationships with each other. Heidegger was different from them, since he rejected the religion he grew up with and had no real interest in ethics — probably as a consequence of his having no real interest in the human.
  • Meanwhile, the Christian existentialist Gabriel Marcel was also still arguing, as he had since the 1930s, that ethics trumps everything else in philosophy and that our duty to each other is so great as to play the role of a transcendent ‘mystery’. He too had been led to this position partly by a wartime experience: during the First World War he had worked for the Red Cross’ Information Service, with the unenviable job of answering relatives’ inquiries about missing soldiers. Whenever news came, he passed it on, and usually it was not good. As Marcel later said, this task permanently inoculated him against warmongering rhetoric of any kind, and it made him aware of the power of what is unknown in our lives.
  • As the play’s much-quoted and frequently misunderstood final line has it: ‘Hell is other people.’ Sartre later explained that he did not mean to say that other people were hellish in general. He meant that after death we become frozen in their view, unable any longer to fend off their interpretation. In life, we can still do something to manage the impression we make; in death, this freedom goes and we are left entombed in other people’s memories and perceptions.
  • We have to do two near-impossible things at once: understand ourselves as limited by circumstances, and yet continue to pursue our projects as though we are truly in control. In Beauvoir’s view, existentialism is the philosophy that best enables us to do this, because it concerns itself so deeply with both freedom and contingency. It acknowledges the radical and terrifying scope of our freedom in life, but also the concrete influences that other philosophies tend to ignore: history, the body, social relationships and the environment.
  • The aspects of our existence that limit us, Merleau-Ponty says, are the very same ones that bind us to the world and give us scope for action and perception. They make us what we are. Sartre acknowledged the need for this trade-off, but he found it more painful to accept. Everything in him longed to be free of bonds, of impediments and limitations
  • Of course we have to learn this skill of interpreting and anticipating the world, and this happens in early childhood, which is why Merleau-Ponty thought child psychology was essential to philosophy. This is an extraordinary insight. Apart from Rousseau, very few philosophers before him had taken childhood seriously; most wrote as though all human experience were that of a fully conscious, rational, verbal adult who has been dropped into this world from the sky — perhaps by a stork.
  • For Merleau-Ponty, we cannot understand our experience if we don’t think of ourselves in part as overgrown babies. We fall for optical illusions because we once learned to see the world in terms of shapes, objects and things relevant to our own interests. Our first perceptions came to us in tandem with our first active experiments in observing the world and reaching out to explore it, and are still linked with those experiences.
  • Another factor in all of this, for Merleau-Ponty, is our social existence: we cannot thrive without others, or not for long, and we need this especially in early life. This makes solipsistic speculation about the reality of others ridiculous; we could never engage in such speculation if we hadn’t already been formed by them.
  • As Descartes could have said (but didn’t), ‘I think, therefore other people exist.’ We grow up with people playing with us, pointing things out, talking, listening, and getting us used to reading emotions and movements; this is how we become capable, reflective, smoothly integrated beings.
  • In general, Merleau-Ponty thinks human experience only makes sense if we abandon philosophy’s time-honoured habit of starting with a solitary, capsule-like, immobile adult self, isolated from its body and world, which must then be connected up again — adding each element around it as though adding clothing to a doll. Instead, for him, we slide from the womb to the birth canal to an equally close and total immersion in the world. That immersion continues as long as we live, although we may also cultivate the art of partially withdrawing from time to time when we want to think or daydream.
  • When he looks for his own metaphor to describe how he sees consciousness, he comes up with a beautiful one: consciousness, he suggests, is like a ‘fold’ in the world, as though someone had crumpled a piece of cloth to make a little nest or hollow. It stays for a while, before eventually being unfolded and smoothed away. There is something seductive, even erotic, in this idea of my conscious self as an improvised pouch in the cloth of the world. I still have my privacy — my withdrawing room. But I am part of the world’s fabric, and I remain formed out of it for as long as I am here.
  • By the time of these works, Merleau-Ponty is taking his desire to describe experience to the outer limits of what language can convey. Just as with the late Husserl or Heidegger, or Sartre in his Flaubert book, we see a philosopher venturing so far from shore that we can barely follow. Emmanuel Levinas would head out to the fringes too, eventually becoming incomprehensible to all but his most patient initiates.
  • Sartre once remarked — speaking of a disagreement they had about Husserl in 1941 — that ‘we discovered, astounded, that our conflicts had, at times, stemmed from our childhood, or went back to the elementary differences of our two organisms’. Merleau-Ponty also said in an interview that Sartre’s work seemed strange to him, not because of philosophical differences, but because of a certain ‘register of feeling’, especially in Nausea, that he could not share. Their difference was one of temperament and of the whole way the world presented itself to them.
  • The two also differed in their purpose. When Sartre writes about the body or other aspects of experience, he generally does it in order to make a different point. He expertly evokes the grace of his café waiter, gliding between the tables, bending at an angle just so, steering the drink-laden tray through the air on the tips of his fingers — but he does it all in order to illustrate his ideas about bad faith. When Merleau-Ponty writes about skilled and graceful movement, the movement itself is his point. This is the thing he wants to understand.
  • We can never move definitively from ignorance to certainty, for the thread of the inquiry will constantly lead us back to ignorance again. This is the most attractive description of philosophy I’ve ever read, and the best argument for why it is worth doing, even (or especially) when it takes us no distance at all from our starting point.
  • By prioritising perception, the body, social life and childhood development, Merleau-Ponty gathered up philosophy’s far-flung outsider subjects and brought them in to occupy the centre of his thought.
  • In his inaugural lecture at the Collège de France on 15 January 1953, published as In Praise of Philosophy, he said that philosophers should concern themselves above all with whatever is ambiguous in our experience. At the same time, they should think clearly about these ambiguities, using reason and science. Thus, he said, ‘The philosopher is marked by the distinguishing trait that he possesses inseparably the taste for evidence and the feeling for ambiguity.’ A constant movement is required between these two
  • As Sartre wrote in response to Hiroshima, humanity had now gained the power to wipe itself out, and must decide every single day that it wanted to live. Camus also wrote that humanity faced the task of choosing between collective suicide and a more intelligent use of its technology — ‘between hell and reason’. After 1945, there seemed little reason to trust in humanity’s ability to choose well.
  • Merleau-Ponty observed in a lecture of 1951 that, more than any previous century, the twentieth century had reminded people how ‘contingent’ their lives were — how at the mercy of historical events and other changes that they could not control. This feeling went on long after the war ended. After the A-bombs were dropped on Hiroshima and Nagasaki, many feared that a Third World War would not be long in coming, this time between the Soviet Union and the United States.
runlai_jiang

8 Infinity Facts That Will Blow Your Mind - 0 views

  • Infinity has its own special symbol: ∞. The symbol, sometimes called the lemniscate, was introduced by clergyman and mathematician John Wallis in 1655. The word "lemniscate" comes from the Latin word lemniscus, which means "ribbon," while the word "infinity" comes from the Latin word infinitas, which means "boundless."
  • Of all Zeno's paradoxes, the most famous is his paradox of the Tortoise and Achilles. In the paradox, a tortoise challenges the Greek hero Achilles to a race, provided the tortoise is given a small head start. The tortoise argues he will win the race because as Achilles catches up to him, the tortoise will have gone a bit further, adding to the distance. (A worked sum showing why the argument fails appears after this list.)
  • Pi as an Example of Infinity: Another good example of infinity is the number π, or pi. Mathematicians use a symbol for pi because it's impossible to write the number down: pi consists of an infinite number of digits. It's often rounded to 3.14 or even 3.14159, yet no matter how many digits you write, it's impossible to get to the end.
  • ...2 more annotations...
  • Fractals and Infinity: A fractal is an abstract mathematical object, used in art and to simulate natural phenomena. Written as a mathematical equation, most fractals are nowhere differentiable; in practice, this means an image of a fractal may be magnified over and over, always revealing more detail. In other words, a fractal is infinitely magnifiable. The Koch snowflake is an interesting example of a fractal. The snowflake starts as an equilateral triangle. For each iteration of the fractal, each line segment is divided into three equal segments, and the middle segment is replaced by two sides of an outward-pointing equilateral triangle. (See the Python sketch after this list for how the perimeter behaves.)
  • Cosmology and Infinity: Cosmologists study the universe and ponder infinity. Does space go on and on without end? This remains an open question. Even if the physical universe as we know it has a boundary, there is still the multiverse theory to consider: our universe may be but one of an infinite number of "bubbles."
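The Achilles paradox above dissolves once the catch-up distances are summed. A worked example with illustrative numbers of my own choosing (the excerpt gives none): suppose Achilles runs ten times as fast as the tortoise, which starts 100 meters ahead. The total distance he covers while catching up is a convergent geometric series:

$$
d = 100 + 10 + 1 + \tfrac{1}{10} + \cdots = \sum_{n=0}^{\infty} \frac{100}{10^{n}} = \frac{100}{1 - \tfrac{1}{10}} = 111.\overline{1}\ \text{meters.}
$$

Infinitely many catch-up stages sum to a finite distance, so Achilles draws level after about 111.1 meters and passes the tortoise immediately afterward.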
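The Koch iteration is also easy to make concrete in code. A minimal sketch, assuming a starting triangle with sides of length 1 (an assumption, not something the excerpt specifies); it shows the perimeter diverging even though the enclosed area stays finite:

```python
# Koch snowflake: each iteration replaces every segment with four
# segments, each one-third as long, so the perimeter grows by 4/3 per step.

def koch_perimeter(iterations: int, side: float = 1.0) -> float:
    """Perimeter after `iterations` steps, starting from an
    equilateral triangle with the given side length."""
    segments = 3        # the starting triangle has three segments
    length = side       # each of length `side`
    for _ in range(iterations):
        segments *= 4   # every segment becomes four smaller ones...
        length /= 3     # ...each a third as long
    return segments * length

for n in (0, 1, 5, 20, 50):
    print(f"after {n:2d} iterations: perimeter = {koch_perimeter(n):,.2f}")
# The perimeter grows without bound, while the area converges:
# an infinitely long boundary around a finite region.
```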
sissij

Does the Language I Speak Influence the Way I Think? | Linguistic Society of America - 1 views

  • What we have learned is that the answer to this question is complicated. To some extent, it's a chicken-and-egg question: Are you unable to think about things you don't have words for, or do you lack words for them because you don't think about them?
  • There's a language called Guugu Yimithirr (spoken in North Queensland, Australia) that doesn't have words like left and right or front and back. Its speakers always describe locations and directions using the Guugu Yimithirr words for north, south, east, and west. So, they would never say that a boy is standing in front of a house; instead, they'd say he is standing (for example) east of the house. They would also, no doubt, think of the boy as standing east of the house, while a speaker of English would think of him as standing in front of the house.
  • Whorf believed that because of this difference, Hopi speakers and English speakers think about events differently, with Hopi speakers focusing more on the source of the information and English speakers focusing more on the time of the event.
  • ...2 more annotations...
  • But not always. You can easily conjure up mental images and sensations that would be hard to describe in words.
  • Not really, but if the new language is very different from your own, it may give you some insight into another culture and another way of life.
  •  
    I found it very interesting that how we speak can influence how we think. Language may even carry a slice of the logic and ideology of our culture. English, my second language, is very different from Chinese: we may describe the same event differently because we value things differently. --Sissi (10/24/2016)
Javier E

Book Review: Models Behaving Badly - WSJ.com - 1 views

  • Mr. Derman is perhaps a bit too harsh when he describes EMM—the so-called Efficient Market Model. EMM does not, as he claims, imply that prices are always correct and that price always equals value. Prices are always wrong. What EMM says is that we can never be sure if prices are too high or too low.
  • The Efficient Market Model does not suggest that any particular model of valuation—such as the Capital Asset Pricing Model—fully accounts for risk and uncertainty or that we should rely on it to predict security returns. EMM does not, as Mr. Derman says, "stubbornly assume that all uncertainty about the future is quantifiable." The basic lesson of EMM is that it is very difficult—well nigh impossible—to beat the market consistently. (A toy simulation of this lesson appears after these excerpts.)
  • Mr. Derman gives an eloquent description of James Clerk Maxwell's electromagnetic theory in a chapter titled "The Sublime." He writes: "The electromagnetic field is not like Maxwell's equations; it is Maxwell's equations."
  • ...4 more annotations...
  • He sums up his key points about how to keep models from going bad by quoting excerpts from his "Financial Modeler's Manifesto" (written with Paul Wilmott), a paper he published a couple of years ago. Among its admonitions: "I will always look over my shoulder and never forget that the model is not the world"; "I will not be overly impressed with mathematics"; "I will never sacrifice reality for elegance"; "I will not give the people who use my models false comfort about their accuracy"; "I understand that my work may have enormous effects on society and the economy, many beyond my apprehension."
  • As the collapse of the subprime collateralized debt market in 2008 made clear, it is a terrible mistake to put too much faith in models purporting to value financial instruments. "In crises," Mr. Derman writes, "the behavior of people changes and normal models fail. While quantum electrodynamics is a genuine theory of all reality, financial models are only mediocre metaphors for a part of it."
  • Although financial models employ the mathematics and style of physics, they are fundamentally different from the models that science produces. Physical models can provide an accurate description of reality. Financial models, despite their mathematical sophistication, can at best provide a vast oversimplification of reality. In the universe of finance, the behavior of individuals determines value—and, as he says, "people change their minds."
  • Bringing ethics into his analysis, Mr. Derman has no patience for coddling the folly of individuals and institutions who over-rely on faulty models and then seek to escape the consequences. He laments the aftermath of the 2008 financial meltdown, when banks rebounded "to record profits and bonuses" thanks to taxpayer bailouts. If you want to benefit from the seven fat years, he writes, "you must suffer the seven lean years too, even the catastrophically lean ones. We need free markets, but we need them to be principled."
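To make the "hard to beat the market" point tangible: a minimal sketch, assuming prices follow a driftless random walk (my idealization; the review specifies no such model). A trend-following rule applied to such prices earns nothing on average:

```python
import random

random.seed(42)

def trend_strategy_pnl(steps: int = 252) -> float:
    """Follow yesterday's move (buy after an up day, sell after a down
    day) on a driftless random-walk price; return cumulative P&L."""
    pnl, last_move = 0.0, 1.0
    for _ in range(steps):
        move = random.gauss(0.0, 1.0)            # today's price change
        pnl += move if last_move > 0 else -move  # position set by yesterday
        last_move = move
    return pnl

results = [trend_strategy_pnl() for _ in range(10_000)]
print(f"mean P&L across 10,000 simulated years: {sum(results) / len(results):+.3f}")
# The mean hovers near zero: when past moves carry no information,
# no rule built on them beats the market consistently.
```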
rachelramirez

How Phantom Limbs Explain Consciousness - The Atlantic - 0 views

  • How Phantom Limbs Explain Consciousness
  • Almost all people who have an amputation experience a phantom. Usually the effect fades within days, but in some cases it can remain for a lifetime.
  • The brain constructs a model of the self that neuroscientists call the body schema. The body schema is a simulation. It takes in touch, vision, and baseline information about what’s connected to what, and builds a virtual model of your body.
  • ...6 more annotations...
  • As a child grows up, the body schema adjusts to the changing body, but its adaptability has limits.
  • Attention is the selective enhancement of some signals over others, such that the brain’s resources are strategically deployed.
  • The brain needs to control its attention, just as it controls the body.
  • In control theory, if a machine is to control something optimally, it needs a working model of whatever it’s controlling. The brain certainly follows this principle in controlling the body. (A toy sketch of this principle appears after this list.)
  • Just as the body schema is a surreal description of the body, so the attention schema would be a surreal description of attention.
  • This is called the attention schema theory, a theory that my lab has been developing and testing experimentally for the past five years. It’s a theory of why we insist with such certainty that we have subjective experience. Attention is fundamental.
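The control-theory point flagged above can be illustrated in a few lines. This is entirely my own toy construction, not anything from the article: a controller that carries an internal model of its plant anticipates momentum and tracks better than one that reacts to raw error alone.

```python
# Compare a controller with an internal model of its plant (it predicts
# where it will be next tick) against one that reacts to raw error only.

def simulate(use_model: bool, steps: int = 60) -> float:
    pos, vel, target = 0.0, 0.0, 10.0
    total_error = 0.0
    for _ in range(steps):
        if use_model:
            predicted = pos + vel                 # internal model: next position
            command = 0.3 * (target - predicted)  # correct the *predicted* error
        else:
            command = 0.3 * (target - pos)        # no model: react to raw error
        vel = 0.9 * vel + command                 # plant: momentum plus drag
        pos += vel
        total_error += abs(target - pos)
    return total_error

print("accumulated error, no internal model:  ", round(simulate(False), 1))
print("accumulated error, with internal model:", round(simulate(True), 1))
# The model-equipped controller damps its own momentum and overshoots
# less: the control-theory rationale for the brain's body schema.
```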
sissij

Will 'Skam,' a Norwegian Hit, Translate? - The New York Times - 0 views

  • OSLO — “Skam,” a racy, emotionally intense, true-to-life Norwegian web and television series, follows a group of Oslo teenagers as they navigate sex, school, drinking, depression, rape, religion, coming out and the pains of status anxiety, in real life and online.
  • Each season of “Skam” centers on one character. Season 3, which wraps up next week, has focused on Isak (Tarjei Sandvik Moe), who is coming to terms with being gay and has fallen for an older boy, Even (Henrik Holm).
  • “‘Skam’ is combining reality and fiction and the line between them isn’t so clear,” said Mari Magnus, 27, the web producer for the show, who writes the text messages and masterminds the Instagram accounts.
  •  
    I strongly recommend this TV series because it depicts coming-of-age problems very well. I really like the way the series switches its main character every season: the main characters all live in the same story frame but have different identities, which gives the show a real variety of perspectives. Something related to TOK is that nobody is the center of the world; or rather, there are many different versions of the world, which is why, when we refer to the same event we all experienced, we may give completely different descriptions. --Sissi (12/9/2016)
sissij

How "Arrival"'s Alien Language Might Actually Make You See the Future | Big Think - 0 views

  • The language we speak can help us see the future.
  • First, learning a language makes you smarter.
  • children learn languages more easily because the plasticity of their developing brains lets them use both hemispheres in language acquisition, while in most adults language is lateralized to one hemisphere - usually the left
  • ...6 more annotations...
  • Second, Time isn’t really linear
  • time is really just “your assessment of how long something took.”
  • Time can also feel faster or slower depending on how we experience it.
  • “Language doesn’t determine how you think, but it can determine how you think about things.”
  • Defining snow with such specific language grants it multiple meanings, and those multiple meanings increase its significance within the culture of the people using the language.
  • sees beyond the linear bounds of time
  •  
    In TOK, we discussed how language can affect us. Language doesn't shape who we are, but it obliges us to think in certain ways; by speaking different languages, we can switch among different mindsets. In this article, the author also uses the familiar example of Eskimo words for snow to show that language lets us see through different lenses, since different languages place their stresses and significances differently. The "time traveler" of the title isn't meant literally: we get to travel from time to time because we can see beyond the linear bounds of time, switching from one observer of time to another. --Sissi (2/13/2017)
Javier E

"Wikipedia Is Not Truth" - The Dish | By Andrew Sullivan - The Daily Beast - 0 views

  • "Wikipedia Is Not Truth" Timothy Messer-Kruse tried to update the Wiki page on the Haymarket riot of 1886 to correct a long-standing inaccurate claim. Even though he's written two books and numerous articles on the subject, his changes were instantly rejected: I had cited the documents that proved my point, including verbatim testimony from the trial published online by the Library of Congress. I also noted one of my own peer-reviewed articles. One of the people who had assumed the role of keeper of this bit of history for Wikipedia quoted the Web site's "undue weight" policy, which states that "articles should not give minority views as much or as detailed a description as more popular views."
  • "Explain to me, then, how a 'minority' source with facts on its side would ever appear against a wrong 'majority' one?" I asked the Wiki-gatekeeper. ...  Another editor cheerfully tutored me in what this means: "Wikipedia is not 'truth,' Wikipedia is 'verifiability' of reliable sources. Hence, if most secondary sources which are taken as reliable happen to repeat a flawed account or description of something, Wikipedia will echo that."
Emilio Ergueta

No Consolation For Kalashnikov | Issue 59 | Philosophy Now - 0 views

  • The legendary AK 47 assault rifle was invented in 1946 by Mikhail Kalashnikov. It was issued to the armies of the old Warsaw Pact countries and has been used in many conflicts, eg by the North Vietnamese Army (NVA), Soviet soldiers in Afghanistan, and even this year by Al Qaeda operatives in Iraq.
  • Whatever interpretation one puts on those two conflicts, almost no-one sane would condone the use of the AK 47 in killing civilians, for instance Shiites in Iraq.
  • Mikhail Kalashnikov has come to have some doubts about his invention. He told The Times in June 2006, “I don’t worry when my guns are used for national liberation or defence. But when I see how peaceful people are killed and wounded by these weapons, I get very distressed and upset. I calm down by telling myself that I invented this gun 60 years ago to protect the interests of my country.”
  • ...10 more annotations...
  • Weapons research produces in the first place not guns, bombs, bullets and planes and the various command, control and communications hardware and software needed to use these things, but plans, blueprints and designs – knowledge and know-how. Unless these useful plans are lost or destroyed, they can be implemented or instantiated many times over, and thus project unforeseen into the future.
  • If any one person invented the atomic bomb, it was Leo Szilard. It seems he had the idea, and he made great efforts from 1935 until 1942, when the Manhattan Project was set up, to get the research done that would show whether an atomic bomb was possible; how to make one; and if need be, to provide the basis for actually making one.
  • This perception was greatly strengthened when Hahn and Strassmann discovered nuclear fission in Berlin in 1938. So Szilard, worried about the Nazis getting an atomic bomb, thought that the Allies should do the research to see if and how one could be made, in order to deter or otherwise prevent the Nazis from using one.
  • As far as Szilard and a good number of other atomic scientists were concerned there was no longer a rationale for the bomb project. Szilard, James Franck and others wrote The Franck Report in June 1945, which among other things advocated a demonstration of the power of the atomic bomb by dropping one on an uninhabited island. The Franck Report was ignored. The project was not abandoned, of course, and two of its products were used on Japanese cities, to kill mostly Japanese civilians.
  • The point of this example is to show how scientists lose control of their work when they take part in weapons research – they lose control of it in other settings besides, but this case is the most problematic.
  • One way out of the dilemma is to refuse to do war research under any circumstances. I’d like to endorse this option, especially as it does not imply that we should judge Kalashnikov, Szilard, Watson-Watt and other well-intentioned researchers harshly, since we can argue that the dilemma has only become evident recently.
  • Another possibility is to deny that weapons research must take place within history, as a good Marxist might put it. That is, as I would put it: Perhaps weapons research is not an activity that must take account of historical contingencies.
  • We must acknowledge that there is no such thing as an inherently defensive weapon, something that can only be used for the morally acceptable purpose of responding against an aggressor. Doing weapons research for defensive systems is therefore not morally acceptable, as any weapons might feasibly be used as part of an unjust war of aggression.
  • Kalashnikov’s preferred description of what he did when he designed the AK 47 is something like “providing the means for liberation,” or “defending my country,” not “providing the means to kill innocents.” However, he acknowledges that the latter description applies to his situation equally well. Nevertheless, J might try to portray her actions as something like “provide the means for deterrence,” the idea being that what she is helping to create is intended to deter, and hence prevent harm rather than cause it.
  • You might say that this is utopian, and it would never work, but then it might console Kalashnikov, who, after all, was a Marxist, and perhaps also a utopian.
johnsonle1

A Brand New Organ Has Been Found In The Human Body, Say Scientists | The Huffington Post - 1 views

  •  
    "The anatomic description that had been laid down over 100 years of anatomy was incorrect. This organ is far from fragmented and complex. It is simply one continuous structure," Professor Coffey explained.
Javier E

The Coming Software Apocalypse - The Atlantic - 1 views

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • ...52 more annotations...
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code.
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class at the California Institute of Technology in electrical engineering.
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • software experts spent 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around, what was already there.
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it. (A toy demonstration of this failure mode appears after these excerpts.)
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop. (A small code rendering of this diagram appears after these excerpts.)
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • This practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy. (A stand-in sketch of this exhaustive-checking idea appears after these excerpts.)
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
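Three of the ideas excerpted above are concrete enough to sketch in code. First, the single-bit-flip failure mode from the Toyota annotations. This toy is my own construction (Toyota's actual code is not public here); it shows how a fail-safe that trusts one status bit can be silently defeated:

```python
# How one flipped bit can defeat a naive fail-safe. Toy illustration
# only; not the actual Toyota code.

TASK_ALIVE = 0b00000001      # status flag: 1 means the throttle task is alive

def failsafe(status_flags: int) -> str:
    # The guard trusts the flag to reflect the task's real health.
    if status_flags & TASK_ALIVE:
        return "OK: throttle task running"
    return "FAULT: cut throttle"

healthy = 0b00000001
print(failsafe(healthy))          # OK, as designed

dead_task = 0b00000000            # the task has actually crashed...
masked = dead_task ^ TASK_ALIVE   # ...but a single bit flip in memory
print(failsafe(masked))           # still reports "OK": the fail-safe is blind
```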
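Second, the elevator diagram from the model-based-design annotation maps directly onto rules-as-data. A minimal sketch assuming only the three states and the transitions the excerpt names:

```python
# The elevator rules written as data rather than control flow:
# every legal (state, event) pair and the state it leads to.

TRANSITIONS = {
    ("door open", "close door"): "door closed",
    ("door closed", "open door"): "door open",
    ("door closed", "move"): "moving",
    ("moving", "stop"): "door closed",
}

def step(state: str, event: str) -> str:
    """Apply an event; disallowed transitions leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "door open"
for event in ["move", "close door", "move", "open door", "stop", "open door"]:
    allowed = (state, event) in TRANSITIONS
    state_after = step(state, event)
    print(f"{state:11s} --{event:10s}--> {state_after:11s} "
          f"({'ok' if allowed else 'rejected'})")
    state = state_after
# Because the rules are plain data, the key property is visible at a
# glance: the only way to start moving is from "door closed".
```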
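Third, the "exhaustive rather than merely thorough" idea behind TLA+. TLA+ itself has its own language and semantics; as a stand-in, here is a tiny explicit-state checker in ordinary Python that visits every reachable state of a toy mutual-exclusion protocol and asserts an invariant in all of them:

```python
# A miniature model checker: explore the *entire* reachable state space
# of a two-process lock protocol, instead of sampling a few test runs.

def swap(procs, i, value):
    return tuple(value if j == i else p for j, p in enumerate(procs))

def next_states(state):
    procs, lock = state
    for i in (0, 1):
        if procs[i] == "idle":
            yield (swap(procs, i, "waiting"), lock)
        elif procs[i] == "waiting" and lock == "free":
            yield (swap(procs, i, "critical"), "held")
        elif procs[i] == "critical":
            yield (swap(procs, i, "idle"), "free")

def invariant(state):
    procs, _ = state
    return procs.count("critical") <= 1   # mutual exclusion

initial = (("idle", "idle"), "free")
seen, frontier = {initial}, [initial]
while frontier:
    state = frontier.pop()
    assert invariant(state), f"invariant violated in {state}"
    for s in next_states(state):
        if s not in seen:
            seen.add(s)
            frontier.append(s)

print(f"explored {len(seen)} reachable states; mutual exclusion holds in all")
```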
Javier E

The View from Nowhere: Questions and Answers » Pressthink - 2 views

  • In pro journalism, American style, the View from Nowhere is a bid for trust that advertises the viewlessness of the news producer. Frequently it places the journalist between polarized extremes, and calls that neither-nor position “impartial.” Second, it’s a means of defense against a style of criticism that is fully anticipated: charges of bias originating in partisan politics and the two-party system. Third: it’s an attempt to secure a kind of universal legitimacy that is implicitly denied to those who stake out positions or betray a point of view. American journalists have almost a lust for the View from Nowhere because they think it has more authority than any other possible stance.
  • Q. Who gets credit for the phrase “view from nowhere”? A. The philosopher Thomas Nagel, who wrote a very important book with that title.
  • Q. What does it say? A. It says that human beings are, in fact, capable of stepping back from their position to gain an enlarged understanding, which includes the more limited view they had before the step back. Think of the cinema: when the camera pulls back to reveal where a character had been standing and shows us a fuller tableau. To Nagel, objectivity is that kind of motion. We try to “transcend our particular viewpoint and develop an expanded consciousness that takes in the world more fully.”
  • ...11 more annotations...
  • But there are limits to this motion. We can’t transcend all our starting points. No matter how far it pulls back the camera is still occupying a position. We can’t actually take the “view from nowhere,” but this doesn’t mean that objectivity is a lie or an illusion. Our ability to step back and the fact that there are limits to it– both are real. And realism demands that we acknowledge both.
  • Q. So is objectivity a myth… or not? A. One of the many interesting things Nagel says in that book is that “objectivity is both underrated and overrated, sometimes by the same persons.” It’s underrated by those who scoff at it as a myth. It is overrated by people who think it can replace the view from somewhere or transcend the human subject. It can’t.
  • When MSNBC suspends Keith Olbermann for donating without company permission to candidates he supports– that’s dumb. When NPR forbids its “news analysts” from expressing a view on matters they are empowered to analyze– that’s dumb. When reporters have to “launder” their views by putting them in the mouths of think tank experts: dumb. When editors at the Washington Post decline even to investigate whether the size of rallies on the Mall can be reliably estimated because they want to avoid charges of “leaning one way or the other,” as one of them recently put it, that is dumb. When CNN thinks that, because it’s not MSNBC and it’s not Fox, it’s the only the “real news network” on cable, CNN is being dumb about itself.
  • Let some in the press continue on with the mask of impartiality, which has advantages for cultivating sources and soothing advertisers. Let others experiment with transparency as the basis for trust. When you click on their by-line it takes you to a disclosure page where there is a bio, a kind of mission statement, and a creative attempt to say: here’s where I’m coming from (one example) along with campaign contributions, any affiliations or memberships, and–I’m just speculating now–a list of heroes and villains, or major influences, along with an archive of the work, plus anything else that might assist the user in placing this person on the user’s mattering map.
  • it has unearned authority in the American press. If in doing the serious work of journalism–digging, reporting, verification, mastering a beat–you develop a view, expressing that view does not diminish your authority. It may even add to it. The View from Nowhere doesn’t know from this. It also encourages journalists to develop bad habits. Like: criticism from both sides is a sign that you’re doing something right, when you could be doing everything wrong.
  • If it means trying to see things in that fuller perspective Thomas Nagel talked about–pulling the camera back, revealing our previous position as only one of many–I second the motion. If it means the struggle to get beyond the limited perspective that our experience and upbringing afford us… yeah, we need more of that, not less. I think there is value in acts of description that do not attempt to say whether the thing described is good or bad
  • I think we are in the midst of shift in the system by which trust is sustained in professional journalism. David Weinberger tried to capture it with his phrase: transparency is the new objectivity. My version of that: it’s easier to trust in “here’s where I’m coming from” than the View from Nowhere. These are two different ways of bidding for the confidence of the users.
  • In the newer way, the logic is different. “Look, I’m not going to pretend that I have no view. Instead, I am going to level with you about where I’m coming from on this. So factor that in when you evaluate my report. Because I’ve done the work and this is what I’ve concluded…”
  • if objectivity means trying to ground truth claims in verifiable facts, I am definitely for that. If it means there’s a “hard” reality out there that exists beyond any of our descriptions of it, sign me up. If objectivity is the requirement to acknowledge what is, regardless of whether we want it to be that way, then I want journalists who can be objective in that sense.
Javier E

A News Organization That Rejects the View From Nowhere - Conor Friedersdorf - The Atlantic - 1 views

  • For many years, Rosen has been a leading critic of what he calls The View From Nowhere, or the conceit that journalists bring no prior commitments to their work. On his long-running blog, PressThink, he's advocated for "The View From Somewhere"—an effort by journalists to be transparent about their priors, whether ideological or otherwise.  Rosen is just one of several voices who'll shape NewCo. Still, the new venture may well be a practical test of his View from Somewhere theory of journalism. I chatted with Rosen about some questions he'll face. 
  • The View from Nowhere won’t be a requirement for our journalists. Nor will a single ideology prevail. NewCo itself will have a view of the world: Accountability journalism, exposing abuses of power, revealing injustices will no doubt be part of it. Under that banner many “views from somewhere” can fit.
  • The way "objectivity" evolves historically is out of something much more defensible and interesting, which is in that phrase "Of No Party or Clique." That's the founders of The Atlantic saying they want to be independent of party politics. They don't claim to have no politics, do they? They simply say: We're not the voice of an existing faction or coalition. But they're also not the Voice of God.
  • ...10 more annotations...
  • NewCo will emulate the founders of The Atlantic. At some point "independent from" turned into "objective about." That was the wrong turn, made long ago, by professional journalism, American-style.
  • You've written that The View From Nowhere is, in part, a defense mechanism against charges of bias originating in partisan politics. If you won't be invoking it, what will your defense be when those charges happen? There are two answers to that. 1) We told you where we're coming from. 2) High standards of verification. You need both.
  • What about ideological diversity? The View from Somewhere obviously permits it. You've said you'll have it. Is that because it is valuable in itself?
  • The basic insight is correct: Since "news judgment" is judgment, the product is improved when there are multiple perspectives at the table ... But, if the people who are recruited to the newsroom because they add perspectives that might otherwise be overlooked are also taught that they should leave their politics at the door, or think like professional journalists rather than representatives of their community, or privilege something called "news values" over the priorities they had when they decided to become journalists, then these people are being given a fatally mixed message, if you see what I mean. They are valued for the perspective they bring, and then told that they should transcend that perspective.
  • When people talk about objectivity in journalism they have many different things in mind. Some of these I have no quarrel with. You could even say I’m a “fan.” For example, if objectivity means trying to ground truth claims in verifiable facts, I am definitely for that. If it means there’s a “hard” reality out there that exists beyond any of our descriptions of it, sign me up. If objectivity is the requirement to acknowledge what is, regardless of whether we want it to be that way, then I want journalists who can be objective in that sense. Don’t you? If it means trying to see things in that fuller perspective Thomas Nagel talked about–pulling the camera back, revealing our previous position as only one of many–I second the motion. If it means the struggle to get beyond the limited perspective that our experience and upbringing afford us… yeah, we need more of that, not less. I think there is value in acts of description that do not attempt to say whether the thing described is good or bad. Is that objectivity? If so, I’m all for it, and I do that myself sometimes. 
  • By "we can do better than that" I mean: We can insist on the struggle to tell it like it is without also insisting on the View from Nowhere. The two are not connected. It was a mistake to think that they necessarily are. But why was this mistake made? To control people in the newsroom from "above." That's a big part of objectivity. Not truth. Control.
  • If it works out as you hope, if things are implemented well, etc., what's the potential payoff for readers? I think it's three things: First, this is a news site that is born into the digital world, but doesn't have to return profits to investors. That's not totally unique
  • Second: It's going to be a technology company as much as a news organization. That should result in better service.
  • a good formula for innovation is to start with something people want to do and eliminate some of the steps required to do it
  • The third upside is news with a human voice restored to it. This is the great lesson that blogging gives to journalism
knudsenlu

Will the Quantum Nature of Gravity Finally Be Measured? - The Atlantic - 0 views

  • In 1935, when both quantum mechanics and Albert Einstein’s general theory of relativity were young, a little-known Soviet physicist named Matvei Bronstein, just 28 himself, made the first detailed study of the problem of reconciling the two in a quantum theory of gravity. This “possible theory of the world as a whole,” as Bronstein called it, would supplant Einstein’s classical description of gravity, which casts it as curves in the space-time continuum, and rewrite it in the same quantum language as the rest of physics.
  • His words were prophetic. Eighty-three years later, physicists are still trying to understand how space-time curvature emerges on macroscopic scales from a more fundamental, presumably quantum picture of gravity; it’s arguably the deepest question in physics.
  • The search for the full theory of quantum gravity has been stymied by the fact that gravity’s quantum properties never seem to manifest in actual experience. Physicists never get to see how Einstein’s description of the smooth space-time continuum, or Bronstein’s quantum approximation of it when it’s weakly curved, goes wrong.
  • ...4 more annotations...
  • Not only that, but the universe appears to be governed by a kind of cosmic censorship: Regions of extreme gravity—where space-time curves so sharply that Einstein’s equations malfunction and the true, quantum nature of gravity and space-time must be revealed—always hide behind the horizons of black holes.
  • Dyson, who helped develop quantum electrodynamics (the theory of interactions between matter and light) and is professor emeritus at the Institute for Advanced Study in Princeton, New Jersey, where he overlapped with Einstein, disagrees with the argument that quantum gravity is needed to describe the unreachable interiors of black holes. And he wonders whether detecting the hypothetical graviton might be impossible, even in principle. In that case, he argues, quantum gravity is metaphysical, rather than physics.
  • The ability to detect the “grin” of quantum gravity would seem to refute Dyson’s argument. It would also kill the gravitational decoherence theory, by showing that gravity and space-time do maintain quantum superpositions.
  • If gravity is a quantum interaction, then the answer is: It depends. Each component of the blue diamond’s superposition will experience a stronger or weaker gravitational attraction to the red diamond, depending on whether the latter is in the branch of its superposition that’s closer or farther away. And the gravity felt by each component of the red diamond’s superposition similarly depends on where the blue diamond is.
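The scale of the effect such diamond experiments chase can be estimated on the back of an envelope. A sketch using the standard Newtonian phase estimate φ ≈ Gm²t/(ħd); the mass, separation, and fall time below are illustrative assumptions of mine, not numbers from the article:

```python
# Rough size of the gravitationally induced phase between two superposed
# microdiamonds: phi ~ (G * m**2 / (hbar * d)) * t.
# All specific numbers are illustrative assumptions.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s

m = 1e-14          # kg: a micron-scale diamond
d = 200e-6         # m: separation between superposition branches
t = 2.5            # s: free-fall time while the superpositions interact

phi = G * m**2 / (hbar * d) * t
print(f"entangling phase: {phi:.2f} rad")
# A phase of order one radian is what interference measurements would
# need to resolve to decide whether gravity entangled the two diamonds.
```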