
TOK Friends / Group items matching "anticipation" in title, tags, annotations or url

aliciathompson1

Can economics be ethical? | Prospect Magazine - 2 views

  • Recent debates about the economy have rediscovered the question, “is that right?”, where “right” means more than just profits or efficiency.
  • Some argue that because free markets allow for personal choice, they are already ethical. Others have accepted the ethical critique and embraced corporate social responsibility.
  • Most radical of all are the ethical systems that reject the market completely. Marxists, some feminists and a few Buddhist approaches to economics take this line: their ethics dispute the starting points of classical market economics—ideas like individual consumer sovereignty, private property and the attractiveness of material wealth. They conclude that to be ethical, an individual should withdraw from the market entirely, or even actively disrupt it.
  • ...1 more annotation...
  • These human quirks mean we can never make purely “rational” decisions. A new wave of behavioural economists, aided by neuroscientists, is trying to understand our psychology, both alone and in groups, so they can anticipate our decisions in the marketplace more accurately.
Javier E

ROUGH TYPE | Nicholas Carr's blog - 0 views

  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • ...39 more annotations...
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Social skills and relationships seem to suffer as well.
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience. To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
Javier E

A New Dark Age Looms - The New York Times - 1 views

  • picture yourself in our grandchildren’s time, a century hence. Significant global warming has occurred, as scientists predicted. Nature’s longstanding, repeatable patterns — relied on for millenniums by humanity to plan everything from infrastructure to agriculture — are no longer so reliable. Cycles that have been largely unwavering during modern human history are disrupted by substantial changes in temperature and precipitation.
  • As Earth’s warming stabilizes, new patterns begin to appear. At first, they are confusing and hard to identify. Scientists note similarities to Earth’s emergence from the last ice age. These new patterns need many years — sometimes decades or more — to reveal themselves fully, even when monitored with our sophisticated observing systems
  • Disruptive societal impacts will be widespread.
  • ...9 more annotations...
  • Our foundation of Earth knowledge, largely derived from historically observed patterns, has been central to society’s progress. Early cultures kept track of nature’s ebb and flow, passing improved knowledge about hunting and agriculture to each new generation. Science has accelerated this learning process through advanced observation methods and pattern discovery techniques. These allow us to anticipate the future with a consistency unimaginable to our ancestors.
  • But as Earth warms, our historical understanding will turn obsolete faster than we can replace it with new knowledge. Some patterns will change significantly; others will be largely unaffected
  • The list of possible disruptions is long and alarming.
  • Historians of the next century will grasp the importance of this decline in our ability to predict the future. They may mark the coming decades of this century as the period during which humanity, despite rapid technological and scientific advances, achieved “peak knowledge” about the planet it occupies
  • One exception to this pattern-based knowledge is the weather, whose underlying physics governs how the atmosphere moves and adjusts. Because we understand the physics, we can replicate the atmosphere with computer models.
  • But farmers need to think a season or more ahead. So do infrastructure planners as they design new energy and water systems
  • The intermediate time period is our big challenge. Without substantial scientific breakthroughs, we will remain reliant on pattern-based methods for time periods between a month and a decade. The problem is, as the planet warms, these patterns will become increasingly difficult to discern.
  • The oceans, which play a major role in global weather patterns, will also see substantial changes as global temperatures rise. Ocean currents and circulation patterns evolve on time scales of decades and longer, and fisheries change in response. We lack reliable, physics-based models to tell us how this occurs.
  • Our grandchildren could grow up knowing less about the planet than we do today. This is not a legacy we want to leave them. Yet we are on the verge of ensuring this happens.
Javier E

It's True: False News Spreads Faster and Wider. And Humans Are to Blame. - The New York Times - 0 views

  • What if the scourge of false news on the internet is not the result of Russian operatives or partisan zealots or computer-controlled bots? What if the main problem is us?
  • People are the principal culprits
  • people, the study’s authors also say, prefer false news.
  • ...18 more annotations...
  • As a result, false news travels faster, farther and deeper through the social network than true news.
  • those patterns applied to every subject they studied, not only politics and urban legends, but also business, science and technology.
  • The stories were classified as true or false, using information from six independent fact-checking organizations including Snopes, PolitiFact and FactCheck.org
  • with or without the bots, the results were essentially the same.
  • “It’s not really the robots that are to blame.”
  • “News” and “stories” were defined broadly — as claims of fact — regardless of the source. And the study explicitly avoided the term “fake news,” which, the authors write, has become “irredeemably polarized in our current political and media climate.”
  • False claims were 70 percent more likely than the truth to be shared on Twitter. True stories were rarely retweeted by more than 1,000 people, but the top 1 percent of false stories were routinely shared by 1,000 to 100,000 people. And it took true stories about six times as long as false ones to reach 1,500 people.
  • the researchers enlisted students to annotate as true or false more than 13,000 other stories that circulated on Twitter.
  • “The comprehensiveness is important here, spanning the entire history of Twitter,” said Jon Kleinberg, a computer scientist at Cornell University. “And this study shines a spotlight on the open question of the success of false information online.”
  • The M.I.T. researchers pointed to factors that contribute to the appeal of false news. Applying standard text-analysis tools, they found that false claims were significantly more novel than true ones — maybe not a surprise, since falsehoods are made up.
  • The goal, said Soroush Vosoughi, a postdoctoral researcher at the M.I.T. Media Lab and the lead author, was to find clues about what is “in the nature of humans that makes them like to share false news.”
  • The study analyzed the sentiment expressed by users in replies to claims posted on Twitter. As a measurement tool, the researchers used a system created by Canada’s National Research Council that associates English words with eight emotions (a rough sketch of this kind of lexicon-based tallying appears after this list).
  • False claims elicited replies expressing greater surprise and disgust. True news inspired more anticipation, sadness and joy, depending on the nature of the stories.
  • The M.I.T. researchers said that understanding how false news spreads is a first step toward curbing it. They concluded that human behavior plays a large role in explaining the phenomenon, and mention possible interventions, like better labeling, to alter behavior.
  • For all the concern about false news, there is little certainty about its influence on people’s beliefs and actions. A recent study of the browsing histories of thousands of American adults in the months before the 2016 election found that false news accounted for only a small portion of the total news people consumed.
  • In fall 2016, Mr. Roy, an associate professor at the M.I.T. Media Lab, became a founder and the chairman of Cortico, a nonprofit that is developing tools to measure public conversations online to gauge attributes like shared attention, variety of opinion and receptivity. The idea is that improving the ability to measure such attributes would lead to better decision-making that would counteract misinformation.
  • Mr. Roy acknowledged the challenge in trying not only to alter individual behavior but also to enlist the support of big internet platforms like Facebook, Google, YouTube and Twitter, and media companies
  • “Polarization,” he said, “has turned out to be a great business model.”
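
The lexicon-based emotion tallying described in the annotations above can be illustrated with a short sketch. This is only an illustration under assumptions: the word-to-emotion mappings, the sample replies, and the emotion_profile helper below are made-up stand-ins, not the actual NRC word-emotion lexicon or the study's real pipeline.

```python
# Minimal sketch of lexicon-based emotion tallying, in the spirit of the
# NRC word-emotion association lexicon described above. The word lists and
# example replies are illustrative stand-ins, not the actual lexicon or data.
from collections import Counter

# Hypothetical excerpt: each word maps to one or more of the eight emotions.
EMOTION_LEXICON = {
    "shocking":     {"surprise"},
    "unbelievable": {"surprise"},
    "gross":        {"disgust"},
    "vile":         {"disgust", "anger"},
    "hopeful":      {"anticipation", "joy"},
    "soon":         {"anticipation"},
    "tragic":       {"sadness"},
    "wonderful":    {"joy", "trust"},
    "scary":        {"fear"},
}

def emotion_profile(text: str) -> Counter:
    """Count how many lexicon words in `text` are associated with each emotion."""
    counts = Counter()
    for token in text.lower().split():
        word = token.strip(".,!?\"'")
        for emotion in EMOTION_LEXICON.get(word, ()):
            counts[emotion] += 1
    return counts

replies_to_false_claim = ["Shocking if true!", "This is vile and gross."]
replies_to_true_claim = ["Wonderful news, hopeful it lands soon."]

# Aggregate emotion counts per group of replies.
print(sum((emotion_profile(r) for r in replies_to_false_claim), Counter()))
print(sum((emotion_profile(r) for r in replies_to_true_claim), Counter()))
```

In this spirit, a reply like "Shocking if true!" adds to the surprise tally, and comparing such aggregates between replies to false and true claims is the kind of contrast the researchers reported (more surprise and disgust for false claims; more anticipation, sadness and joy for true ones).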
Javier E

Movie Review: Inside Job - Barron's - 0 views

  • On the outsize role of the GSEs and other federal agencies in high-risk mortgages, figures compiled by former Fannie Mae Chief Credit Officer Edward Pinto show that as of mid-2008, more than 70% were accounted for by the federal government in one way or another, with nearly two-thirds of that held by Fannie and Freddie.
  • As has been documented, for example, in a forthcoming book on the GSEs called Guaranteed to Fail, there was a steady increase in affordable housing mandates imposed on these enterprises by Congress, one of several reasons why they were hardly like other capitalist enterprises, but tools and beneficiaries of government.
  • I asked Ferguson why Inside Job made such brief mention of Fannie Mae and Freddie Mac, and even then without noting that they are government-sponsored enterprises, subject to special protection by the federal government—which their creditors clearly appreciated, given the unusually low interest rates their debt commanded.
  • ...7 more annotations...
  • Ferguson replied that their role in subprime mortgages was not very significant, and that in any case their behavior was not much different from that of other capitalist enterprises.
  • We get no inkling that Rajan's views on what made the world riskier, as set forth in his book, veer quite radically from those of Inside Job. They include, as he has written, "the political push for easy housing credit in the United States and the lax monetary policy [by the Federal Reserve] in the years 2003-2005."
  • Rajan, author of Fault Lines, a recent book on the debacle, speaks with special authority to fans of Inside Job. Not only is he in the movie—one of the talking heads speaking wisdom about what occurred—he is accurately presented as having anticipated the meltdown in a 2005 paper called "Has Financial Development Made the World Riskier?" But the things he is quoted as saying in the film are restricted to serving its themes.
  • Yet it's impossible to understand what happened without grasping the proactive role played by government. "The banking sector did not decide out of the goodness of its heart to extend mortgages to poor people," commented University of Chicago Booth School of Business Finance Professor Raghuram Rajan in a telephone interview last week. "Politicians did that, and they would have taken great umbrage if the regulator stood in the way of more housing credit."
  • The story recounted in Inside Job is that principles like safety and soundness were flouted by greedy Wall Street capitalists who brought down the economy with the help of certain politicians, political appointees and corrupt academicians. Despite the attempts and desires of some, including Barney Frank, to regulate the mania, the juggernaut prevails to this day, under the presidency of Barack Obama.
  • This version of the story contains some elements of truth.
  • "A masterpiece of investigative nonfiction moviemaking," wrote the film critic of the Boston Globe. "Rests its outrage on reason, research and careful argument," opined the New York Times. The "masterpiece" referred to was the recently released Inside Job, a documentary film that focuses on the causes of the 2008 financial crisis.
Javier E

At the Existentialist Café: Freedom, Being, and Apricot Cocktails with Jean-Paul Sartre, Simone de Beauvoir, Albert Camus, Martin Heidegger, Maurice Merleau-Ponty and Others (Sarah Bakewell) - 0 views

  • The phenomenologists’ leading thinker, Edmund Husserl, provided a rallying cry, ‘To the things themselves!’ It meant: don’t waste time on the interpretations that accrue upon things, and especially don’t waste time wondering whether the things are real. Just look at this that’s presenting itself to you, whatever this may be, and describe it as precisely as possible.
  • You might think you have defined me by some label, but you are wrong, for I am always a work in progress. I create myself constantly through action, and this is so fundamental to my human condition that, for Sartre, it is the human condition, from the moment of first consciousness to the moment when death wipes it out. I am my own freedom: no more, no less.
  • Sartre wrote like a novelist — not surprisingly, since he was one. In his novels, short stories and plays as well as in his philosophical treatises, he wrote about the physical sensations of the world and the structures and moods of human life. Above all, he wrote about one big subject: what it meant to be free. Freedom, for him, lay at the heart of all human experience, and this set humans apart from all other kinds of object.
  • ...97 more annotations...
  • Sartre listened to his problem and said simply, ‘You are free, therefore choose — that is to say, invent.’ No signs are vouchsafed in this world, he said. None of the old authorities can relieve you of the burden of freedom. You can weigh up moral or practical considerations as carefully as you like, but ultimately you must take the plunge and do something, and it’s up to you what that something is.
  • Even if the situation is unbearable — perhaps you are facing execution, or sitting in a Gestapo prison, or about to fall off a cliff — you are still free to decide what to make of it in mind and deed. Starting from where you are now, you choose. And in choosing, you also choose who you will be.
  • The war had made people realise that they and their fellow humans were capable of departing entirely from civilised norms; no wonder the idea of a fixed human nature seemed questionable.
  • If this sounds difficult and unnerving, it’s because it is. Sartre does not deny that the need to keep making decisions brings constant anxiety. He heightens this anxiety by pointing out that what you do really matters. You should make your choices as though you were choosing on behalf of the whole of humanity, taking the entire burden of responsibility for how the human race behaves. If you avoid this responsibility by fooling yourself that you are the victim of circumstance or of someone else’s bad advice, you are failing to meet the demands of human life and choosing a fake existence, cut off from your own ‘authenticity’.
  • Along with the terrifying side of this comes a great promise: Sartre’s existentialism implies that it is possible to be authentic and free, as long as you keep up the effort.
  • almost all agreed that it was, as an article in Les nouvelles littéraires phrased it, a ‘sickening mixture of philosophic pretentiousness, equivocal dreams, physiological technicalities, morbid tastes and hesitant eroticism … an introspective embryo that one would take distinct pleasure in crushing’.
  • he offered a philosophy designed for a species that had just scared the hell out of itself, but that finally felt ready to grow up and take responsibility.
  • In this rebellious world, just as with the Parisian bohemians and Dadaists in earlier generations, everything that was dangerous and provocative was good, and everything that was nice or bourgeois was bad.
  • Such interweaving of ideas and life had a long pedigree, although the existentialists gave it a new twist. Stoic and Epicurean thinkers in the classical world had practised philosophy as a means of living well, rather than of seeking knowledge or wisdom for their own sake. By reflecting on life’s vagaries in philosophical ways, they believed they could become more resilient, more able to rise above circumstances, and better equipped to manage grief, fear, anger, disappointment or anxiety.
  • In the tradition they passed on, philosophy is neither a pure intellectual pursuit nor a collection of cheap self-help tricks, but a discipline for flourishing and living a fully human, responsible life.
  • For Kierkegaard, Descartes had things back to front. In his own view, human existence comes first: it is the starting point for everything we do, not the result of a logical deduction. My existence is active: I live it and choose it, and this precedes any statement I can make about myself.
  • Studying our own moral genealogy cannot help us to escape or transcend ourselves. But it can enable us to see our illusions more clearly and lead a more vital, assertive existence.
  • What was needed, he felt, was not high moral or theological ideals, but a deeply critical form of cultural history or ‘genealogy’ that would uncover the reasons why we humans are as we are, and how we came to be that way. For him, all philosophy could even be redefined as a form of psychology, or history.
  • For those oppressed on grounds of race or class, or for those fighting against colonialism, existentialism offered a change of perspective — literally, as Sartre proposed that all situations be judged according to how they appeared in the eyes of those most oppressed, or those whose suffering was greatest.
  • She observed that we need not expect moral philosophers to ‘live by’ their ideas in a simplistic way, as if they were following a set of rules. But we can expect them to show how their ideas are lived in. We should be able to look in through the windows of a philosophy, as it were, and see how people occupy it, how they move about and how they conduct themselves.
  • the existentialists inhabited their historical and personal world, as they inhabited their ideas. This notion of ‘inhabited philosophy’ is one I’ve borrowed from the English philosopher and novelist Iris Murdoch, who wrote the first full-length book on Sartre and was an early adopter of existentialism
  • What is existentialism anyway?
  • An existentialist who is also phenomenological provides no easy rules for dealing with this condition, but instead concentrates on describing lived experience as it presents itself. — By describing experience well, he or she hopes to understand this existence and awaken us to ways of living more authentic lives.
  • Existentialists concern themselves with individual, concrete human existence. — They consider human existence different from the kind of being other things have. Other entities are what they are, but as a human I am whatever I choose to make of myself at every moment. I am free — — and therefore I’m responsible for everything I do, a dizzying fact which causes — an anxiety inseparable from human existence itself.
  • On the other hand, I am only free within situations, which can include factors in my own biology and psychology as well as physical, historical and social variables of the world into which I have been thrown. — Despite the limitations, I always want more: I am passionately involved in personal projects of all kinds. — Human existence is thus ambiguous: at once boxed in by borders and yet transcendent and exhilarating. —
  • The first part of this is straightforward: a phenomenologist’s job is to describe. This is the activity that Husserl kept reminding his students to do. It meant stripping away distractions, habits, clichés of thought, presumptions and received ideas, in order to return our attention to what he called the ‘things themselves’. We must fix our beady gaze on them and capture them exactly as they appear, rather than as we think they are supposed to be.
  • Husserl therefore says that, to phenomenologically describe a cup of coffee, I should set aside both the abstract suppositions and any intrusive emotional associations. Then I can concentrate on the dark, fragrant, rich phenomenon in front of me now. This ‘setting aside’ or ‘bracketing out’ of speculative add-ons Husserl called epoché — a term borrowed from the ancient Sceptics,
  • The point about rigour is crucial; it brings us back to the first half of the command to describe phenomena. A phenomenologist cannot get away with listening to a piece of music and saying, ‘How lovely!’ He or she must ask: is it plaintive? is it dignified? is it colossal and sublime? The point is to keep coming back to the ‘things themselves’ — phenomena stripped of their conceptual baggage — so as to bail out weak or extraneous material and get to the heart of the experience.
  • Husserlian ‘bracketing out’ or epoché allows the phenomenologist to temporarily ignore the question ‘But is it real?’, in order to ask how a person experiences his or her world. Phenomenology gives a formal mode of access to human experience. It lets philosophers talk about life more or less as non-philosophers do, while still being able to tell themselves they are being methodical and rigorous.
  • Besides claiming to transform the way we think about reality, phenomenologists promised to change how we think about ourselves. They believed that we should not try to find out what the human mind is, as if it were some kind of substance. Instead, we should consider what it does, and how it grasps its experiences.
  • For Brentano, this reaching towards objects is what our minds do all the time. Our thoughts are invariably of or about something, he wrote: in love, something is loved, in hatred, something is hated, in judgement, something is affirmed or denied. Even when I imagine an object that isn’t there, my mental structure is still one of ‘about-ness’ or ‘of-ness’.
  • Except in deepest sleep, my mind is always engaged in this aboutness: it has ‘intentionality’. Having taken the germ of this from Brentano, Husserl made it central to his whole philosophy.
  • Husserl saw in the idea of intentionality a way to sidestep two great unsolved puzzles of philosophical history: the question of what objects ‘really’ are, and the question of what the mind ‘really’ is. By doing the epoché and bracketing out all consideration of reality from both topics, one is freed to concentrate on the relationship in the middle. One can apply one’s descriptive energies to the endless dance of intentionality that takes place in our lives: the whirl of our minds as they seize their intended phenomena one after the other and whisk them around the floor,
  • Understood in this way, the mind hardly is anything at all: it is its aboutness. This makes the human mind (and possibly some animal minds) different from any other naturally occurring entity. Nothing else can be as thoroughly about or of things as the mind is:
  • Some Eastern meditation techniques aim to still this scurrying creature, but the extreme difficulty of this shows how unnatural it is to be mentally inert. Left to itself, the mind reaches out in all directions as long as it is awake — and even carries on doing it in the dreaming phase of its sleep.
  • a mind that is experiencing nothing, imagining nothing, or speculating about nothing can hardly be said to be a mind at all.
  • Three simple ideas — description, phenomenon, intentionality — provided enough inspiration to keep roomfuls of Husserlian assistants busy in Freiburg for decades. With all of human existence awaiting their attention, how could they ever run out of things to do?
  • For Sartre, this gives the mind an immense freedom. If we are nothing but what we think about, then no predefined ‘inner nature’ can hold us back. We are protean.
  • way of this interpretation. Real, not real; inside, outside; what difference did it make? Reflecting on this, Husserl began turning his phenomenology into a branch of ‘idealism’ — the philosophical tradition which denied external reality and defined everything as a kind of private hallucination.
  • For Sartre, if we try to shut ourselves up inside our own minds, ‘in a nice warm room with the shutters closed’, we cease to exist. We have no cosy home: being out on the dusty road is the very definition of what we are.
  • One might think that, if Heidegger had anything worth saying, he could have communicated it in ordinary language. The fact is that he does not want to be ordinary, and he may not even want to communicate in the usual sense. He wants to make the familiar obscure, and to vex us. George Steiner thought that Heidegger’s purpose was less to be understood than to be experienced through a ‘felt strangeness’.
  • He takes Dasein in its most ordinary moments, then talks about it in the most innovative way he can. For Heidegger, Dasein’s everyday Being is right here: it is Being-in-the-world, or In-der-Welt-sein. The main feature of Dasein’s everyday Being-in-the-world right here is that it is usually busy doing something.
  • Thus, for Heidegger, all Being-in-the-world is also a ‘Being-with’ or Mitsein. We cohabit with others in a ‘with-world’, or Mitwelt. The old philosophical problem of how we prove the existence of other minds has now vanished. Dasein swims in the with-world long before it wonders about other minds.
  • Sometimes the best-educated people were those least inclined to take the Nazis seriously, dismissing them as too absurd to last. Karl Jaspers was one of those who made this mistake, as he later recalled, and Beauvoir observed similar dismissive attitudes among the French students in Berlin.
  • In any case, most of those who disagreed with Hitler’s ideology soon learned to keep their view to themselves. If a Nazi parade passed on the street, they would either slip out of view or give the obligatory salute like everyone else, telling themselves that the gesture meant nothing if they did not believe in it. As the psychologist Bruno Bettelheim later wrote of this period, few people will risk their life for such a small thing as raising an arm — yet that is how one’s powers of resistance are eroded away, and eventually one’s responsibility and integrity go with them.
  • for Arendt, if you do not respond adequately when the times demand it, you show a lack of imagination and attention that is as dangerous as deliberately committing an abuse. It amounts to disobeying the one command she had absorbed from Heidegger in those Marburg days: Think!
  • ‘Everything takes place under a kind of anaesthesia. Objectively dreadful events produce a thin, puny emotional response. Murders are committed like schoolboy pranks. Humiliation and moral decay are accepted like minor incidents.’ Haffner thought modernity itself was partly to blame: people had become yoked to their habits and to mass media, forgetting to stop and think, or to disrupt their routines long enough to question what was going on.
  • Heidegger’s former lover and student Hannah Arendt would argue, in her 1951 study The Origins of Totalitarianism, that totalitarian movements thrived at least partly because of this fragmentation in modern lives, which made people more vulnerable to being swept away by demagogues. Elsewhere, she coined the phrase ‘the banality of evil’ to describe the most extreme failures of personal moral awareness.
  • His communicative ideal fed into a whole theory of history: he traced all civilisation to an ‘Axial Period’ in the fifth century BC, during which philosophy and culture exploded simultaneously in Europe, the Middle East and Asia, as though a great bubble of minds had erupted from the earth’s surface. ‘True philosophy needs communion to come into existence,’ he wrote, and added, ‘Uncommunicativeness in a philosopher is virtually a criterion of the untruth of his thinking.’
  • The idea of being called to authenticity became a major theme in later existentialism, the call being interpreted as saying something like ‘Be yourself!’, as opposed to being phony. For Heidegger, the call is more fundamental than that. It is a call to take up a self that you didn’t know you had: to wake up to your Being. Moreover, it is a call to action. It requires you to do something: to take a decision of some sort.
  • Being and Time contained at least one big idea that should have been of use in resisting totalitarianism. Dasein, Heidegger wrote there, tends to fall under the sway of something called das Man or ‘the they’ — an impersonal entity that robs us of the freedom to think for ourselves. To live authentically requires resisting or outwitting this influence, but this is not easy because das Man is so nebulous. Man in German does not mean ‘man’ as in English (that’s der Mann), but a neutral abstraction, something like ‘one’ in the English phrase ‘one doesn’t do that’,
  • for Heidegger, das Man is me. It is everywhere and nowhere; it is nothing definite, but each of us is it. As with Being, it is so ubiquitous that it is difficult to see. If I am not careful, however, das Man takes over the important decisions that should be my own. It drains away my responsibility or ‘answerability’. As Arendt might put it, we slip into banality, failing to think.
  • Jaspers focused on what he called Grenzsituationen — border situations, or limit situations. These are the moments when one finds oneself constrained or boxed in by what is happening, but at the same time pushed by these events towards the limits or outer edge of normal experience. For example, you might have to make a life-or-death choice, or something might remind you suddenly of your mortality,
  • Jaspers’ interest in border situations probably had much to do with his own early confrontation with mortality. From childhood, he had suffered from a heart condition so severe that he always expected to die at any moment. He also had emphysema, which forced him to speak slowly, taking long pauses to catch his breath. Both illnesses meant that he had to budget his energies with care in order to get his work done without endangering his life.
  • If I am to resist das Man, I must become answerable to the call of my ‘voice of conscience’. This call does not come from God, as a traditional Christian definition of the voice of conscience might suppose. It comes from a truly existentialist source: my own authentic self. Alas, this voice is one I do not recognise and may not hear, because it is not the voice of my habitual ‘they-self’. It is an alien or uncanny version of my usual voice. I am familiar with my they-self, but not with my unalienated voice — so, in a weird twist, my real voice is the one that sounds strangest to me.
  • Marcel developed a strongly theological branch of existentialism. His faith distanced him from both Sartre and Heidegger, but he shared a sense of how history makes demands on individuals. In his essay ‘On the Ontological Mystery’, written in 1932 and published in the fateful year of 1933, Marcel wrote of the human tendency to become stuck in habits, received ideas, and a narrow-minded attachment to possessions and familiar scenes. Instead, he urged his readers to develop a capacity for remaining ‘available’ to situations as they arise. Similar ideas of disponibilité or availability had been explored by other writers,
  • Marcel made it his central existential imperative. He was aware of how rare and difficult it was. Most people fall into what he calls ‘crispation’: a tensed, encrusted shape in life — ‘as though each one of us secreted a kind of shell which gradually hardened and imprisoned him’.
  • Bettelheim later observed that, under Nazism, only a few people realised at once that life could not continue unaltered: these were the ones who got away quickly. Bettelheim himself was not among them. Caught in Austria when Hitler annexed it, he was sent first to Dachau and then to Buchenwald, but was then released in a mass amnesty to celebrate Hitler’s birthday in 1939 — an extraordinary reprieve, after which he left at once for America.
  • we are used to reading philosophy as offering a universal message for all times and places — or at least as aiming to do so. But Heidegger disliked the notion of universal truths or universal humanity, which he considered a fantasy. For him, Dasein is not defined by shared faculties of reason and understanding, as the Enlightenment philosophers thought. Still less is it defined by any kind of transcendent eternal soul, as in religious tradition. We do not exist on a higher, eternal plane at all. Dasein’s Being is local: it has a historical situation, and is constituted in time and place.
  • For Marcel, learning to stay open to reality in this way is the philosopher’s prime job. Everyone can do it, but the philosopher is the one who is called on above all to stay awake, so as to be the first to sound the alarm if something seems wrong.
  • Second, it also means understanding that we are historical beings, and grasping the demands our particular historical situation is making on us. In what Heidegger calls ‘anticipatory resoluteness’, Dasein discovers ‘that its uttermost possibility lies in giving itself up’. At that moment, through Being-towards-death and resoluteness in facing up to one’s time, one is freed from the they-self and attains one’s true, authentic self.
  • If we are temporal beings by our very nature, then authentic existence means accepting, first, that we are finite and mortal. We will die: this all-important realisation is what Heidegger calls authentic ‘Being-towards-Death’, and it is fundamental to his philosophy.
  • Hannah Arendt, instead, left early on: she had the benefit of a powerful warning. Just after the Nazi takeover, in spring 1933, she had been arrested while researching materials on anti-Semitism for the German Zionist Organisation at Berlin’s Prussian State Library. Her apartment was searched; both she and her mother were locked up briefly, then released. They fled, without stopping to arrange travel documents. They crossed to Czechoslovakia (then still safe) by a method that sounds almost too fabulous to be true: a sympathetic German family on the border had a house with its front door in Germany and its back door in Czechoslovakia. The family would invite people for dinner, then let them leave through the back door at night.
  • As Sartre argued in his 1943 review of The Stranger, basic phenomenological principles show that experience comes to us already charged with significance. A piano sonata is a melancholy evocation of longing. If I watch a soccer match, I see it as a soccer match, not as a meaningless scene in which a number of people run around taking turns to apply their lower limbs to a spherical object. If the latter is what I’m seeing, then I am not watching some more essential, truer version of soccer; I am failing to watch it properly as soccer at all.
  • Much as they liked Camus personally, neither Sartre nor Beauvoir accepted his vision of absurdity. For them, life is not absurd, even when viewed on a cosmic scale, and nothing can be gained by saying it is. Life for them is full of real meaning, although that meaning emerges differently for each of us.
  • For Sartre, we show bad faith whenever we portray ourselves as passive creations of our race, class, job, history, nation, family, heredity, childhood influences, events, or even hidden drives in our subconscious which we claim are out of our control. It is not that such factors are unimportant: class and race, in particular, he acknowledged as powerful forces in people’s lives, and Simone de Beauvoir would soon add gender to that list.
  • Sartre takes his argument to an extreme point by asserting that even war, imprisonment or the prospect of imminent death cannot take away my existential freedom. They form part of my ‘situation’, and this may be an extreme and intolerable situation, but it still provides only a context for whatever I choose to do next. If I am about to die, I can decide how to face that death. Sartre here resurrects the ancient Stoic idea that I may not choose what happens to me, but I can choose what to make of it, spiritually speaking.
  • But the Stoics cultivated indifference in the face of terrible events, whereas Sartre thought we should remain passionately, even furiously engaged with what happens to us and with what we can achieve. We should not expect freedom to be anything less than fiendishly difficult.
  • Freedom does not mean entirely unconstrained movement, and it certainly does not mean acting randomly. We often mistake the very things that enable us to be free — context, meaning, facticity, situation, a general direction in our lives — for things that define us and take away our freedom. It is only with all of these that we can be free in a real sense.
  • Nor did he mean that privileged groups have the right to pontificate to the poor and downtrodden about the need to ‘take responsibility’ for themselves. That would be a grotesque misreading of Sartre’s point, since his sympathy in any encounter always lay with the more oppressed side. But for each of us — for me — to be in good faith means not making excuses for myself.
  • Camus’ novel gives us a deliberately understated vision of heroism and decisive action compared to those of Sartre and Beauvoir. One can only do so much. It can look like defeatism, but it shows a more realistic perception of what it takes to actually accomplish difficult tasks like liberating one’s country.
  • Camus just kept returning to his core principle: no torture, no killing — at least not with state approval. Beauvoir and Sartre believed they were taking a more subtle and more realistic view. If asked why a couple of innocuous philosophers had suddenly become so harsh, they would have said it was because the war had changed them in profound ways. It had shown them that one’s duties to humanity could be more complicated than they seemed. ‘The war really divided my life in two,’ Sartre said later.
  • Poets and artists ‘let things be’, but they also let things come out and show themselves. They help to ease things into ‘unconcealment’ (Unverborgenheit), which is Heidegger’s rendition of the Greek term alētheia, usually translated as ‘truth’. This is a deeper kind of truth than the mere correspondence of a statement to reality, as when we say ‘The cat is on the mat’ and point to a mat with a cat on it. Long before we can do this, both cat and mat must ‘stand forth out of concealedness’. They must un-hide themselves.
  • Heidegger does not use the word ‘consciousness’ here because — as with his earlier work — he is trying to make us think in a radically different way about ourselves. We are not to think of the mind as an empty cavern, or as a container filled with representations of things. We are not even supposed to think of it as firing off arrows of intentional ‘aboutness’, as in the earlier phenomenology of Brentano. Instead, Heidegger draws us into the depths of his Schwarzwald, and asks us to imagine a gap with sunlight filtering in. We remain in the forest, but we provide a relatively open spot where other beings can bask for a moment. If we did not do this, everything would remain in the thickets, hidden even to itself.
  • The astronomer Carl Sagan began his 1980 television series Cosmos by saying that human beings, though made of the same stuff as the stars, are conscious and are therefore ‘a way for the cosmos to know itself’. Merleau-Ponty similarly quoted his favourite painter Cézanne as saying, ‘The landscape thinks itself in me, and I am its consciousness.’ This is something like what Heidegger thinks humanity contributes to the earth. We are not made of spiritual nothingness; we are part of Being, but we also bring something unique with us. It is not much: a little open space, perhaps with a path and a bench like the one the young Heidegger used to sit on to do his homework. But through us, the miracle occurs.
  • Beauty aside, Heidegger’s late writing can also be troubling, with its increasingly mystical notion of what it is to be human. If one speaks of a human being mainly as an open space or a clearing, or a means of ‘letting beings be’ and dwelling poetically on the earth, then one doesn’t seem to be talking about any recognisable person. The old Dasein has become less human than ever. It is now a forestry feature.
  • Even today, Jaspers, the dedicated communicator, is far less widely read than Heidegger, who has influenced architects, social theorists, critics, psychologists, artists, film-makers, environmental activists, and innumerable students and enthusiasts — including the later deconstructionist and post-structuralist schools, which took their starting point from his late thinking. Having spent the late 1940s as an outsider and then been rehabilitated, Heidegger became the overwhelming presence in university philosophy all over the European continent from then on.
  • As Levinas reflected on this experience, it helped to lead him to a philosophy that was essentially ethical, rather than ontological like Heidegger’s. He developed his ideas from the work of Jewish theologian Martin Buber, whose I and Thou in 1923 had distinguished between my relationship with an impersonal ‘it’ or ‘them’, and the direct personal encounter I have with a ‘you’. Levinas took it further: when I encounter you, we normally meet face-to-face, and it is through your face that you, as another person, can make ethical demands on me. This is very different from Heidegger’s Mitsein or Being-with, which suggests a group of people standing alongside one another, shoulder to shoulder as if in solidarity — perhaps as a unified nation or Volk.
  • For Levinas, we literally face each other, one individual at a time, and that relationship becomes one of communication and moral expectation. We do not merge; we respond to one another. Instead of being co-opted into playing some role in my personal drama of authenticity, you look me in the eyes — and you remain Other. You remain you.
  • This relationship is more fundamental than the self, more fundamental than consciousness, more fundamental even than Being — and it brings an unavoidable ethical obligation. Ever since Husserl, phenomenologists and existentialists had been trying to stretch the definition of existence to incorporate our social lives and relationships. Levinas did more: he turned philosophy around entirely so that these relationships were the foundation of our existence, not an extension of it.
  • Her last work, The Need for Roots, argues, among other things, that none of us has rights, but each one of us has a near-infinite degree of duty and obligation to the other. Whatever the underlying cause of her death — and anorexia nervosa seems to have been involved — no one could deny that she lived out her philosophy with total commitment. Of all the lives touched on in this book, hers is surely the most profound and challenging application of Iris Murdoch’s notion that a philosophy can be ‘inhabited’.
  • Other thinkers took radical ethical turns during the war years. The most extreme was Simone Weil, who actually tried to live by the principle of putting other people’s ethical demands first. Having returned to France after her travels through Germany in 1932, she had worked in a factory so as to experience the degrading nature of such work for herself. When France fell in 1940, her family fled to Marseilles (against her protests), and later to the US and to Britain. Even in exile, Weil made extraordinary sacrifices. If there were people in the world who could not sleep in a bed, she would not do so either, so she slept on the floor.
  • The mystery tradition had roots in Kierkegaard’s ‘leap of faith’. It owed much to the other great nineteenth-century mystic of the impossible, Dostoevsky, and to older theological notions. But it also grew from the protracted trauma that was the first half of the twentieth century. Since 1914, and especially since 1939, people in Europe and elsewhere had come to the realisation that we cannot fully know or trust ourselves; that we have no excuses or explanations for what we do — and yet that we must ground our existence and relationships on something firm, because otherwise we cannot survive.
  • One striking link between these radical ethical thinkers, all on the fringes of our main story, is that they had religious faith. They also granted a special role to the notion of ‘mystery’ — that which cannot be known, calculated or understood, especially when it concerns our relationships with each other. Heidegger was different from them, since he rejected the religion he grew up with and had no real interest in ethics — probably as a consequence of his having no real interest in the human.
  • Meanwhile, the Christian existentialist Gabriel Marcel was also still arguing, as he had since the 1930s, that ethics trumps everything else in philosophy and that our duty to each other is so great as to play the role of a transcendent ‘mystery’. He too had been led to this position partly by a wartime experience: during the First World War he had worked for the Red Cross’ Information Service, with the unenviable job of answering relatives’ inquiries about missing soldiers. Whenever news came, he passed it on, and usually it was not good. As Marcel later said, this task permanently inoculated him against warmongering rhetoric of any kind, and it made him aware of the power of what is unknown in our lives.
  • As the play’s much-quoted and frequently misunderstood final line has it: ‘Hell is other people.’ Sartre later explained that he did not mean to say that other people were hellish in general. He meant that after death we become frozen in their view, unable any longer to fend off their interpretation. In life, we can still do something to manage the impression we make; in death, this freedom goes and we are left entombed in other people’s memories and perceptions.
  • We have to do two near-impossible things at once: understand ourselves as limited by circumstances, and yet continue to pursue our projects as though we are truly in control. In Beauvoir’s view, existentialism is the philosophy that best enables us to do this, because it concerns itself so deeply with both freedom and contingency. It acknowledges the radical and terrifying scope of our freedom in life, but also the concrete influences that other philosophies tend to ignore: history, the body, social relationships and the environment.
  • The aspects of our existence that limit us, Merleau-Ponty says, are the very same ones that bind us to the world and give us scope for action and perception. They make us what we are. Sartre acknowledged the need for this trade-off, but he found it more painful to accept. Everything in him longed to be free of bonds, of impediments and limitations
  • Of course we have to learn this skill of interpreting and anticipating the world, and this happens in early childhood, which is why Merleau-Ponty thought child psychology was essential to philosophy. This is an extraordinary insight. Apart from Rousseau, very few philosophers before him had taken childhood seriously; most wrote as though all human experience were that of a fully conscious, rational, verbal adult who has been dropped into this world from the sky — perhaps by a stork.
  • For Merleau-Ponty, we cannot understand our experience if we don’t think of ourselves in part as overgrown babies. We fall for optical illusions because we once learned to see the world in terms of shapes, objects and things relevant to our own interests. Our first perceptions came to us in tandem with our first active experiments in observing the world and reaching out to explore it, and are still linked with those experiences.
  • Another factor in all of this, for Merleau-Ponty, is our social existence: we cannot thrive without others, or not for long, and we need this especially in early life. This makes solipsistic speculation about the reality of others ridiculous; we could never engage in such speculation if we hadn’t already been formed by them.
  • As Descartes could have said (but didn’t), ‘I think, therefore other people exist.’ We grow up with people playing with us, pointing things out, talking, listening, and getting us used to reading emotions and movements; this is how we become capable, reflective, smoothly integrated beings.
  • In general, Merleau-Ponty thinks human experience only makes sense if we abandon philosophy’s time-honoured habit of starting with a solitary, capsule-like, immobile adult self, isolated from its body and world, which must then be connected up again — adding each element around it as though adding clothing to a doll. Instead, for him, we slide from the womb to the birth canal to an equally close and total immersion in the world. That immersion continues as long as we live, although we may also cultivate the art of partially withdrawing from time to time when we want to think or daydream.
  • When he looks for his own metaphor to describe how he sees consciousness, he comes up with a beautiful one: consciousness, he suggests, is like a ‘fold’ in the world, as though someone had crumpled a piece of cloth to make a little nest or hollow. It stays for a while, before eventually being unfolded and smoothed away. There is something seductive, even erotic, in this idea of my conscious self as an improvised pouch in the cloth of the world. I still have my privacy — my withdrawing room. But I am part of the world’s fabric, and I remain formed out of it for as long as I am here.
  • By the time of these works, Merleau-Ponty is taking his desire to describe experience to the outer limits of what language can convey. Just as with the late Husserl or Heidegger, or Sartre in his Flaubert book, we see a philosopher venturing so far from shore that we can barely follow. Emmanuel Levinas would head out to the fringes too, eventually becoming incomprehensible to all but his most patient initiates.
  • Sartre once remarked — speaking of a disagreement they had about Husserl in 1941 — that ‘we discovered, astounded, that our conflicts had, at times, stemmed from our childhood, or went back to the elementary differences of our two organisms’. Merleau-Ponty also said in an interview that Sartre’s work seemed strange to him, not because of philosophical differences, but because of a certain ‘register of feeling’, especially in Nausea, that he could not share. Their difference was one of temperament and of the whole way the world presented itself to them.
  • The two also differed in their purpose. When Sartre writes about the body or other aspects of experience, he generally does it in order to make a different point. He expertly evokes the grace of his café waiter, gliding between the tables, bending at an angle just so, steering the drink-laden tray through the air on the tips of his fingers — but he does it all in order to illustrate his ideas about bad faith. When Merleau-Ponty writes about skilled and graceful movement, the movement itself is his point. This is the thing he wants to understand.
  • We can never move definitively from ignorance to certainty, for the thread of the inquiry will constantly lead us back to ignorance again. This is the most attractive description of philosophy I’ve ever read, and the best argument for why it is worth doing, even (or especially) when it takes us no distance at all from our starting point.
  • By prioritising perception, the body, social life and childhood development, Merleau-Ponty gathered up philosophy’s far-flung outsider subjects and brought them in to occupy the centre of his thought.
  • In his inaugural lecture at the Collège de France on 15 January 1953, published as In Praise of Philosophy, he said that philosophers should concern themselves above all with whatever is ambiguous in our experience. At the same time, they should think clearly about these ambiguities, using reason and science. Thus, he said, ‘The philosopher is marked by the distinguishing trait that he possesses inseparably the taste for evidence and the feeling for ambiguity.’ A constant movement is required between these two
  • As Sartre wrote in response to Hiroshima, humanity had now gained the power to wipe itself out, and must decide every single day that it wanted to live. Camus also wrote that humanity faced the task of choosing between collective suicide and a more intelligent use of its technology — ‘between hell and reason’. After 1945, there seemed little reason to trust in humanity’s ability to choose well.
  • Merleau-Ponty observed in a lecture of 1951 that, more than any previous century, the twentieth century had reminded people how ‘contingent’ their lives were — how at the mercy of historical events and other changes that they could not control. This feeling went on long after the war ended. After the A-bombs were dropped on Hiroshima and Nagasaki, many feared that a Third World War would not be long in coming, this time between the Soviet Union and the United States.
Javier E

Philosophy isn't dead yet | Raymond Tallis | Comment is free | The Guardian - 1 views

  • Fundamental physics is in a metaphysical mess and needs help. The attempt to reconcile its two big theories, general relativity and quantum mechanics, has stalled for nearly 40 years. Endeavours to unite them, such as string theory, are mathematically ingenious but incomprehensible even to many who work with them. This is well known.
  • A better-kept secret is that at the heart of quantum mechanics is a disturbing paradox – the so-called measurement problem, arising ultimately out of the Uncertainty Principle – which apparently demonstrates that the very measurements that have established and confirmed quantum theory should be impossible. Oxford philosopher of physics David Wallace has argued that this threatens to make quantum mechanics incoherent, which can be remedied only by vastly multiplying worlds.
  • there is the failure of physics to accommodate conscious beings. The attempt to fit consciousness into the material world, usually by identifying it with activity in the brain, has failed dismally, if only because there is no way of accounting for the fact that certain nerve impulses are supposed to be conscious (of themselves or of the world) while the overwhelming majority (physically essentially the same) are not. In short, physics does not allow for the strange fact that matter reveals itself to material objects (such as physicists).
  • ...3 more annotations...
  • then there is the mishandling of time. The physicist Lee Smolin's recent book, Time Reborn, links the crisis in physics with its failure to acknowledge the fundamental reality of time. Physics is predisposed to lose time because its mathematical gaze freezes change. Tensed time, the difference between a remembered or regretted past and an anticipated or feared future, is particularly elusive. This worried Einstein: in a famous conversation, he mourned the fact that the present tense, "now", lay "just outside of the realm of science".
  • Recent attempts to explain how the universe came out of nothing, which rely on questionable notions such as spontaneous fluctuations in a quantum vacuum, the notion of gravity as negative energy, and the inexplicable free gift of the laws of nature waiting in the wings for the moment of creation, reveal conceptual confusion beneath mathematical sophistication. They demonstrate the urgent need for a radical re-examination of the invisible frameworks within which scientific investigations are conducted.
  • we should reflect on how a scientific image of the world that relies on up to 10 dimensions of space and rests on ideas, such as fundamental particles, that have neither identity nor location, connects with our everyday experience. This should open up larger questions, such as the extent to which mathematical portraits capture the reality of our world – and what we mean by "reality".
Javier E

Opinion | The Apps on My Phone Are Stalking Me - The New York Times - 0 views

  • There is much about the future that keeps me up at night — A.I. weaponry, undetectable viral deepfakes
  • but in the last few years, one technological threat has blipped my fear radar much faster than others. That fear? Ubiquitous surveillance.
  • I am no longer sure that human civilization can undo or evade living under constant, extravagantly detailed physical and even psychic surveillance
  • ...24 more annotations...
  • as a species, we are not doing nearly enough to avoid always being watched or otherwise digitally recorded.
  • your location, your purchases, video and audio from within your home and office, your online searches and every digital wandering, biometric tracking of your face and other body parts, your heart rate and other vital signs, your every communication, recording, and perhaps your deepest thoughts or idlest dreams
  • in the future, if not already, much of this data and more will be collected and analyzed by some combination of governments and corporations, among them a handful of megacompanies whose powers nearly match those of governments
  • Over the last year, as part of Times Opinion’s Privacy Project, I’ve participated in experiments in which my devices were closely monitored in order to determine the kind of data that was being collected about me.
  • I’ve realized how blind we are to the kinds of insights tech companies are gaining about us through our gadgets. Our blindness not only keeps us glued to privacy-invading tech
  • it also means that we’ve failed to create a political culture that is in any way up to the task of limiting surveillance.
  • few of our cultural or political institutions are even much trying to tamp down the surveillance state.
  • Yet the United States and other supposedly liberty-loving Western democracies have not ruled out such a future
  • like Barack Obama before him, Trump and the Justice Department are pushing Apple to create a backdoor into the data on encrypted iPhones — they want the untrustworthy F.B.I. and any local cop to be able to see everything inside anyone’s phone.
  • the fact that both Obama and Trump agreed on the need for breaking iPhone encryption suggests how thoroughly political leaders across a wide spectrum have neglected privacy as a fundamental value worthy of protection.
  • Americans are sleepwalking into a future nearly as frightening as the one the Chinese are constructing. I choose the word “sleepwalking” deliberately, because when it comes to digital privacy, a lot of us prefer the comfortable bliss of ignorance.
  • Among other revelations: Advertising companies and data brokers are keeping insanely close tabs on smartphones’ location data, tracking users so precisely that their databases could arguably compromise national security or political liberty.
  • Tracking technologies have become cheap and widely available — for less than $100, my colleagues were able to identify people walking by surveillance cameras in Bryant Park in Manhattan.
  • The Clearview AI story suggests another reason to worry that our march into surveillance has become inexorable: Each new privacy-invading technology builds on a previous one, allowing for scary outcomes from new integrations and collections of data that few users might have anticipated.
  • The upshot: As the location-tracking apps followed me, I was able to capture the pings they sent to online servers — essentially recording their spying
  • On the map, you can see the apps are essentially stalking me. They see me drive out one morning to the gas station, then to the produce store, then to Safeway; later on I passed by a music school, stopped at a restaurant, then Whole Foods.
  • But location was only one part of the data the companies had about me; because geographic data is often combined with other personal information — including a mobile advertising ID that can help merge what you see and do online with where you go in the real world — the story these companies can tell about me is actually far more detailed than I can tell about myself. (A minimal sketch of this kind of join appears after this list.)
  • I can no longer pretend I’ve got nothing to worry about. Sure, I’m not a criminal — but do I want anyone to learn everything about me?
  • more to the point: Is it wise for us to let any entity learn everything about everyone?
  • The remaining uncertainty about the surveillance state is not whether we will submit to it — only how readily and completely, and how thoroughly it will warp our society.
  • Will we allow the government and corporations unrestricted access to every bit of data we ever generate, or will we decide that some kinds of collections, like the encrypted data on your phone, should be forever off limits, even when a judge has issued a warrant for it?
  • In the future, will there be room for any true secret — will society allow any unrecorded thought or communication to evade detection and commercial analysis?
  • How completely will living under surveillance numb creativity and silence radical thought?
  • Can human agency survive the possibility that some companies will know more about all of us than any of us can ever know about ourselves?
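The annotation above about geographic data being merged with a mobile advertising ID describes what is, mechanically, a join on a shared key. The sketch below is purely hypothetical: the identifier, the places and the browsing records are invented, and no real ad-tech system or API is being modeled; it only illustrates how two data sets that look harmless on their own become a single profile once they share an ID.

```python
# Hypothetical illustration of joining location pings and browsing events
# on a shared advertising ID. All identifiers and records are invented.

location_pings = [
    {"ad_id": "a1b2-c3d4", "time": "08:04", "place": "gas station"},
    {"ad_id": "a1b2-c3d4", "time": "08:31", "place": "produce store"},
    {"ad_id": "a1b2-c3d4", "time": "19:12", "place": "music school"},
]

browsing_events = [
    {"ad_id": "a1b2-c3d4", "site": "mortgage-rates.example"},
    {"ad_id": "a1b2-c3d4", "site": "clinic-appointments.example"},
]

def build_profile(ad_id: str, pings: list, events: list) -> dict:
    """Join the two sources on the shared identifier."""
    return {
        "ad_id": ad_id,
        "places_visited": [p["place"] for p in pings if p["ad_id"] == ad_id],
        "online_interests": [e["site"] for e in events if e["ad_id"] == ad_id],
    }

print(build_profile("a1b2-c3d4", location_pings, browsing_events))
```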
sissij

Trump Wants It Known: Grading 100 Days Is 'Ridiculous' (but His Were the Best) - The New York Times - 0 views

  • “As with so much else, Trump is a study in inconsistency,” said Robert Dallek, the presidential historian. “One minute he says his 100 days have been the best of any president, and the next minute he decries the idea of measuring a president by the 100 days.”
  • Mr. Trump has already told supporters not to believe contrary assessments, anticipating more critical evaluations by journalists, not to mention partisan attacks by Democrats.
  • If nothing else, Mr. Trump’s first 100 days have certainly been eventful.
  • ...6 more annotations...
  • Whether they have accomplished much is more a subject of debate.
  • Others were less weighty, like one officially naming a veterans’ health center in Butler County, Pa., the “Abie Abraham V.A. Clinic.”
  • To the extent that he is being held to a measurement he disdains, he has no one to blame but himself.
  • only one has even been introduced.
  • “It is hard to judge any of these other presidents after that, and I think all of them are cursing the idea that this got started,” said Doris Kearns Goodwin, author of “No Ordinary Time,” a book about Roosevelt. “That’s the one thing they might all agree on, the post-F.D.R. presidents: ‘No way; this isn’t fair.’”
  • “I don’t think the first 100 days are by themselves that important,” he said. “The first year is critically important, and the first 100 days set the tone for the first year.”
  •  
    I think there is a confirmation bias in Mr. Trump's argument. He was quoting the previous presidents to suggest that the first 100 days of a presidency are not important. However, when the previous presidents called the "100 days" an unfair grading mark, they meant that the time is too short to show anything, not that it is unimportant. I think Mr. Trump himself is not convinced of that either, since he tried so hard to make his first hundred days look good. Quantity does not equal quality. --Sissi (4/25/2017)
Javier E

The Science of Snobbery: How We're Duped Into Thinking Fancy Things Are Better - The Atlantic - 0 views

  • Expert judges and amateurs alike claim to judge classical musicians based on sound. But Tsay’s research suggests that the original judges, despite their experience and expertise, judged the competition (which they heard and watched live) based on visual information, just as amateurs do.
  • just like with classical music, we do not appraise wine in the way that we expect. 
  • Priceonomics revisited this seemingly damning research: the lack of correlation between wine enjoyment and price in blind tastings, the oenology students tricked by red food dye into describing a white wine like a red, a distribution of medals at tastings equivalent to what one would expect from pure chance, the grand crus described like cheap wines and vice-versa when the bottles are switched.
  • ...26 more annotations...
  • Taste does not simply equal your taste buds. It draws on information from all our senses as well as context. As a result, food is susceptible to the same trickery as wine. Adding yellow food dye to vanilla pudding leads people to experience a lemony taste. Diners eating in the dark at a chic concept restaurant confuse veal for tuna. Branding, packaging, and price tags are equally important to enjoyment. Cheap fish is routinely passed off as its pricier cousins at seafood and sushi restaurants. 
  • Just like with wine and classical music, we often judge food based on very different criteria than what we claim. The result is that our perceptions are easily skewed in ways we don’t anticipate. 
  • What does it mean for wine that presentation so easily trumps the quality imbued by being grown on premium Napa land or years of fruitful aging? Is it comforting that the same phenomenon is found in food and classical music, or is it a strike against the authenticity of our enjoyment of them as well? How common must these manipulations be until we concede that the influence of the price tag of a bottle of wine or the visual appearance of a pianist is not a trick but actually part of the quality?
  • To answer these questions, we need to investigate the underlying mechanism that leads us to judge wine, food, and music by criteria other than what we claim to value. And that mechanism seems to be the quick, intuitive judgments our minds unconsciously make
  • this unknowability also makes it easy to be led astray when our intuition makes a mistake. We may often be able to count on the price tag or packaging of food and wine for accurate information about quality. But as we believe that we’re judging based on just the product, we fail to recognize when presentation manipulates our snap judgments.
  • Participants were just as effective when watching 6 second video clips and when comparing their ratings to ratings of teacher effectiveness as measured by actual student test performance. 
  • The power of intuitive first impressions has been demonstrated in a variety of other contexts. One experiment found that people predicted the outcome of political elections remarkably well based on silent 10 second video clips of debates - significantly outperforming political pundits and predictions made based on economic indicators.
  • In a real world case, a number of art experts successfully identified a 6th century Greek statue as a fraud. Although the statue had survived a 14 month investigation by a respected museum that included the probings of a geologist, they instantly recognized something was off. They just couldn’t explain how they knew.
  • Cases like this represent the canon behind the idea of the “adaptive unconscious,” a concept made famous by journalist Malcolm Gladwell in his book Blink. The basic idea is that we constantly, quickly, and unconsciously do the equivalent of judging a book by its cover. After all, a cover provides a lot of relevant information in a world in which we don’t have time to read every page.
  • Gladwell describes the adaptive unconscious as “a kind of giant computer that quickly and quietly processes a lot of the data we need in order to keep functioning as human beings.”
  • In a famous experiment, psychologist Nalini Ambady provided participants in an academic study with 30 second silent video clips of a college professor teaching a class and asked them to rate the effectiveness of the professor.
  • In follow up experiments, Chia-Jung Tsay found that those judging musicians’ auditions based on visual cues were not giving preference to attractive performers. Rather, they seemed to look for visual signs of relevant characteristics like passion, creativity, and uniqueness. Seeing signs of passion is valuable information. But in differentiating between elite performers, it gives an edge to someone who looks passionate over someone whose play is passionate
  • Outside of these more eccentric examples, it’s our reliance on quick judgments, and ignorance of their workings, that cause people to act on ugly, unconscious biases
  • It’s also why - from a business perspective - packaging and presentation is just as important as the good or service on offer. Why marketing is just as important as product. 
  • Gladwell ends Blink optimistically. By paying closer attention to our powers of rapid cognition, he argues, we can avoid its pitfalls and harness its powers. We can blindly audition musicians behind a screen, look at a piece of art devoid of other context, and pay particular attention to possible unconscious bias in our performance reports.
  • But Gladwell’s success in demonstrating how the many calculations our adaptive unconscious performs without our awareness undermines his hopeful message of consciously harnessing its power.
  • As a former world-class tennis player and coach of over 50 years, Braden is a perfect example of the ideas behind thin slicing. But if he can’t figure out what his unconscious is up to when he recognizes double faults, why should anyone else expect to be up to the task?
  • flawed judgment in fields like medicine and investing has more serious consequences. The fact that expertise is so tricky leads psychologist Daniel Kahneman to assert that most experts should seek the assistance of statistics and algorithms in making decisions.
  • In his book Thinking, Fast and Slow, he describes our two modes of thought: System 1, like the adaptive unconscious, is our “fast, instinctive, and emotional” intuition. System 2 is our “slower, more deliberative, and more logical” conscious thought. Kahneman believes that we often leave decisions up to System 1 and generally place far “too much confidence in human judgment” due to the pitfalls of our intuition described above.
  • Not every judgment will be made in a field that is stable and regular enough for an algorithm to help us make judgments or predictions. But in those cases, he notes, “Hundreds of studies have shown that wherever we have sufficient information to build a model, it will perform better than most people.”
  • Experts can avoid the pitfalls of intuition more easily than laypeople. But they need help too, especially as our collective confidence in expertise leads us to overconfidence in their judgments. 
  • This article has referred to the influence of price tags and context on products and experiences like wine and classical music concerts as tricks that skew our perception. But maybe we should consider them a real, actual part of the quality.
  • Losing ourselves in a universe of relativism, however, will lead us to miss out on anything new or unique. Take the example of the song “Hey Ya!” by Outkast. When the music industry heard it, they felt sure it would be a hit. When it premiered on the radio, however, listeners changed the channel. The song sounded too dissimilar from songs people liked, so they responded negatively. 
  • It took time for people to get familiar with the song and realize that they enjoyed it. Eventually “Hey Ya!” became the hit of the summer.
  • Many boorish people talking about the ethereal qualities of great wine probably can't even identify cork taint because their impressions are dominated by the price tag and the wine label. But the classic defense of wine - that you need to study it to appreciate it - is also vindicated. The open question - which is both editorial and empiric - is what it means for the industry that constant vigilance and substantial study is needed to dependably appreciate wine for the product quality alone. But the question is relevant to the enjoyment of many other products and experiences that we enjoy in life.
  • Maybe the most important conclusion is to not only recognize the fallibility of our judgments and impressions, but to recognize when it matters, and when it doesn’t
Javier E

Buddhism Is More 'Western' Than You Think - The New York Times - 0 views

  • Not only have Buddhist thinkers for millenniums been making very much the kinds of claims that Western philosophers and psychologists make — many of these claims are looking good in light of modern Western thought.
  • In fact, in some cases Buddhist thought anticipated Western thought, grasping things about the human mind, and its habitual misperception of reality, that modern psychology is only now coming to appreciate.
  • “Things exist but they are not real.” I agree with Gopnik that this sentence seems a bit hard to unpack. But if you go look at the book it is taken from, you’ll find that the author himself, Mu Soeng, does a good job of unpacking it.
  • ...14 more annotations...
  • It turns out Soeng is explaining an idea that is central to Buddhist philosophy: “not self” — the idea that your “self,” as you intuitively conceive it, is actually an illusion. Soeng writes that the doctrine of not-self doesn’t deny an “existential personality” — it doesn’t deny that there is a you that exists; what it denies is that somewhere within you is an “abiding core,” a kind of essence-of-you that remains constant amid the flux of thoughts, feelings, perceptions and other elements that constitute your experience. So if by “you” we mean a “self” that features an enduring essence, then you aren’t real.
  • In recent decades, important aspects of the Buddhist concept of not-self have gotten support from psychology. In particular, psychology has bolstered Buddhism’s doubts about our intuition of what you might call the “C.E.O. self” — our sense that the conscious “self” is the initiator of thought and action.
  • recognizing that “you” are not in control, that you are not a C.E.O., can help give “you” more control. Or, at least, you can behave more like a C.E.O. is expected to behave: more rationally, more wisely, more reflectively; less emotionally, less rashly, less reactively.
  • Suppose that, via mindfulness meditation, you observe a feeling like anxiety or anger and, rather than let it draw you into a whole train of anxious or angry thoughts, you let it pass away. Though you experience the feeling — and in a sense experience it more fully than usual — you experience it with “non-attachment” and so evade its grip. And you now see the thoughts that accompanied it in a new light — they no longer seem like trustworthy emanations from some “I” but rather as transient notions accompanying transient feelings.
  • Brain-scan studies have produced tentative evidence that this lusting and disliking — embracing thoughts that feel good and rejecting thoughts that feel bad — lies near the heart of certain “cognitive biases.” If such evidence continues to accumulate, the Buddhist assertion that a clear view of the world involves letting go of these lusts and dislikes will have drawn a measure of support from modern science.
  • Note how, in addition to being therapeutic, this clarifies your view of the world. After all, the “anxious” or “angry” trains of thought you avoid probably aren’t objectively true. They probably involve either imagining things that haven’t happened or making subjective judgments about things that have.
  • the Buddhist idea of “not-self” grows out of the belief undergirding this mission — that the world is pervasively governed by causal laws. The reason there is no “abiding core” within us is that the ever-changing forces that impinge on us — the sights, the sounds, the smells, the tastes — are constantly setting off chain reactions inside of us.
  • Buddhism’s doubts about the distinctness and solidity of the “self” — and of other things, for that matter — rests on a recognition of the sense in which pervasive causality means pervasive fluidity.
  • Buddhism long ago generated insights that modern psychology is only now catching up to, and these go beyond doubts about the C.E.O. self.
  • psychology has lately started to let go of its once-sharp distinction between “cognitive” and “affective” parts of the mind; it has started to see that feelings are so finely intertwined with thoughts as to be part of their very coloration. This wouldn’t qualify as breaking news in Buddhist circles.
  • There’s a broader and deeper sense in which Buddhist thought is more “Western” than stereotype suggests. What, after all, is more Western than science’s emphasis on causality, on figuring out what causes what, and hoping to thus explain why all things do the things they do?
  • All we can do is clear away as many impediments to comprehension as possible. Science has a way of doing that — by insisting that entrants in its “competitive storytelling” demonstrate explanatory power in ways that are publicly observable, thus neutralizing, to the extent possible, subjective biases that might otherwise prevail.
  • Buddhism has a different way of doing it: via meditative disciplines that are designed to attack subjective biases at the source, yielding a clearer view of both the mind itself and the world beyond it.
  • The results of these two inquiries converge to a remarkable extent — an extent that can be appreciated only in light of the last few decades of progress in psychology and evolutionary science. At least, that’s my argument.
Javier E

The Coming Software Apocalypse - The Atlantic - 1 views

  • Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing.
  • Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
  • The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
  • ...52 more annotations...
  • Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
  • Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code.
  • Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code
  • The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning.
  • As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
  • What made programming so difficult was that it required you to think like a computer.
  • “The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work.
  • Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class at the California Institute of Technology for electrical engineering,
  • “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
  • a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible
  • software experts spent 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around
  • Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it. (A minimal bit-flip sketch appears after this list.)
  • In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
  • The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little.
  • “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
  • The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
  • This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
  • in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
  • “Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.”
  • WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.”
  • Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling.
  • With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
  • When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
  • When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns ... [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
  • The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
  • “Typically the main problem with software coding—and I’m a coder myself,” Bantegnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
  • In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface.
  • Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
  • Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.”
  • Bantegnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules
  • In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop.
  • In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve. (A small state-machine sketch of the elevator model appears after this list.)
  • “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
  • On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself.
  • for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to.
  • This practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
  • Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
  • “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
  • The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
  • “Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
  • Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
  • An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
  • TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy. (A toy sketch of this exhaustive-checking idea appears after this list.)
  • Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,”
  • Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
  • But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols.
  • this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
  • “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
  • Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
  • Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
  • he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
  • In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
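The single-bit-flip point in the Toyota annotations above is easy to demonstrate in miniature. The sketch below is hypothetical and is not Toyota's code or anything from Barr's testimony: the 8-bit throttle encoding, the variable names and the shadow-copy check are assumptions chosen only to show why one inverted bit can turn a small command into a large one, and how a simple redundancy check can catch it.

```python
# Hypothetical illustration of a single bit flip in a stored throttle command.
# Assumes an 8-bit encoding: 0 = closed, 255 = wide open.

def flip_bit(value: int, bit: int) -> int:
    """Return `value` with one bit inverted, as a memory fault might leave it."""
    return value ^ (1 << bit)

throttle = 0b0000_0011              # 3 out of 255: pedal barely pressed
corrupted = flip_bit(throttle, 7)   # the highest bit flips from 0 to 1
print(throttle, corrupted)          # 3 131 -- one flipped bit, a very different command

# One classic fail-safe: keep a bit-inverted shadow copy and check it before acting.
shadow = throttle ^ 0xFF

def is_consistent(value: int, shadow_copy: int) -> bool:
    """A value and its shadow copy should differ in every bit."""
    return (value ^ shadow_copy) == 0xFF

print(is_consistent(throttle, shadow))   # True
print(is_consistent(corrupted, shadow))  # False -- the corruption is detectable
```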
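The elevator rules in the model-based-design annotations above map directly onto a small state machine. The sketch below is hand-written for illustration, not output from Bantegnie's tools or any real code generator; the state and event names are assumptions chosen to match the example. The rules live in a transition table, and the code simply refuses anything the model does not allow.

```python
# A hand-written sketch of the elevator model described above:
# the safety rules live in the transition table, not scattered through code.

TRANSITIONS = {
    ("door_open", "close_door"): "door_closed",
    ("door_closed", "open_door"): "door_open",   # the door only opens when stopped
    ("door_closed", "move"): "moving",           # the only way to move is with the door closed
    ("moving", "stop"): "door_closed",
}

def step(state: str, event: str) -> str:
    """Apply an event; any transition the model does not allow is rejected."""
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"illegal transition: {event!r} while {state!r}")
    return TRANSITIONS[(state, event)]

state = "door_open"
for event in ["close_door", "move", "stop", "open_door"]:
    state = step(state, event)
    print(event, "->", state)

# step("door_open", "move") would raise: you cannot start moving with the door open.
```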
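TLA+ has its own mathematical notation, so the sketch below is not TLA+ and does not use its TLC model checker. It is a toy Python stand-in for the idea in the annotations above: write down a small model and an invariant, then let the computer check the invariant in every reachable state before any production code exists. The door-and-motor model and the invariant are invented for illustration.

```python
# Toy exhaustive checker: enumerate every reachable state of a tiny model
# and verify an invariant in all of them. The model is invented for illustration.

def next_states(state):
    """The next-state relation: every transition the modeled controller may take."""
    door, motor = state
    moves = set()
    if motor == "off":
        # The door may only be toggled while the motor is stopped.
        moves.add(("open" if door == "closed" else "closed", motor))
    if door == "closed":
        # The motor may only be started or stopped while the door is closed.
        moves.add((door, "on" if motor == "off" else "off"))
    return moves

def invariant(state):
    door, motor = state
    return not (door == "open" and motor == "on")   # never move with the door open

def check(initial):
    """Exhaustively explore the reachable state space, depth-first."""
    seen, frontier = {initial}, [initial]
    while frontier:
        state = frontier.pop()
        if not invariant(state):
            return f"invariant violated in {state}"
        for nxt in next_states(state) - seen:
            seen.add(nxt)
            frontier.append(nxt)
    return f"invariant holds in all {len(seen)} reachable states"

print(check(("closed", "off")))   # invariant holds in all 3 reachable states
```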
lucieperloff

Covid-19: How Much Herd Immunity is Enough? - The New York Times - 0 views

  • Scientists initially estimated that 60 to 70 percent of the population needed to acquire resistance to the coronavirus to banish it. Now Dr. Anthony Fauci and others are quietly shifting that number upward.
  • It gives Americans a sense of when we can hope to breathe freely again.
  • And last week, in an interview with CNBC News, he said “75, 80, 85 percent” and “75 to 80-plus percent.”
  • ...10 more annotations...
  • He is doing so, he said, partly based on new science, and partly on his gut feeling that the country is finally ready to hear what he really thinks.
  • Now that some polls are showing that many more Americans are ready, even eager, for vaccines, he said he felt he could deliver the tough message that the return to normal might take longer than anticipated.
  • We really don’t know what the real number is. I think the real range is somewhere between 70 to 90 percent.
  • not sure there will be enough voluntary acceptance of vaccines to reach that goal.
  • They also came with a warning: All answers are merely “guesstimates.”
  • Humans move around, so studying disease spread among them is far harder.
  • It took about two months to be certain that there were many asymptomatic people who had also spread the virus.
  • The more transmissible a pathogen, the more people must become immune in order to stop it. (See the threshold arithmetic sketched after this list.)
  • Dr. Dean noted that to stop transmission in a crowded city like New York, more people would have to achieve immunity than would be necessary in a less crowded place like Montana.
  • If we can vaccinate almost all the people who are most at risk of severe outcomes, then this would become a milder disease.”
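The "more transmissible, more immunity" annotation above is usually made precise with the classic herd-immunity threshold, roughly 1 - 1/R0 for a basic reproduction number R0, a simplification the article itself never spells out. The R0 values below are illustrative assumptions, not estimates from the article; the arithmetic only shows why a more contagious virus pushes the target from the 60-70 percent range toward 75-90 percent.

```python
# Classic (simplified) herd-immunity threshold: 1 - 1/R0.
# The R0 values are illustrative only, not the article's estimates.

def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to halt spread."""
    return 1 - 1 / r0

for r0 in (2.5, 3.0, 4.0, 5.0, 6.0):
    print(f"R0 = {r0}: ~{herd_immunity_threshold(r0):.0%} immune")
# R0 = 2.5 gives ~60%, R0 = 5.0 gives ~80%: a more transmissible virus
# raises the share of people who must become immune before spread stops.
```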
anonymous

Opinion | How to Fix the Debate Over Guns - The New York Times - 0 views

  • We can find real solutions to gun violence if we recognize the trauma it causes.
  • In the span of a week, two acts of public violence have stolen the lives of 18 people and provided a stark reminder of the mass gun violence that characterized the pre-Covid United States
  • Gun violence did not go away during 2020. Gun homicides jumped 25 percent from the year before, apparently fueled in part by a rise in intimate partner violence
  • ...12 more annotations...
  • In the U.S., people often reach for more guns as a response to mass shootings and in anticipation of needing a method of home protection, but also — as we saw in 2020 and into 2021 — in response to presidential elections, political unrest and mass-scale infectious disease.
  • Gun violence entails immediate physical trauma, but it also elicits forms of trauma that can ricochet far beyond its initial target
  • If we understand trauma as social, psychological and physical responses to experiences that cannot be assimilated into an individual’s existing understandings of themselves and the world around them, then gun trauma goes far beyond
  • Having someone taken through gun violence, surviving gun violence oneself, even hearing gunshots tears at our basic sense of safety, of security and of self
  • Research has found that surviving or being exposed to gun violence is associated with an increased risk of symptoms linked with PTSD (including anxiety and depression) in both urban and rural contexts, short-term decreases in reading ability, vocabulary, and impulse control, unemployment and substance use and even shifts in friendship formation
  • While gun trauma most certainly shapes the aftermath of shootings, it also shapes our day-to-day decisions and sensibilities far beyond specific acts of gun violence
  • Policies that purport to end the trauma of gun violence by increasing the punitive surveillance of individuals with mental illness, increasing police presence and surveillance of students at schools, or bringing more people into contact with the criminal justice system may ultimately create more, if different, trauma.
  • This trauma-violence cycle cannot break itself — but certainly has the power to break us.
  • Gun trauma is implicated in how guns harm us, why we turn to guns, and — to the extent that we depend on punitive criminal justice approaches to address it — how we attempt to solve the problem of gun violence.
  • We must dismantle this trauma-violence cycle, and the first step is centering gun trauma within the gun debate and addressing gun violence
  • what this might look like: the Community Justice Action Fund and Revolve Impact’s By Design campaign, which aims to “change the conversation” on gun violence by elevating leaders of color to “interrupt systems of violence and ultimately build power for communities most impacted by gun violence”
  • Approaching guns from the perspective of trauma will require some imagination — and some courage
clairemann

Flights to Nowhere and Travel After the Pandemic | Time - 0 views

  • I’ve taken to staying in bed and flying to Morocco. It’s the place I’ve been that’s the least like Brooklyn, where I have spent most of this pandemic. Trying to remember the way the air feels on your skin in an unfamiliar climate is the smallest of escapes. Maybe it’s a necessary one, now that everything within reach feels so unrelentingly familiar.
  • In our travel-starved, pandemic-addled state, people will actually pay to go to the airport, get on a plane wearing their face masks, and fly over their own country or a neighboring one and come right back. A seven-hour Qantas sightseeing flight over Australian landmarks sold out in 10 minutes.
  • I don’t think we’ll need to book a SpaceX flight to feel like we’re somewhere startling and new. For many of us, seeing a new movie in a real theater will feel like a trip. Or better yet, dancing in the sticky aisles of a dark music venue humming with people and anticipation.
  • ...1 more annotation...
  • “The metaphor of the parental scaffold is visual, intuitive, and simple: Your child is the ‘building.’ You, the parent, are the scaffold that surrounds the building. The framework of all your decisions and efforts as parents is the three pillars of your scaffold: structure, support, and encouragement. Eventually, when the building is finished and ready to stand completely on its own, the parental scaffold can come down.”
cvanderloo

Heat is a serious threat to dairy cows - we're finding innovative ways to keep them cool - 0 views

  • Severe overheating can threaten cows’ health and their ability to get pregnant and carry calves to term.
  • Dairy farmers use fans and sprayers to cool cows in their barns, but there is a substantial need for better options. Existing systems use a lot of energy and water, which is costly for farmers. And climate change is raising temperatures and stressing California’s water supplies.
  • Cows are particularly sensitive to hot weather: Their body temperature is 101.5 degrees Fahrenheit, three degrees higher than humans, and they create a large amount of heat as they break down feed in their stomachs and produce milk.
  • ...5 more annotations...
  • These are all considered signs of heat stress. Once it sets in, cows will produce less milk. They may have trouble getting and staying pregnant, and in severe cases may die.
  • These strategies help cows regulate their body temperature, but use large quantities of water and electricity. The average California dairy farm spends US$140,000 annually on utilities. Furthermore, these systems may be insufficient during extreme heat waves.
  • Our first cooling technology uses mats buried approximately 4 inches underneath the sand bedding where cows lie down. Water flows through the mats and absorbs heat from the cows through conduction.
  • The second technology uses targeted direct evaporative cooling, sometimes referred to as a “swamp cooler,” and fabric ducts to blow cool air on the cows in the areas where cows eat and rest.
  • During our first test phase, we tested all four treatments on 32 cows at UC Davis and collected data on their respiration rates, body temperature, milk yield and behavior, as well as weather, water use and energy use. Data analysis is underway. We anticipate that we will identify at least one option that will cool cows as effectively as current options, but will also save water, energy or both.
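The conduction-mat annotation above invites a back-of-envelope check. The sketch below applies Fourier's law of steady-state conduction, Q = k * A * dT / d, to estimate how much heat a chilled mat might draw from a cow lying on roughly 4 inches of sand bedding. Every number in it (conductivity, contact area, temperatures) is an assumption made for illustration, not data from the UC Davis trial, whose analysis the authors say is still underway.

```python
# Back-of-envelope sketch (not from the article): heat drawn through sand
# bedding into a water-cooled mat, using Fourier's law Q = k * A * dT / d.
# All values below are assumptions chosen only for illustration.

k_bedding = 0.3        # W/(m*K): assumed effective conductivity of damp sand
contact_area = 1.5     # m^2: assumed area of a cow lying on the bedding
bedding_depth = 0.10   # m: roughly the 4 inches of sand mentioned above
t_cow_surface = 35.0   # deg C: assumed surface temperature of the cow
t_mat_water = 15.0     # deg C: assumed temperature of the chilled mat

q_watts = k_bedding * contact_area * (t_cow_surface - t_mat_water) / bedding_depth
print(f"Conductive heat removal: ~{q_watts:.0f} W")  # ~90 W with these guesses
```

Under these guessed values the mat pulls on the order of 100 W, which suggests it would supplement rather than replace fans and sprayers; that fits with the researchers testing it as one of several treatments rather than on its own.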
Javier E

Thieves of experience: On the rise of surveillance capitalism - 1 views

  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • ...72 more annotations...
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience. To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
  • Social skills and relationships seem to suffer as well.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Google’s once-patient investors grew restive, demanding that the founders figure out a way to make money, preferably lots of it.
  • Under pressure, Page and Brin authorized the launch of an auction system for selling advertisements tied to search queries. The system was designed so that the company would get paid by an advertiser only when a user clicked on an ad. This feature gave Google a huge financial incentive to make accurate predictions about how users would respond to ads and other online content. Even tiny increases in click rates would bring big gains in income. And so the company began deploying its stores of behavioral data not for the benefit of users but to aid advertisers — and to juice its own profits. Surveillance capitalism had arrived. (A simplified expected-revenue sketch appears at the end of this item’s annotations.)
  • Google’s business now hinged on what Zuboff calls “the extraction imperative.” To improve its predictions, it had to mine as much information as possible from web users. It aggressively expanded its online services to widen the scope of its surveillance.
  • Through Gmail, it secured access to the contents of people’s emails and address books. Through Google Maps, it gained a bead on people’s whereabouts and movements. Through Google Calendar, it learned what people were doing at different moments during the day and whom they were doing it with. Through Google News, it got a readout of people’s interests and political leanings. Through Google Shopping, it opened a window onto people’s wish lists,
  • The company gave all these services away for free to ensure they’d be used by as many people as possible. It knew the money lay in the data.
  • the organization grew insular and secretive. Seeking to keep the true nature of its work from the public, it adopted what its CEO at the time, Eric Schmidt, called a “hiding strategy” — a kind of corporate omerta backed up by stringent nondisclosure agreements.
  • Page and Brin further shielded themselves from outside oversight by establishing a stock structure that guaranteed their power could never be challenged, neither by investors nor by directors.
  • What’s most remarkable about the birth of surveillance capitalism is the speed and audacity with which Google overturned social conventions and norms about data and privacy. Without permission, without compensation, and with little in the way of resistance, the company seized and declared ownership over everyone’s information
  • The companies that followed Google presumed that they too had an unfettered right to collect, parse, and sell personal data in pretty much any way they pleased. In the smart homes being built today, it’s understood that any and all data will be beamed up to corporate clouds.
  • Google conducted its great data heist under the cover of novelty. The web was an exciting frontier — something new in the world — and few people understood or cared about what they were revealing as they searched and surfed. In those innocent days, data was there for the taking, and Google took it
  • Google also benefited from decisions made by lawmakers, regulators, and judges — decisions that granted internet companies free use of a vast taxpayer-funded communication infrastructure, relieved them of legal and ethical responsibility for the information and messages they distributed, and gave them carte blanche to collect and exploit user data.
  • Consider the terms-of-service agreements that govern the division of rights and the delegation of ownership online. Non-negotiable, subject to emendation and extension at the company’s whim, and requiring only a casual click to bind the user, TOS agreements are parodies of contracts, yet they have been granted legal legitimacy by the courts.
  • Law professors, writes Zuboff, “call these ‘contracts of adhesion’ because they impose take-it-or-leave-it conditions on users that stick to them whether they like it or not.” Fundamentally undemocratic, the ubiquitous agreements helped Google and other firms commandeer personal data as if by fiat.
  • In the choices we make as consumers and private citizens, we have always traded some of our autonomy to gain other rewards. Many people, it seems clear, experience surveillance capitalism less as a prison, where their agency is restricted in a noxious way, than as an all-inclusive resort, where their agency is restricted in a pleasing way
  • Zuboff makes a convincing case that this is a short-sighted and dangerous view — that the bargain we’ve struck with the internet giants is a Faustian one
  • but her case would have been stronger still had she more fully addressed the benefits side of the ledger.
  • there’s a piece missing. While Zuboff’s assessment of the costs that people incur under surveillance capitalism is exhaustive, she largely ignores the benefits people receive in return — convenience, customization, savings, entertainment, social connection, and so on
  • What the industries of the future will seek to manufacture is the self.
  • Behavior modification is the thread that ties today’s search engines, social networks, and smartphone trackers to tomorrow’s facial-recognition systems, emotion-detection sensors, and artificial-intelligence bots.
  • All of Facebook’s information wrangling and algorithmic fine-tuning, she writes, “is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later.”
  • “The goal of everything we do is to change people’s actual behavior at scale,” a top Silicon Valley data scientist told her in an interview. “We can test how actionable our cues are for them and how profitable certain behaviors are for us.”
  • This goal, she suggests, is not limited to Facebook. It is coming to guide much of the economy, as financial and social power shifts to the surveillance capitalists
  • Combining rich information on individuals’ behavioral triggers with the ability to deliver precisely tailored and timed messages turns out to be a recipe for behavior modification on an unprecedented scale.
  • it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its “social graph” to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react.
  • spying on the populace is not the end game. The real prize lies in figuring out ways to use the data to shape how people think and act. “The best way to predict the future is to invent it,” the computer scientist Alan Kay once observed. And the best way to predict behavior is to script it.
  • competition for personal data intensified. It was no longer enough to monitor people online; making better predictions required that surveillance be extended into homes, stores, schools, workplaces, and the public squares of cities and towns. Much of the recent innovation in the tech industry has entailed the creation of products and services designed to vacuum up data from every corner of our lives
  • “The typical complaint is that privacy is eroded, but that is misleading,” Zuboff writes. “In the larger societal pattern, privacy is not eroded but redistributed . . . . Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism.” The transfer of decision rights is also a transfer of autonomy and agency, from the citizen to the corporation.
  • What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not
  • Other possible ways of organizing online markets, such as through paid subscriptions for apps and services, never even got a chance to be tested.
  • Online surveillance came to be viewed as normal and even necessary by politicians, government bureaucrats, and the general public
  • Google and other Silicon Valley companies benefited directly from the government’s new stress on digital surveillance. They earned millions through contracts to share their data collection and analysis techniques with the National Security Agency.
  • As much as the dot-com crash, the horrors of 9/11 set the stage for the rise of surveillance capitalism. Zuboff notes that, in 2000, members of the Federal Trade Commission, frustrated by internet companies’ lack of progress in adopting privacy protections, began formulating legislation to secure people’s control over their online information and severely restrict the companies’ ability to collect and store it. It seemed obvious to the regulators that ownership of personal data should by default lie in the hands of private citizens, not corporations.
  • The 9/11 attacks changed the calculus. The centralized collection and analysis of online data, on a vast scale, came to be seen as essential to national security. “The privacy provisions debated just months earlier vanished from the conversation more or less overnight,”
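One annotation above notes that Google is paid only when a user clicks, so even tiny gains in click-rate prediction translate into revenue. A generic way to see that incentive is expected-value ranking: score each ad by its bid multiplied by its predicted click-through rate (pCTR). The sketch below is a simplified illustration of that idea using made-up numbers; it is not a description of Google's actual auction or quality-score machinery.

```python
# Simplified sketch of pay-per-click economics: rank ads by expected revenue
# per impression, i.e. bid * predicted click-through rate (pCTR).
# All bids and probabilities below are invented for illustration.

ads = [
    {"name": "ad_a", "bid_usd": 2.00, "pctr": 0.010},
    {"name": "ad_b", "bid_usd": 0.50, "pctr": 0.050},
    {"name": "ad_c", "bid_usd": 1.00, "pctr": 0.020},
]

def expected_revenue(ad: dict) -> float:
    # The platform is paid only on a click, so the expectation is bid * pCTR.
    return ad["bid_usd"] * ad["pctr"]

for ad in sorted(ads, key=expected_revenue, reverse=True):
    print(f'{ad["name"]}: ${expected_revenue(ad):.4f} expected per impression')
```

Because richer behavioral data sharpens the pCTR estimates, and the ranking (and therefore revenue) turns on those estimates across billions of impressions, prediction accuracy becomes the business; that is the financial logic behind what Zuboff calls the extraction imperative.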
Javier E

The View from Nowhere: Questions and Answers » Pressthink - 2 views

  • In pro journalism, American style, the View from Nowhere is a bid for trust that advertises the viewlessness of the news producer. Frequently it places the journalist between polarized extremes, and calls that neither-nor position “impartial.” Second, it’s a means of defense against a style of criticism that is fully anticipated: charges of bias originating in partisan politics and the two-party system. Third: it’s an attempt to secure a kind of universal legitimacy that is implicitly denied to those who stake out positions or betray a point of view. American journalists have almost a lust for the View from Nowhere because they think it has more authority than any other possible stance.
  • Q. Who gets credit for the phrase, “view from nowhere?” A. The philosopher Thomas Nagel, who wrote a very important book with that title.
  • Q. What does it say? A. It says that human beings are, in fact, capable of stepping back from their position to gain an enlarged understanding, which includes the more limited view they had before the step back. Think of the cinema: when the camera pulls back to reveal where a character had been standing and shows us a fuller tableau. To Nagel, objectivity is that kind of motion. We try to “transcend our particular viewpoint and develop an expanded consciousness that takes in the world more fully.”
  • ...11 more annotations...
  • But there are limits to this motion. We can’t transcend all our starting points. No matter how far it pulls back the camera is still occupying a position. We can’t actually take the “view from nowhere,” but this doesn’t mean that objectivity is a lie or an illusion. Our ability to step back and the fact that there are limits to it– both are real. And realism demands that we acknowledge both.
  • Q. So is objectivity a myth… or not? A. One of the many interesting things Nagel says in that book is that “objectivity is both underrated and overrated, sometimes by the same persons.” It’s underrated by those who scoff at it as a myth. It is overrated by people who think it can replace the view from somewhere or transcend the human subject. It can’t.
  • When MSNBC suspends Keith Olbermann for donating without company permission to candidates he supports– that’s dumb. When NPR forbids its “news analysts” from expressing a view on matters they are empowered to analyze– that’s dumb. When reporters have to “launder” their views by putting them in the mouths of think tank experts: dumb. When editors at the Washington Post decline even to investigate whether the size of rallies on the Mall can be reliably estimated because they want to avoid charges of “leaning one way or the other,” as one of them recently put it, that is dumb. When CNN thinks that, because it’s not MSNBC and it’s not Fox, it’s the only “real news network” on cable, CNN is being dumb about itself.
  • Let some in the press continue on with the mask of impartiality, which has advantages for cultivating sources and soothing advertisers. Let others experiment with transparency as the basis for trust. When you click on their by-line it takes you to a disclosure page where there is a bio, a kind of mission statement, and a creative attempt to say: here’s where I’m coming from (one example) along with campaign contributions, any affiliations or memberships, and–I’m just speculating now–a list of heroes and villains, or major influences, along with an archive of the work, plus anything else that might assist the user in placing this person on the user’s mattering map.
  • if objectivity means trying to ground truth claims in verifiable facts, I am definitely for that. If it means there’s a “hard” reality out there that exists beyond any of our descriptions of it, sign me up. If objectivity is the requirement to acknowledge what is, regardless of whether we want it to be that way, then I want journalists who can be objective in that sense.
  • If it means trying to see things in that fuller perspective Thomas Nagel talked about–pulling the camera back, revealing our previous position as only one of many–I second the motion. If it means the struggle to get beyond the limited perspective that our experience and upbringing afford us… yeah, we need more of that, not less. I think there is value in acts of description that do not attempt to say whether the thing described is good or bad
  • I think we are in the midst of shift in the system by which trust is sustained in professional journalism. David Weinberger tried to capture it with his phrase: transparency is the new objectivity. My version of that: it’s easier to trust in “here’s where I’m coming from” than the View from Nowhere. These are two different ways of bidding for the confidence of the users.
  • In the newer way, the logic is different. “Look, I’m not going to pretend that I have no view. Instead, I am going to level with you about where I’m coming from on this. So factor that in when you evaluate my report. Because I’ve done the work and this is what I’ve concluded…”
  • it has unearned authority in the American press. If in doing the serious work of journalism–digging, reporting, verification, mastering a beat–you develop a view, expressing that view does not diminish your authority. It may even add to it. The View from Nowhere doesn’t know from this. It also encourages journalists to develop bad habits. Like: criticism from both sides is a sign that you’re doing something right, when you could be doing everything wrong.
caelengrubb

Problems with 'the scientific method' | Science News for Students - 0 views

  • It’s a sequence of steps that take you from asking a question to arriving at a conclusion. But scientists rarely follow the steps of the scientific method as textbooks describe it.
  • “The scientific method is a myth,” asserts Gary Garber, a physics teacher at Boston University Academy.
  • It was invented by historians and philosophers of science during the last century to make sense of how science works. Unfortunately, he says, the term is usually interpreted to mean there is only one, step-by-step approach to science.
  • ...11 more annotations...
  • “There isn’t one method of ‘doing science.’”
  • In fact, he notes, there are many paths to finding out the answer to something. Which route a researcher chooses may depend on the field of science being studied. It might also depend on whether experimentation is possible, affordable — even ethical.
  • In the future, she says, students and teachers will be encouraged to think not about the scientific method, but instead about “practices of science” — or the many ways in which scientists look for answers.
  • But that one-size-fits-all approach doesn’t reflect how scientists in different fields actually “do” science,
  • In contrast, geologists, scientists who study the history of Earth as recorded in rocks, won’t necessarily do experiments
  • For example, experimental physicists are scientists who study how particles such as electrons, ions and protons behave. These scientists might perform controlled experiments, starting with clearly defined initial conditions. Then they will change one variable, or factor, at a time.
  • Geologists are still collecting evidence, “but it’s a different kind of evidence.”
  • A hypothesis is a testable idea or explanation for something. Starting with a hypothesis is a good way to do science, she acknowledges, “but it’s not the only way.”
  • “Often, we just start by saying, ‘I wonder’“ Singer says. “Maybe it gives rise to a hypothesis.” Other times, she says, you may need to first gather some data and look to see if a pattern emerges.
  • Mistakes and unexpected results can be blessings in disguise.
  • An experiment that doesn’t give the results that a scientist expected does not necessarily mean a researcher did something wrong. In fact, mistakes often point to unexpected results — and sometimes more important data — than the findings that scientists initially anticipated.
pier-paolo

Opinion | Your Brain Is Not for Thinking - The New York Times - 0 views

  • This new activity of hunting started an evolutionary arms race. Over millions of years, both predators and prey evolved more complex bodies that could sense and move more effectively to catch or elude other creatures.
  • Eventually, some creatures evolved a command center to run those complex bodies. We call it a brain.
  • Your brain’s most important job isn’t thinking; it’s running the systems of your body to keep you alive and well. According to recent findings in neuroscience, even when your brain does produce conscious thoughts and feelings, they are more in service to the needs of managing your body than you realize.
  • ...8 more annotations...
  • Much of your brain’s activity happens outside your awareness. In every moment, your brain must figure out your body’s needs for the next moment and execute a plan to fill those needs in advance.
  • The budget for your body tracks resources
  • Each action that spends resources, such as standing up, running, and learning, is like a withdrawal from your account. Actions that replenish your resources, such as eating and sleeping, are like deposits.
  • Every thought you have, every feeling of happiness or anger or awe you experience, every kindness you extend and every insult you bear or sling is part of your brain’s calculations as it anticipates and budgets your metabolic needs.
  • This view of the brain has many implications for understanding human beings. So often, for example, we conceive of ourselves in mental terms, separate from the physical
  • There is no such thing as a purely mental cause, because every mental experience has roots in the physical budgeting of your body
  • When an unpleasant thought pops into your head, like “I can’t take this craziness anymore,” ask yourself body-budgeting questions. “Did I get enough sleep last night? Am I dehydrated? Should I take a walk? Call a friend? Because I could use a deposit or two in my body budget.”
  • I’m suggesting that it’s possible to acknowledge what your brain is actually doing and take some comfort from it. Your brain is not for thinking. Everything that it conjures, from thoughts to emotions to dreams, is in the service of body budgeting. This perspective, adopted judiciously, can be a source of resilience in challenging times.