Home / TOK Friends / Group items matching "Our" in title, tags, annotations, or URL
Javier E

Why it's as hard to escape an echo chamber as it is to flee a cult | Aeon Essays

  • there are two very different phenomena at play here, each of which subverts the flow of information in very distinct ways. Let’s call them echo chambers and epistemic bubbles. Both are social structures that systematically exclude sources of information. Both exaggerate their members’ confidence in their beliefs.
  • they work in entirely different ways, and they require very different modes of intervention
  • An epistemic bubble is when you don’t hear people from the other side. An echo chamber is what happens when you don’t trust people from the other side.
  • start with epistemic bubbles
  • That omission might be purposeful
  • But that omission can also be entirely inadvertent. Even if we’re not actively trying to avoid disagreement, our Facebook friends tend to share our views and interests
  • An ‘echo chamber’ is a social structure from which other relevant voices have been actively discredited. Where an epistemic bubble merely omits contrary views, an echo chamber brings its members to actively distrust outsiders.
  • an echo chamber is something like a cult. A cult isolates its members by actively alienating them from any outside sources. Those outside are actively labelled as malignant and untrustworthy.
  • In epistemic bubbles, other voices are not heard; in echo chambers, other voices are actively undermined.
  • The way to break an echo chamber is not to wave “the facts” in the faces of its members. It is to attack the echo chamber at its root and repair that broken trust.
  • Looking to others for corroboration is a basic method for checking whether one has reasoned well or badly
  • They have been in the limelight lately, most famously in Eli Pariser’s The Filter Bubble (2011) and Cass Sunstein’s #Republic: Divided Democracy in the Age of Social Media (2017).
  • The general gist: we get much of our news from Facebook feeds and similar sorts of social media. Our Facebook feed consists mostly of our friends and colleagues, the majority of whom share our own political and cultural views
  • various algorithms behind the scenes, such as those inside Google search, invisibly personalise our searches, making it more likely that we’ll see only what we want to see. These processes all impose filters on information.
  • Such filters aren’t necessarily bad. The world is overstuffed with information, and one can’t sort through it all by oneself: filters need to be outsourced.
  • That’s why we all depend on extended social networks to deliver us knowledge
  • any such informational network needs the right sort of broadness and variety to work
  • Each individual person in my network might be superbly reliable about her particular informational patch but, as an aggregate structure, my network lacks what Sanford Goldberg in his book Relying on Others (2010) calls ‘coverage-reliability’. It doesn’t deliver to me a sufficiently broad and representative coverage of all the relevant information.
  • Epistemic bubbles also threaten us with a second danger: excessive self-confidence.
  • An ‘epistemic bubble’ is an informational network from which relevant voices have been excluded by omission
  • Suppose that I believe that the Paleo diet is the greatest diet of all time. I assemble a Facebook group called ‘Great Health Facts!’ and fill it only with people who already believe that Paleo is the best diet. The fact that everybody in that group agrees with me about Paleo shouldn’t increase my confidence level one bit. They’re not mere copies – they actually might have reached their conclusions independently – but their agreement can be entirely explained by my method of selection.
  • Luckily, though, epistemic bubbles are easily shattered. We can pop an epistemic bubble simply by exposing its members to the information and arguments that they’ve missed.
  • echo chambers are a far more pernicious and robust phenomenon.
  • Jamieson and Cappella’s book is the first empirical study into how echo chambers function
  • echo chambers work by systematically alienating their members from all outside epistemic sources.
  • Their research centres on Rush Limbaugh, a wildly successful conservative firebrand in the United States, along with Fox News and related media
  • His constant attacks on the ‘mainstream media’ are attempts to discredit all other sources of knowledge. He systematically undermines the integrity of anybody who expresses any kind of contrary view.
  • outsiders are not simply mistaken – they are malicious, manipulative and actively working to destroy Limbaugh and his followers. The resulting worldview is one of deeply opposed force, an all-or-nothing war between good and evil
  • The result is a rather striking parallel to the techniques of emotional isolation typically practised in cult indoctrination
  • cult indoctrination involves new cult members being brought to distrust all non-cult members. This provides a social buffer against any attempts to extract the indoctrinated person from the cult.
  • The echo chamber doesn’t need any bad connectivity to function. Limbaugh’s followers have full access to outside sources of information
  • As Elijah Millgram argues in The Great Endarkenment (2015), modern knowledge depends on trusting long chains of experts. And no single person is in the position to check up on the reliability of every member of that chain
  • Their worldview can survive exposure to those outside voices because their belief system has prepared them for such intellectual onslaught.
  • exposure to contrary views could actually reinforce their views. Limbaugh might offer his followers a conspiracy theory: anybody who criticises him is doing it at the behest of a secret cabal of evil elites, which has already seized control of the mainstream media.
  • Perversely, exposure to outsiders with contrary views can thus increase echo-chamber members’ confidence in their insider sources, and hence their attachment to their worldview.
  • ‘evidential pre-emption’. What’s happening is a kind of intellectual judo, in which the power and enthusiasm of contrary voices are turned against those contrary voices through a carefully rigged internal structure of belief.
  • One might be tempted to think that the solution is just more intellectual autonomy. Echo chambers arise because we trust others too much, so the solution is to start thinking for ourselves.
  • that kind of radical intellectual autonomy is a pipe dream. If the philosophical study of knowledge has taught us anything in the past half-century, it is that we are irredeemably dependent on each other in almost every domain of knowledge
  • Limbaugh’s followers regularly read – but do not accept – mainstream and liberal news sources. They are isolated, not by selective exposure, but by changes in who they accept as authorities, experts and trusted sources.
  • we depend on a vastly complicated social structure of trust. We must trust each other, but, as the philosopher Annette Baier says, that trust makes us vulnerable. Echo chambers operate as a kind of social parasite on that vulnerability, taking advantage of our epistemic condition and social dependency.
  • I am quite confident that there are plenty of echo chambers on the political Left. More importantly, nothing about echo chambers restricts them to the arena of politics
  • The world of anti-vaccination is clearly an echo chamber, and it is one that crosses political lines. I’ve also encountered echo chambers on topics as broad as diet (Paleo!), exercise technique (CrossFit!), breastfeeding, some academic intellectual traditions, and many, many more
  • Here’s a basic check: does a community’s belief system actively undermine the trustworthiness of any outsiders who don’t subscribe to its central dogmas? Then it’s probably an echo chamber.
  • much of the recent analysis has lumped epistemic bubbles together with echo chambers into a single, unified phenomenon. But it is absolutely crucial to distinguish between the two.
  • Epistemic bubbles are rather ramshackle; they go up easily, and they collapse easily
  • Echo chambers are far more pernicious and far more robust. They can start to seem almost like living things. Their belief systems provide structural integrity, resilience and active responses to outside attacks
  • the two phenomena can also exist independently. And of the events we’re most worried about, it’s the echo-chamber effects that are really causing most of the trouble.
  • new data does, in fact, seem to show that people on Facebook actually do see posts from the other side, and that people often visit websites with the opposite political affiliation.
  • their basis for evaluation – their background beliefs about whom to trust – is radically different. They are not irrational, but systematically misinformed about where to place their trust.
  • Many people have claimed that we have entered an era of ‘post-truth’.
  • Not only do some political figures seem to speak with a blatant disregard for the facts, but their supporters seem utterly unswayed by evidence. It seems, to some, that truth no longer matters.
  • This is an explanation in terms of total irrationality. To accept it, you must believe that a great number of people have lost all interest in evidence or investigation, and have fallen away from the ways of reason.
  • the theory of echo chambers offers a less damning and far more modest explanation. The apparent ‘post-truth’ attitude can be explained as the result of the manipulations of trust wrought by echo chambers.
  • We don’t have to attribute a complete disinterest in facts, evidence or reason to explain the post-truth attitude. We simply have to attribute to certain communities a vastly divergent set of trusted authorities.
  • An echo chamber doesn’t destroy their members’ interest in the truth; it merely manipulates whom they trust and changes whom they accept as trustworthy sources and institutions.
  • in many ways, echo-chamber members are following reasonable and rational procedures of enquiry. They’re engaging in critical reasoning. They’re questioning, they’re evaluating sources for themselves, they’re assessing different pathways to information. They are critically examining those who claim expertise and trustworthiness, using what they already know about the world
  • none of this weighs against the existence of echo chambers. We should not dismiss the threat of echo chambers based only on evidence about connectivity and exposure.
  • Notice how different what’s going on here is from, say, Orwellian doublespeak, a deliberately ambiguous, euphemism-filled language designed to hide the intent of the speaker.
  • echo chambers don’t trade in vague, ambiguous pseudo-speech. We should expect that echo chambers would deliver crisp, clear, unambiguous claims about who is trustworthy and who is not
  • clearly articulated conspiracy theories, and crisply worded accusations of an outside world rife with untrustworthiness and corruption.
  • Once an echo chamber starts to grip a person, its mechanisms will reinforce themselves.
  • In an epistemically healthy life, the variety of our informational sources will put an upper limit to how much we’re willing to trust any single person. Everybody’s fallible; a healthy informational network tends to discover people’s mistakes and point them out. This puts an upper ceiling on how much you can trust even your most beloved leader
  • Inside an echo chamber, that upper ceiling disappears.
  • Being caught in an echo chamber is not always the result of laziness or bad faith. Imagine, for instance, that somebody has been raised and educated entirely inside an echo chamber
  • when the child finally comes into contact with the larger world – say, as a teenager – the echo chamber’s worldview is firmly in place. That teenager will distrust all sources outside her echo chamber, and she will have gotten there by following normal procedures for trust and learning.
  • It certainly seems like our teenager is behaving reasonably. She could be going about her intellectual life in perfectly good faith. She might be intellectually voracious, seeking out new sources, investigating them, and evaluating them using what she already knows.
  • The worry is that she’s intellectually trapped. Her earnest attempts at intellectual investigation are led astray by her upbringing and the social structure in which she is embedded.
  • Echo chambers might function like addiction, under certain accounts. It might be irrational to become addicted, but all it takes is a momentary lapse – once you’re addicted, your internal landscape is sufficiently rearranged such that it’s rational to continue with your addiction
  • Similarly, all it takes to enter an echo chamber is a momentary lapse of intellectual vigilance. Once you’re in, the echo chamber’s belief systems function as a trap, making future acts of intellectual vigilance only reinforce the echo chamber’s worldview.
  • There is at least one possible escape route, however. Notice that the logic of the echo chamber depends on the order in which we encounter the evidence. An echo chamber can bring our teenager to discredit outside beliefs precisely because she encountered the echo chamber’s claims first. Imagine a counterpart to our teenager who was raised outside of the echo chamber and exposed to a wide range of beliefs. Our free-range counterpart would, when she encounters that same echo chamber, likely see its many flaws
  • Those caught in an echo chamber are giving far too much weight to the evidence they encounter first, just because it’s first. Rationally, they should reconsider their beliefs without that arbitrary preference. But how does one enforce such informational a-historicity?
  • The escape route is a modified version of René Descartes’s infamous method.
  • Meditations on First Philosophy (1641). He had come to realise that many of the beliefs he had acquired in his early life were false. But early beliefs lead to all sorts of other beliefs, and any early falsehoods he’d accepted had surely infected the rest of his belief system.
  • The only solution, thought Descartes, was to throw all his beliefs away and start over again from scratch.
  • He could start over, trusting nothing and no one except those things that he could be entirely certain of, and stamping out those sneaky falsehoods once and for all. Let’s call this the Cartesian epistemic reboot.
  • Notice how close Descartes’s problem is to our hapless teenager’s, and how useful the solution might be. Our teenager, like Descartes, has problematic beliefs acquired in early childhood. These beliefs have infected outwards, infesting that teenager’s whole belief system. Our teenager, too, needs to throw everything away, and start over again.
  • Let’s call the modernised version of Descartes’s methodology the social-epistemic reboot.
  • when she starts from scratch, we won’t demand that she trust only what she’s absolutely certain of, nor will we demand that she go it alone
  • For the social reboot, she can proceed, after throwing everything away, in an utterly mundane way – trusting her senses, trusting others. But she must begin afresh socially – she must reconsider all possible sources of information with a presumptively equanimous eye. She must take the posture of a cognitive newborn, open and equally trusting to all outside sources
  • we’re not asking people to change their basic methods for learning about the world. They are permitted to trust, and trust freely. But after the social reboot, that trust will not be narrowly confined and deeply conditioned by the particular people they happened to be raised by.
  • Such a profound deep-cleanse of one’s whole belief system seems to be what’s actually required to escape. Look at the many stories of people leaving cults and echo chambers
  • Take, for example, the story of Derek Black in Florida – raised by a neo-Nazi father, and groomed from childhood to be a neo-Nazi leader. Black left the movement by, basically, performing a social reboot. He completely abandoned everything he’d believed in, and spent years building a new belief system from scratch. He immersed himself broadly and open-mindedly in everything he’d missed – pop culture, Arabic literature, the mainstream media, rap – all with an overall attitude of generosity and trust.
  • It was the project of years and a major act of self-reconstruction, but those extraordinary lengths might just be what’s actually required to undo the effects of an echo-chambered upbringing.
  • we need to attack the root, the systems of discredit themselves, and restore trust in some outside voices.
  • Stories of actual escapes from echo chambers often turn on particular encounters – moments when the echo-chambered individual starts to trust somebody on the outside.
  • Black’s is a case in point. By high school, he was already something of a star on neo-Nazi media, with his own radio talk-show. He went on to college, openly neo-Nazi, and was shunned by almost every other student in his community college. But then Matthew Stevenson, a Jewish fellow undergraduate, started inviting Black to Stevenson’s Shabbat dinners. In Black’s telling, Stevenson was unfailingly kind, open and generous, and slowly earned Black’s trust. This was the seed, says Black, that led to a massive intellectual upheaval – a slow-dawning realisation of the depths to which he had been misled
  • Similarly, accounts of people leaving echo-chambered homophobia rarely involve them encountering some institutionally reported fact. Rather, they tend to revolve around personal encounters – a child, a family member, a close friend coming out.
  • These encounters matter because a personal connection comes with a substantial store of trust.
  • We don’t simply trust people as educated experts in a field – we rely on their goodwill. And this is why trust, rather than mere reliability, is the key concept
  • goodwill is a general feature of a person’s character. If I demonstrate goodwill in action, then you have some reason to think that I also have goodwill in matters of thought and knowledge.
  • If one can demonstrate goodwill to an echo-chambered member – as Stevenson did with Black – then perhaps one can start to pierce that echo chamber.
  • the path I’m describing is a winding, narrow and fragile one. There is no guarantee that such trust can be established, and no clear path to its being established systematically.
  • what we’ve found here isn’t an escape route at all. It depends on the intervention of another. This path is not even one an echo-chamber member can trigger on her own; it is only a whisper-thin hope for rescue from the outside.
Javier E

The Flight From Conversation - NYTimes.com

  • we have sacrificed conversation for mere connection.
  • the little devices most of us carry around are so powerful that they change not only what we do, but also who we are.
  • A businessman laments that he no longer has colleagues at work. He doesn’t stop by to talk; he doesn’t call. He says that he doesn’t want to interrupt them. He says they’re “too busy on their e-mail.”
  • We want to customize our lives. We want to move in and out of where we are because the thing we value most is control over where we focus our attention. We have gotten used to the idea of being in a tribe of one, loyal to our own party.
  • We are tempted to think that our little “sips” of online connection add up to a big gulp of real conversation. But they don’t.
  • “Someday, someday, but certainly not now, I’d like to learn how to have a conversation.”
  • We can’t get enough of one another if we can use technology to keep one another at distances we can control: not too close, not too far, just right. I think of it as a Goldilocks effect. Texting and e-mail and posting let us present the self we want to be. This means we can edit. And if we wish to, we can delete. Or retouch: the voice, the flesh, the face, the body. Not too much, not too little — just right.
  • Human relationships are rich; they’re messy and demanding. We have learned the habit of cleaning them up with technology.
  • I have often heard the sentiment “No one is listening to me.” I believe this feeling helps explain why it is so appealing to have a Facebook page or a Twitter feed — each provides so many automatic listeners. And it helps explain why — against all reason — so many of us are willing to talk to machines that seem to care about us. Researchers around the world are busy inventing sociable robots, designed to be companions to the elderly, to children, to all of us.
  • Connecting in sips may work for gathering discrete bits of information or for saying, “I am thinking about you.” Or even for saying, “I love you.” But connecting in sips doesn’t work as well when it comes to understanding and knowing one another. In conversation we tend to one another.
  • We can attend to tone and nuance. In conversation, we are called upon to see things from another’s point of view.
  • I’m the one who doesn’t want to be interrupted. I think I should. But I’d rather just do things on my BlackBerry.
  • And we use conversation with others to learn to converse with ourselves. So our flight from conversation can mean diminished chances to learn skills of self-reflection
  • we have little motivation to say something truly self-reflective. Self-reflection in conversation requires trust. It’s hard to do anything with 3,000 Facebook friends except connect.
  • we seem almost willing to dispense with people altogether. Serious people muse about the future of computer programs as psychiatrists. A high school sophomore confides to me that he wishes he could talk to an artificial intelligence program instead of his dad about dating; he says the A.I. would have so much more in its database. Indeed, many people tell me they hope that as Siri, the digital assistant on Apple’s iPhone, becomes more advanced, “she” will be more and more like a best friend — one who will listen when others won’t.
  • FACE-TO-FACE conversation unfolds slowly. It teaches patience. When we communicate on our digital devices, we learn different habits. As we ramp up the volume and velocity of online connections, we start to expect faster answers. To get these, we ask one another simpler questions; we dumb down our communications, even on the most important matters.
  • WE expect more from technology and less from one another and seem increasingly drawn to technologies that provide the illusion of companionship without the demands of relationship. Always-on/always-on-you devices provide three powerful fantasies: that we will always be heard; that we can put our attention wherever we want it to be; and that we never have to be alone. Indeed our new devices have turned being alone into a problem that can be solved.
  • When people are alone, even for a few moments, they fidget and reach for a device. Here connection works like a symptom, not a cure, and our constant, reflexive impulse to connect shapes a new way of being.
  • Think of it as “I share, therefore I am.” We use technology to define ourselves by sharing our thoughts and feelings as we’re having them. We used to think, “I have a feeling; I want to make a call.” Now our impulse is, “I want to have a feeling; I need to send a text.”
  • Lacking the capacity for solitude, we turn to other people but don’t experience them as they are. It is as though we use them, need them as spare parts to support our increasingly fragile selves.
  • If we are unable to be alone, we are far more likely to be lonely. If we don’t teach our children to be alone, they will know only how to be lonely.
  • I am a partisan for conversation. To make room for it, I see some first, deliberate steps. At home, we can create sacred spaces: the kitchen, the dining room. We can make our cars “device-free zones.”
Emily Freilich

All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines - Nicholas Carr - The Atlantic

  • We rely on computers to fly our planes, find our cancers, design our buildings, audit our businesses. That's all well and good. But what happens when the computer fails?
  • On the evening of February 12, 2009, a Continental Connection commuter flight made its way through blustery weather between Newark, New Jersey, and Buffalo, New York.
  • The Q400 was well into its approach to the Buffalo airport, its landing gear down, its wing flaps out, when the pilot’s control yoke began to shudder noisily, a signal that the plane was losing lift and risked going into an aerodynamic stall. The autopilot disconnected, and the captain took over the controls. He reacted quickly, but he did precisely the wrong thing: he jerked back on the yoke, lifting the plane’s nose and reducing its airspeed, instead of pushing the yoke forward to gain velocity.
  • The crash, which killed all 49 people on board as well as one person on the ground, should never have happened.
  • the captain’s response to the stall warning, the investigators reported, “should have been automatic, but his improper flight control inputs were inconsistent with his training” and instead revealed “startle and confusion”
  • Automation has become so sophisticated that on a typical passenger flight, a human pilot holds the controls for a grand total of just three minutes.
  • We humans have been handing off chores, both physical and mental, to tools since the invention of the lever, the wheel, and the counting bead.
  • And that, many aviation and automation experts have concluded, is a problem. Overuse of automation erodes pilots’ expertise and dulls their reflexes.
  • No one doubts that autopilot has contributed to improvements in flight safety over the years. It reduces pilot fatigue and provides advance warnings of problems, and it can keep a plane airborne should the crew become disabled. But the steady overall decline in plane crashes masks the recent arrival of “a spectacularly new type of accident,”
  • “We’re forgetting how to fly.”
  • The experience of airlines should give us pause. It reveals that automation, for all its benefits, can take a toll on the performance and talents of those who rely on it. The implications go well beyond safety. Because automation alters how we act, how we learn, and what we know, it has an ethical dimension. The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world.
  • What pilots spend a lot of time doing is monitoring screens and keying in data. They’ve become, it’s not much of an exaggeration to say, computer operators.
  • Examples of complacency and bias have been well documented in high-risk situations—on flight decks and battlefields, in factory control rooms—but recent studies suggest that the problems can bedevil anyone working with a computer
  • That may leave the person operating the computer to play the role of a high-tech clerk—entering data, monitoring outputs, and watching for failures. Rather than opening new frontiers of thought and action, software ends up narrowing our focus.
  • A labor-saving device doesn’t just provide a substitute for some isolated component of a job or other activity. It alters the character of the entire task, including the roles, attitudes, and skills of the people taking part.
  • when we work with computers, we often fall victim to two cognitive ailments—complacency and bias—that can undercut our performance and lead to mistakes. Automation complacency occurs when a computer lulls us into a false sense of security. Confident that the machine will work flawlessly and handle any problem that crops up, we allow our attention to drift.
  • Automation bias occurs when we place too much faith in the accuracy of the information coming through our monitors. Our trust in the software becomes so strong that we ignore or discount other information sources, including our own eyes and ears
  • Automation is different now. Computers can be programmed to perform complex activities in which a succession of tightly coordinated tasks is carried out through an evaluation of many variables. Many software programs take on intellectual work—observing and sensing, analyzing and judging, even making decisions—that until recently was considered the preserve of humans.
  • Automation turns us from actors into observers. Instead of manipulating the yoke, we watch the screen. That shift may make our lives easier, but it can also inhibit the development of expertise.
  • Since the late 1970s, psychologists have been documenting a phenomenon called the “generation effect.” It was first observed in studies of vocabulary, which revealed that people remember words much better when they actively call them to mind—when they generate them—than when they simply read them.
  • When you engage actively in a task, you set off intricate mental processes that allow you to retain more knowledge. You learn more and remember more. When you repeat the same task over a long period, your brain constructs specialized neural circuits dedicated to the activity
  • What looks like instinct is hard-won skill, skill that requires exactly the kind of struggle that modern software seeks to alleviate.
  • In many businesses, managers and other professionals have come to depend on decision-support systems to analyze information and suggest courses of action. Accountants, for example, use the systems in corporate audits. The applications speed the work, but some signs suggest that as the software becomes more capable, the accountants become less so.
  • You can put limits on the scope of automation, making sure that people working with computers perform challenging tasks rather than merely observing.
  • Experts used to assume that there were limits to the ability of programmers to automate complicated tasks, particularly those involving sensory perception, pattern recognition, and conceptual knowledge
  • Who needs humans, anyway? That question, in one rhetorical form or another, comes up frequently in discussions of automation. If computers’ abilities are expanding so quickly and if people, by comparison, seem slow, clumsy, and error-prone, why not build immaculately self-contained systems that perform flawlessly without any human oversight or intervention? Why not take the human factor out of the equation?
  • The cure for imperfect automation is total automation.
  • That idea is seductive, but no machine is infallible. Sooner or later, even the most advanced technology will break down, misfire, or, in the case of a computerized system, encounter circumstances that its designers never anticipated. As automation technologies become more complex, relying on interdependencies among algorithms, databases, sensors, and mechanical parts, the potential sources of failure multiply. They also become harder to detect.
  • conundrum of computer automation.
  • Because many system designers assume that human operators are “unreliable and inefficient,” at least when compared with a computer, they strive to give the operators as small a role as possible.
  • People end up functioning as mere monitors, passive watchers of screens. That’s a job that humans, with our notoriously wandering minds, are especially bad at
  • people have trouble maintaining their attention on a stable display of information for more than half an hour. “This means,” Bainbridge observed, “that it is humanly impossible to carry out the basic function of monitoring for unlikely abnormalities.”
  • a person’s skills “deteriorate when they are not used,” even an experienced operator will eventually begin to act like an inexperienced one if restricted to just watching.
  • You can program software to shift control back to human operators at frequent but irregular intervals; knowing that they may need to take command at any moment keeps people engaged, promoting situational awareness and learning.
  • What’s most astonishing, and unsettling, about computer automation is that it’s still in its early stages.
  • most software applications don’t foster learning and engagement. In fact, they have the opposite effect. That’s because taking the steps necessary to promote the development and maintenance of expertise almost always entails a sacrifice of speed and productivity.
  • Learning requires inefficiency. Businesses, which seek to maximize productivity and profit, would rarely accept such a trade-off. Individuals, too, almost always seek efficiency and convenience.
  • Abstract concerns about the fate of human talent can’t compete with the allure of saving time and money.
  • The small island of Igloolik, off the coast of the Melville Peninsula in the Nunavut territory of northern Canada, is a bewildering place in the winter.
  • Inuit hunters have for some 4,000 years ventured out from their homes on the island and traveled across miles of ice and tundra to search for game. The hunters’ ability to navigate vast stretches of the barren Arctic terrain, where landmarks are few, snow formations are in constant flux, and trails disappear overnight, has amazed explorers and scientists for centuries. The Inuit’s extraordinary way-finding skills are born not of technological prowess—they long eschewed maps and compasses—but of a profound understanding of winds, snowdrift patterns, animal behavior, stars, and tides.
  • The Igloolik hunters have begun to rely on computer-generated maps to get around. Adoption of GPS technology has been particularly strong among younger Inuit, and it’s not hard to understand why.
  • But as GPS devices have proliferated on Igloolik, reports of serious accidents during hunts have spread. A hunter who hasn’t developed way-finding skills can easily become lost, particularly if his GPS receiver fails.
  • The routes so meticulously plotted on satellite maps can also give hunters tunnel vision, leading them onto thin ice or into other hazards a skilled navigator would avoid.
  • An Inuit on a GPS-equipped snowmobile is not so different from a suburban commuter in a GPS-equipped SUV: as he devotes his attention to the instructions coming from the computer, he loses sight of his surroundings. He travels “blindfolded,” as Aporta puts it
  • A unique talent that has distinguished a people for centuries may evaporate in a generation.
  • Computer automation severs the ends from the means. It makes getting what we want easier, but it distances us from the work of knowing. As we transform ourselves into creatures of the screen, we face an existential question: Does our essence still lie in what we know, or are we now content to be defined by what we want?
  •  
    Automation increases the efficiency and speed of tasks, but decreases the individual's knowledge of a task and a human's ability to learn. 
Javier E
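The "frequent but irregular intervals" idea in the handoff annotation above can be sketched in a few lines. Everything here is an illustrative assumption, not anything specified in the article: the function name, the exponential gap model, and the parameter values are all hypothetical.

```python
import random

def handoff_times(shift_minutes, mean_gap_minutes, rng=None):
    """Times (minutes into a shift) at which the automation hands
    control back to the human operator.  Exponentially distributed
    gaps keep the handoffs frequent but unpredictable, so the
    operator cannot time their attention to a fixed schedule."""
    rng = rng or random.Random()
    times, t = [], 0.0
    while True:
        # Memoryless gaps: a handoff is as likely in the next minute
        # as at any other moment, which is what sustains vigilance.
        t += rng.expovariate(1.0 / mean_gap_minutes)
        if t >= shift_minutes:
            return times
        times.append(round(t, 1))

# An 8-hour shift with control returned roughly every 45 minutes.
schedule = handoff_times(8 * 60, 45, rng=random.Random(7))
```

Because the gaps are drawn from a memoryless distribution, the operator has no safe window in which to disengage, which is the property Bainbridge's monitoring problem calls for.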

Overstimulation Nation - Slack Tide by Matt Labash - 0 views

  • The local radio jock said to me, “You must think all of this is pretty silly”. He motioned towards the crowd and then to a rollercoaster directly beside us that came screeching at our heads every 95 seconds. But I said, “No. In a century people are going to look back on right now as a sort of magic era, a charmed time of peace and prosperity and freedom from fear, as something that can never happen again, no matter how much they wish it would.”
  • telling the truth always liberates us, even if it scares the hell out of us simultaneously
  • Bad things have always happened in this world. That’s nothing new. And bad things will continue to have their uninterrupted run, right until the end of time.  But the “freedom from fear” Coupland speaks of is largely a function of not wallowing in it all the live-long day, which  our trusty bad-news delivery systems are pretty good about making us do. They give us the illusion of constant movement, even if our only destination is backwards, prompting us to forever double down on fear, and agitation, and mutual suspicion, while steeping us in our own soul sickness.
  • ...7 more annotations...
  • It’s a trap, which maybe seeking out a little more deliberate boredom – also known as stillness - could help us avoid
  • Thomas Merton, whose praises I have sung in these pages before, framed it:
  • being bored might be a good start for healing what ails us.
  • But the purity of our conscience has a natural proportion with the depth of our being and the quality of our acts: and when our activity is habitually disordered, our malformed conscience can think of nothing better to tell us than to multiply the *quantity* of our acts, without perfecting their quality. And so we go from bad to worse, exhaust ourselves, empty our whole life of all content, and fall into despair
  • There are times, then, when in order to keep ourselves in existence at all we simply have to sit back for a while and do nothing. And for a man who has let himself be drawn completely out of himself by his activity, nothing is more difficult than to sit still and rest, doing nothing at all. The very act of resting is the hardest and most courageous act he can perform: and often it is quite beyond his power.
  • Our being is not to be enriched merely by activity or experience as such. Everything depends on the *quality* of our acts and our experiences. A multitude of badly performed actions and of experiences only half lived exhausts and depletes our being. By doing things badly we make ourselves less real. This growing unreality cannot help but make us unhappy and fill us with a sense of guilt
  • even with all the excitement, I couldn’t sustain any. I was bored by the excitement. Or rather, I craved boredom, finding all the excitement dull in a not-this-shitshow-again sort of way. For the last decade or so, we’ve been too over-excited, over-provoked, and overstimulated.
sissij

Is Empathy Overrated? | Big Think - 0 views

  • Empathy seems to be a quality you can never overdo. It’s like a megavitamin of emotionally relating: the more you display, the better a human you are.
  • In his last book, Just Babies, he argued humans are born moral, no religion required.
  • Telling someone empathy is overrated is akin to stating puppies are useless and ugly.
  • ...6 more annotations...
  • Empathy is the act of coming to experience the world as you think someone else does … If your suffering makes me suffer, if I feel what you feel, that’s empathy in the sense that I’m interested in here.
  • For example, donating to foreign charities ups our dopamine intake—we feel better because we’re making a difference (which, of course, can make it more about how we feel than who we’re helping).
  • Yet it’s not in our biological inheritance to offer unchecked empathy. Bloom points to our tribal nature as evidence. We’re going to care more for those closest to us, such as family and friends, than Cambodian orphans.
  • Anyone who thinks that it’s important for a therapist to feel depressed or anxious while dealing with depressed or anxious people is missing the point of therapy.
  • Bloom then discusses the difference between what Binghamton professor and Asian Studies scholar Charles Goodman describes as “sentimental compassion” and “great compassion.” The first is similar to empathy, which leads to imbalances in relationships and one’s own psychological state. Simply put, it’s exhausting.
  • Empathy is going to be a buzzword for some time to come. It feeds into our social nature, which Bloom sees nothing wrong with.
  •  
    I found this article very interesting as it talks about how empathy as an emotion is sometimes bad for us. I really like the point when the author mentions that empathy is not part of our biological inheritance, because our tribal nature is to care more for those closest to us. It is very interesting to think how our modern society shapes our emotions and behavior, and how empathy is gradually becoming our nature. --Sissi (2/22/2017)
Javier E

Why It's OK to Let Apps Make You a Better Person - Evan Selinger - Technology - The Atlantic - 0 views

  • one theme emerges from the media coverage of people's relationships with our current set of technologies: Consumers want digital willpower. App designers in touch with the latest trends in behavioral modification--nudging, the quantified self, and gamification--and good old-fashioned financial incentive manipulation, are tackling weakness of will. They're harnessing the power of payouts, cognitive biases, social networking, and biofeedback. The quantified self becomes the programmable self.
  • the trend still has multiple interesting dimensions
  • Individuals are turning ever more aspects of their lives into managerial problems that require technological solutions. We have access to an ever-increasing array of free and inexpensive technologies that harness incredible computational power that effectively allows us to self-police behavior everywhere we go. As pervasiveness expands, so does trust.
  • ...20 more annotations...
  • Some embrace networked, data-driven lives and are comfortable volunteering embarrassing, real time information about what we're doing, whom we're doing it with, and how we feel about our monitored activities.
  • Put it all together and we can see that our conception of what it means to be human has become "design space." We're now Humanity 2.0, primed for optimization through commercial upgrades. And today's apps are more harbinger than endpoint.
  • philosophers have had much to say about the enticing and seemingly inevitable dispersion of technological mental prosthetic that promise to substitute or enhance some of our motivational powers.
  • beyond the practical issues lie a constellation of central ethical concerns.
  • they should cause us to pause as we think about a possible future that significantly increases the scale and effectiveness of willpower-enhancing apps. Let's call this hypothetical future Digital Willpower World and characterize the ethical traps we're about to discuss as potential general pitfalls
  • it is antithetical to the ideal of " resolute choice." Some may find the norm overly perfectionist, Spartan, or puritanical. However, it is not uncommon for folks to defend the idea that mature adults should strive to develop internal willpower strong enough to avoid external temptations, whatever they are, and wherever they are encountered.
  • In part, resolute choosing is prized out of concern for consistency, as some worry that lapse of willpower in any context indicates a generally weak character.
  • Fragmented selves behave one way while under the influence of digital willpower, but another when making decisions without such assistance. In these instances, inconsistent preferences are exhibited and we risk underestimating the extent of our technological dependency.
  • It simply means that when it comes to digital willpower, we should be on our guard to avoid confusing situational with integrated behaviors.
  • the problem of inauthenticity, a staple of the neuroethics debates, might arise. People might start asking themselves: Has the problem of fragmentation gone away only because devices are choreographing our behavior so powerfully that we are no longer in touch with our so-called real selves -- the selves who used to exist before Digital Willpower World was formed?
  • Infantalized subjects are morally lazy, quick to have others take responsibility for their welfare. They do not view the capacity to assume personal responsibility for selecting means and ends as a fundamental life goal that validates the effort required to remain committed to the ongoing project of maintaining willpower and self-control.
  • Michael Sandel's Atlantic essay, "The Case Against Perfection." He notes that technological enhancement can diminish people's sense of achievement when their accomplishments become attributable to human-technology systems and not an individual's use of human agency.
  • Borgmann worries that this environment, which habituates us to be on auto-pilot and delegate deliberation, threatens to harm the powers of reason, the most central component of willpower (according to the rationalist tradition).
  • In several books, including Technology and the Character of Contemporary Life, he expresses concern about technologies that seem to enhance willpower but only do so through distraction. Borgmann's paradigmatic example of the non-distracted, focally centered person is a serious runner. This person finds the practice of running maximally fulfilling, replete with the rewarding "flow" that can only comes when mind/body and means/ends are unified, while skill gets pushed to the limit.
  • Perhaps the very conception of a resolute self was flawed. What if, as psychologist Roy Baumeister suggests, willpower is more "staple of folk psychology" than real way of thinking about our brain processes?
  • novel approaches suggest the will is a flexible mesh of different capacities and cognitive mechanisms that can expand and contract, depending on the agent's particular setting and needs. Contrary to the traditional view that identifies the unified and cognitively transparent self as the source of willed actions, the new picture embraces a rather diffused, extended, and opaque self who is often guided by irrational trains of thought. What actually keeps the self and its will together are the given boundaries offered by biology, a coherent self narrative created by shared memories and experiences, and society. If this view of the will as an expanding and contracting system with porous and dynamic boundaries is correct, then it might seem that the new motivating technologies and devices can only increase our reach and further empower our willing selves.
  • "It's a mistake to think of the will as some interior faculty that belongs to an individual--the thing that pushes the motor control processes that cause my action," Gallagher says. "Rather, the will is both embodied and embedded: social and physical environment enhance or impoverish our ability to decide and carry out our intentions; often our intentions themselves are shaped by social and physical aspects of the environment."
  • It makes perfect sense to think of the will as something that can be supported or assisted by technology. Technologies, like environments and institutions can facilitate action or block it. Imagine I have the inclination to go to a concert. If I can get my ticket by pressing some buttons on my iPhone, I find myself going to the concert. If I have to fill out an application form and carry it to a location several miles away and wait in line to pick up my ticket, then forget it.
  • Perhaps the best way forward is to put a digital spin on the Socratic dictum of knowing myself and submit to the new freedom: the freedom of consuming digital willpower to guide me past the sirens.
Javier E

Big Think Interview With Nicholas Carr | Nicholas Carr | Big Think - 0 views

  • Neurologically, how does our brain adapt itself to new technologies? Nicholas Carr: A couple of types of adaptations take place in your brain. One is a strengthening of the synaptical connections between the neurons involved in using that instrument, in using that tool. And basically these are chemical – neural chemical changes. So you know, cells in our brain communicate by transmitting electrical signals between them and those electrical signals are actually activated by the exchange of chemicals, neurotransmitters in our synapses. And so when you begin to use a tool, for instance, you have much stronger electrochemical signals being processed in those – through those synaptical connections. And then the second, and even more interesting, adaptation is in actual physical changes, anatomical changes. Your neurons, you may grow new neurons that are then recruited into these circuits or your existing neurons may grow new synaptical terminals. And again, that also serves to strengthen the activity in those, in those particular pathways that are being used – new pathways. On the other hand, you know, the brain likes to be efficient and so even as it's strengthening the pathways you're exercising, it's pulling – it's weakening the connections in other ways between the cells that supported old ways of thinking or working or behaving, or whatever that you're not exercising so much.
  • And it was only in around the year 800 or 900 that we saw the introduction of word spaces. And suddenly reading became, in a sense, easier and suddenly you had to arrival of silent reading, which changed the act of reading from just transcription of speech to something that every individual did on their own. And suddenly you had this whole deal of the silent solitary reader who was improving their mind, expanding their horizons, and so forth. And when Guttenberg invented the printing press around 1450, what that served to do was take this new very attentive, very deep form of reading, which had been limited to just, you know, monasteries and universities, and by making books much cheaper and much more available, spread that way of reading out to a much larger mass of audience. And so we saw, for the last 500 years or so, one of the central facts of culture was deep solitary reading.
  • What the book does as a technology is shield us from distraction. The only thing going on is the, you know, the progression of words and sentences across page after page and so suddenly we see this immersive kind of very attentive thinking, whether you are paying attention to a story or to an argument, or whatever. And what we know about the brain is the brain adapts to these types of tools.
  • ...12 more annotations...
  • we adapt to the environment of the internet, which is an environment of kind of constant immersion and information and constant distractions, interruptions, juggling lots of messages, lots of bits of information.
  • Because it’s no longer just a matter of personal choice, of personal discipline, though obviously those things are always important, but what we’re seeing and we see this over and over again in the history of technology, is that the technology – the technology of the web, the technology of digital media, gets entwined very, very deeply into social processes, into expectations. So more and more, for instance in our work lives. You know, if our boss and all our colleagues are constantly exchanging messages, constantly checking email on their Blackberry or iPhone or their Droid or whatever, then it becomes very difficult to say, I’m not going to be as connected because you feel like you’re career is going to take a hit.
  • With the arrival – with the transfer now of text more and more onto screens, we see, I think, a new and in some ways more primitive way of reading. In order to take in information off a screen, when you are also being bombarded with all sorts of other information and when there are links in the text where you have to think even for just a fraction of a second, you know, do I click on this link or not. Suddenly reading again becomes a more cognitively intensive act, the way it was back when there were no spaces between words.
  • If all your friends are planning their social lives through texts and Facebook and Twitter and so forth, then to back away from that means to feel socially isolated. And of course for all people, particularly for young people, there’s kind of nothing worse than feeling socially isolated, that your friends are you know, having these conversations and you’re not involved. So it’s easy to say the solution, which is to, you know, becomes a little bit more disconnected. What’s hard it actually doing that.
  • if you want to change your brain, you change your habits. You change your habits of thinking. And that means, you know, setting aside time to engage in more contemplative, more reflective ways of thinking, to be – to screen out distractions. And that means retreating from digital media and from the web and from Smart Phones and texting and Facebook and Tweeting and everything else.
  • The Thinker was, you know, in a contemplative pose and was concentrating deeply, and wasn’t you know, multi-tasking. And because that is something that, until recently anyway, people always thought was the deepest and most distinctly human way of thinking.
  • we may end up finding that those are actually the most valuable ways of thinking that are available to us as human beings.
  • the ability to pay attention also is very important for our ability to build memories, to transfer information from our short-term memory to our long-term memory. And only when we do that do we weave new information into everything else we have stored in our brains. All the other facts we’ve learned, all the other experiences we’ve had, emotions we’ve felt. And that’s how you build, I think, a rich intellect and a rich intellectual life.
  • On the other hand, there is a cost. We lose – we begin to lose the faculties that we don't exercise. So adaptation has both a very, very positive side, but also a potentially negative side because ultimately our brain is qualitatively neutral. It doesn't care what it's strengthening or what it's weakening, it just responds to the way we're exercising our mind.
  • the book in some ways is the most interesting from our own present standpoint, particularly when we want to think about the way the internet is changing us. It’s interesting to think about how the book changed us.
  • So we become, after the arrival of the printing press in general, more attentive more attuned to contemplative ways of thinking. And that’s a very unnatural way of using our mind. You know, paying attention, filtering out distractions.
  • what we lose is the ability to pay deep attention to one thing for a sustained period of time, to filter out distractions.
Javier E

Reasons for Reason - NYTimes.com - 0 views

  • Rick Perry’s recent vocal dismissals of evolution, and his confident assertion that “God is how we got here” reflect an obvious divide in our culture.
  • underneath this divide is a deeper one. Really divisive disagreements are typically not just over the facts. They are also about the best way to support our views of the facts. Call this a disagreement in epistemic principle. Our epistemic principles tell us what is rational to believe, what sources of information to trust.
  • I suspect that for most people, scientific evidence (or its lack) has nothing to do with it. Their belief in creationism is instead a reflection of a deeply held epistemic principle: that, at least on some topics, scripture is a more reliable source of information than science.  For others, including myself, this is never the case.
  • ...17 more annotations...
  • appealing to another method won’t help either — for unless that method can be shown to be reliable, using it to determine the reliability of the first method answers nothing.
  • Every one of our beliefs is produced by some method or source, be it humble (like memory) or complex (like technologically assisted science). But why think our methods, whatever they are, are trustworthy or reliable for getting at the truth? If I challenge one of your methods, you can’t just appeal to the same method to show that it is reliable. That would be circular
  • How do we rationally defend our most fundamental epistemic principles? Like many of the best philosophical mysteries, this a problem that can seem both unanswerable and yet extremely important to solve.
  • it seems to suggest that in the end, all “rational” explanations end up grounding out on something arbitrary. It all just comes down to what you happen to believe, what you feel in your gut, your faith.  Human beings have historically found this to be a very seductive idea,
  • this is precisely the situation we seem to be headed towards in the United States. We live isolated in our separate bubbles of information culled from sources that only reinforce our prejudices and never challenge our basic assumptions. No wonder that — as in the debates over evolution, or what to include in textbooks illustrate — we so often fail to reach agreement over the history and physical structure of the world itself. No wonder joint action grinds to a halt. When you can’t agree on your principles of evidence and rationality, you can’t agree on the facts. And if you can’t agree on the facts, you can hardly agree on what to do in the face of the facts.
  • We can’t decide on what counts as a legitimate reason to doubt my epistemic principles unless we’ve already settled on our principles—and that is the very issue in question.
  • The problem that skepticism about reason raises is not about whether I have good evidence by my principles for my principles. Presumably I do.[1] The problem is whether I can give a more objective defense of them. That is, whether I can give reasons for them that can be appreciated from what Hume called a “common point of view” — reasons that can “move some universal principle of the human frame, and touch a string, to which all mankind have an accord and symphony.”[2]
  • Any way you go, it seems you must admit you can give no reason for trusting your methods, and hence can give no reason to defend your most fundamental epistemic principles.
  • So one reason we should take the project of defending our epistemic principles seriously is that the ideal of civility demands it.
  • there is also another, even deeper, reason. We need to justify our epistemic principles from a common point of view because we need shared epistemic principles in order to even have a common point of view. Without a common background of standards against which we measure what counts as a reliable source of information, or a reliable method of inquiry, and what doesn’t, we won’t be able to agree on the facts, let alone values.
  • democracies aren’t simply organizing a struggle for power between competing interests; democratic politics isn’t war by other means. Democracies are, or should be, spaces of reasons.
  • we need an epistemic common currency because we often have to decide, jointly, what to do in the face of disagreement.
  • Sometimes we can accomplish this, in a democratic society, by voting. But we can’t decide every issue that way
  • We need some forms of common currency before we get to the voting booth.
  • Even if, as the skeptic says, we can’t defend the truth of our principles without circularity, we might still be able to show that some are better than others. Observation and experiment, for example, aren’t just good because they are reliable means to the truth. They are valuable because almost everyone can appeal to them. They have roots in our natural instincts, as Hume might have said.
  • that is one reason we need to resist skepticism about reason: we need to be able to give reasons for why some standards of reasons — some epistemic principles — should be part of that currency and some not.
  • Reasons for Reason By MICHAEL P. LYNCH
Javier E

The Irrational Consumer: Why Economics Is Dead Wrong About How We Make Choices - Derek Thompson - The Atlantic - 4 views

  • Derek Thompson is a senior editor at The Atlantic, where he oversees business coverage for the website. He has also written for Slate, BusinessWeek, and the Daily Beast, and has appeared as a guest on radio and television networks, including NPR, the BBC, CNBC, and MSNBC.
  • First, making a choice is physically exhausting, literally, so that somebody forced to make a number of decisions in a row is likely to get lazy and dumb.
  • Second, having too many choices can make us less likely to come to a conclusion. In a famous study of the so-called "paradox of choice", psychologists Mark Lepper and Sheena Iyengar found that customers presented with six jam varieties were more likely to buy one than customers offered a choice of 24.
  • ...7 more annotations...
  • Many of our mistakes stem from a central "availability bias." Our brains are computers, and we like to access recently opened files, even though many decisions require a deep body of information that might require some searching. Cheap example: We remember the first, last, and peak moments of certain experiences.
  • The third check against the theory of the rational consumer is the fact that we're social animals. We let our friends and family and tribes do our thinking for us
  • neurologists are finding that many of the biases behavioral economists perceive in decision-making start in our brains. "Brain studies indicate that organisms seem to be on a hedonic treadmill, quickly habituating to homeostasis," McFadden writes. In other words, perhaps our preference for the status quo isn't just figuratively in our heads, but also literally sculpted by the hand of evolution inside our brains.
  • The popular psychological theory of "hyperbolic discounting" says people don't properly evaluate rewards over time. The theory seeks to explain why many groups -- nappers, procrastinators, Congress -- take rewards now and pain later, over and over again. But neurology suggests that it hardly makes sense to speak of "the brain," in the singular, because it's two very different parts of the brain that process choices for now and later. The choice to delay gratification is mostly processed in the frontal system. But studies show that the choice to do something immediately gratifying is processed in a different system, the limbic system, which is more viscerally connected to our behavior, our "reward pathways," and our feelings of pain and pleasure.
  • the final message is that neither the physiology of pleasure nor the methods we use to make choices are as simple or as single-minded as the classical economists thought. A lot of behavior is consistent with pursuit of self-interest, but in novel or ambiguous decision-making environments there is a good chance that our habits will fail us and inconsistencies in the way we process information will undo us.
  • Our brains seem to operate like committees, assigning some tasks to the limbic system, others to the frontal system. The "switchboard" does not seem to achieve complete, consistent communication between different parts of the brain. Pleasure and pain are experienced in the limbic system, but not on one fixed "utility" or "self-interest" scale. Pleasure and pain have distinct neural pathways, and these pathways adapt quickly to homeostasis, with sensation coming from changes rather than levels
  • Social networks are sources of information, on what products are available, what their features are, and how your friends like them. If the information is accurate, this should help you make better choices. On the other hand, it also makes it easier for you to follow the crowd rather than engaging in the due diligence of collecting and evaluating your own information and playing it against your own preferences
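The "hyperbolic discounting" annotation above has a simple quantitative form that can be sketched directly. In this illustration (the discount parameters `k` and `r` are invented for demonstration and do not come from the article), dividing a reward's value by (1 + k·delay) produces the preference reversals the theory describes, whereas classical exponential discounting never does:

```python
import math

def exponential_discount(value, delay, r=0.1):
    """Classical model: preferences stay consistent at any time horizon."""
    return value * math.exp(-r * delay)

def hyperbolic_discount(value, delay, k=0.5):
    """Behavioral model: the immediate present is weighted disproportionately."""
    return value / (1 + k * delay)

# Choice: $50 now vs. $100 in 10 days.
# Up close, the hyperbolic chooser grabs the smaller-sooner reward...
assert hyperbolic_discount(50, 0) > hyperbolic_discount(100, 10)

# ...but with both options pushed 30 days into the future, the preference
# reverses, even though the gap between the two rewards is unchanged.
assert hyperbolic_discount(50, 30) < hyperbolic_discount(100, 40)

# The exponential discounter, by contrast, chooses the same way in both cases.
assert exponential_discount(50, 0) > exponential_discount(100, 10)
assert exponential_discount(50, 30) > exponential_discount(100, 40)
```

The reversal is the "rewards now, pain later, over and over again" pattern the annotation attributes to nappers, procrastinators, and Congress.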
Javier E

There Is More to Us Than Just Our Brains - The New York Times - 0 views

  • we are less like data processing machines and more like soft-bodied mollusks, picking up cues from within and without and transforming ourselves accordingly.
  • Still, we “insist that the brain is the sole locus of thinking, a cordoned-off space where cognition happens, much as the workings of my laptop are sealed inside its aluminum case,”
  • We get constant messages about what’s going on inside our bodies, sensations we can either attend to or ignore. And we belong to tribes that cosset and guide us
  • ...14 more annotations...
  • we’re networked organisms who move around in shifting surroundings, environments that have the power to transform our thinking
  • Annie Murphy Paul’s new book, “The Extended Mind,” which exhorts us to use our entire bodies, our surroundings and our relationships to “think outside the brain.”
  • In 2011, she published “Origins,” which focused on all the ways we are shaped by the environment, before birth and minute to minute thereafter.
  • “In the nature-nurture dynamic, nurture begins at the time of conception. The food the mother eats, the air she breathes, the water she drinks, the stress or trauma she experiences — all may affect her child for better or worse, over the decades to come.”
  • a down-to-earth take on the science of epigenetics — how environmental signals become catalysts for gene expression
  • the parallel to this latest book is that the boundaries we commonly assume to be fixed are actually squishy. The moment of a child’s birth, her I.Q. scores or fMRI snapshots of what’s going on inside her brain — all are encroached upon and influenced by outside forces.
  • awareness of our internal signals, such as exactly when our hearts beat, or how cold and clammy our hands are, can boost our performance at the poker table or in the financial markets, and even improve our pillow talk
  • “Though we typically think of the brain as telling the body what to do, just as much does the body guide the brain with an array of subtle nudges and prods. One psychologist has called this guide our ‘somatic rudder,’
  • The “body scan” aspect of mindfulness meditation that has been deployed by the behavioral medicine pioneer Jon Kabat-Zinn may help people lower their heart rates and blood pressure,
  • techniques that help us pinpoint their signals can foster well-being
  • Tania Singer has shown how the neural circuitry underlying compassion is strengthened by meditation practice
  • our thoughts “are powerfully shaped by the way we move our bodies.” Gestures help us understand spatial concepts; indeed, “without gesture as an aid, students may fail to understand spatial ideas at all,”
  • looking out on grassy expanses near loose clumps of trees and a source of water helps us solve problems. “Passive attention,” she writes, is “effortless: diffuse and unfocused, it floats from object to object, topic to topic. This is the kind of attention evoked by nature, with its murmuring sounds and fluid motions; psychologists working in the tradition of James call this state of mind ‘soft fascination.’”
  • The chapters on the ways natural and built spaces reflect universal preferences and enhance the thinking process felt like a respite
karenmcgregor

Solving the Puzzle: Network Design Assignment Helpers Unleashed - 0 views

Welcome to https://www.computernetworkassignmenthelp.com, where we unravel the complexities of network design assignments and bring you a team of expert network design assignment helpers ready to a...

#networkdesignassignmenthelper #assignmenthelpservices #onlinelearning #elearning #student #education technology knowledge education

started by karenmcgregor on 08 Dec 23 no follow-up yet
karenmcgregor

Empower Your Studies with a Trusted CCNA Assignment Helper: Navigating the Path to Netw... - 2 views

Are you a student immersed in the complexities of CCNA coursework, searching for a reliable CCNA assignment helper to lighten your academic load? Look no further! At computernetworkassignmenthelp.c...

#domyccnaassignment #ccna #ccnaassignmenthelp #paytodomyccnaassignment #education

started by karenmcgregor on 05 Dec 23 no follow-up yet
sissij

Sleeping Wipes Out Certain Memories - And That's a Good Thing, Reveal Studies | Big Think - 0 views

  • But what is its evolutionary purpose – what kind of changes do our brains undergo when we sleep?
  • suggest our brains undergo a pruning cycle while we rest.
  • It's important to note these studies are still in their early stages. The tests were done on mice.
  • ...3 more annotations...
  • letting us forget the less relevant information while strengthening memories that may be important.
  • However, modern humans don't abide by a natural sleep cycle anymore – we look at our phones before bed and expose ourselves to things that cause our brains to think sleep is not on the menu.
  • they might not require a chemical crutch to get some rest.
  •  
    This article shows how unreliable our memory is. Every night when we go to sleep, our memory is edited and our brain deletes some irrelevant things. So our memory is not a primary source, and I think the words of witnesses in court can only be a reference, not direct evidence. Also, in this article, the author states the uncertainties and limits of the experiment, showing that the result of the experiment at this stage can only serve as a suggestion, not direct evidence. --Sissi (2/7/2017)
Javier E

What Have We Learned, If Anything? by Tony Judt | The New York Review of Books - 0 views

  • During the Nineties, and again in the wake of September 11, 2001, I was struck more than once by a perverse contemporary insistence on not understanding the context of our present dilemmas, at home and abroad; on not listening with greater care to some of the wiser heads of earlier decades; on seeking actively to forget rather than remember, to deny continuity and proclaim novelty on every possible occasion. We have become stridently insistent that the past has little of interest to teach us. Ours, we assert, is a new world; its risks and opportunities are without precedent.
  • the twentieth century that we have chosen to commemorate is curiously out of focus. The overwhelming majority of places of official twentieth-century memory are either avowedly nostalgo-triumphalist—praising famous men and celebrating famous victories—or else, and increasingly, they are opportunities for the recollection of selective suffering.
  • The problem with this lapidary representation of the last century as a uniquely horrible time from which we have now, thankfully, emerged is not the description—it was in many ways a truly awful era, an age of brutality and mass suffering perhaps unequaled in the historical record. The problem is the message: that all of that is now behind us, that its meaning is clear, and that we may now advance—unencumbered by past errors—into a different and better era.
  • ...19 more annotations...
  • Today, the “common” interpretation of the recent past is thus composed of the manifold fragments of separate pasts, each of them (Jewish, Polish, Serb, Armenian, German, Asian-American, Palestinian, Irish, homosexual…) marked by its own distinctive and assertive victimhood.
  • The resulting mosaic does not bind us to a shared past, it separates us from it. Whatever the shortcomings of the national narratives once taught in school, however selective their focus and instrumental their message, they had at least the advantage of providing a nation with past references for present experience. Traditional history, as taught to generations of schoolchildren and college students, gave the present a meaning by reference to the past: today’s names, places, inscriptions, ideas, and allusions could be slotted into a memorized narrative of yesterday. In our time, however, this process has gone into reverse. The past now acquires meaning only by reference to our many and often contrasting present concerns.
  • the United States thus has no modern memory of combat or loss remotely comparable to that of the armed forces of other countries. But it is civilian casualties that leave the most enduring mark on national memory and here the contrast is piquant indeed
  • Today, the opposite applies. Most people in the world outside of sub-Saharan Africa have access to a near infinity of data. But in the absence of any common culture beyond a small elite, and not always even there, the fragmented information and ideas that people select or encounter are determined by a multiplicity of tastes, affinities, and interests. As the years pass, each one of us has less in common with the fast-multiplying worlds of our contemporaries, not to speak of the world of our forebears.
  • What is significant about the present age of transformations is the unique insouciance with which we have abandoned not merely the practices of the past but their very memory. A world just recently lost is already half forgotten.
  • In the US, at least, we have forgotten the meaning of war. There is a reason for this. I
  • Until the last decades of the twentieth century most people in the world had limited access to information; but—thanks to national education, state-controlled radio and television, and a common print culture—within any one state or nation or community people were all likely to know many of the same things.
  • it was precisely that claim, that “it’s torture, and therefore it’s no good,” which until very recently distinguished democracies from dictatorships. We pride ourselves on having defeated the “evil empire” of the Soviets. Indeed so. But perhaps we should read again the memoirs of those who suffered at the hands of that empire—the memoirs of Eugen Loebl, Artur London, Jo Langer, Lena Constante, and countless others—and then compare the degrading abuses they suffered with the treatments approved and authorized by President Bush and the US Congress. Are they so very different?
  • As a consequence, the United States today is the only advanced democracy where public figures glorify and exalt the military, a sentiment familiar in Europe before 1945 but quite unknown today
  • the complacent neoconservative claim that war and conflict are things Americans understand—in contrast to naive Europeans with their pacifistic fantasies—seems to me exactly wrong: it is Europeans (along with Asians and Africans) who understand war all too well. Most Americans have been fortunate enough to live in blissful ignorance of its true significance.
  • That same contrast may account for the distinctive quality of much American writing on the cold war and its outcome. In European accounts of the fall of communism, from both sides of the former Iron Curtain, the dominant sentiment is one of relief at the closing of a long, unhappy chapter. Here in the US, however, the story is typically recorded in a triumphalist key.5
  • For many American commentators and policymakers the message of the twentieth century is that war works. Hence the widespread enthusiasm for our war on Iraq in 2003 (despite strong opposition to it in most other countries). For Washington, war remains an option—on that occasion the first option. For the rest of the developed world it has become a last resort.6
  • Ignorance of twentieth-century history does not just contribute to a regrettable enthusiasm for armed conflict. It also leads to a misidentification of the enemy.
  • This abstracting of foes and threats from their context—this ease with which we have talked ourselves into believing that we are at war with “Islamofascists,” “extremists” from a strange culture, who dwell in some distant “Islamistan,” who hate us for who we are and seek to destroy “our way of life”—is a sure sign that we have forgotten the lesson of the twentieth century: the ease with which war and fear and dogma can bring us to demonize others, deny them a common humanity or the protection of our laws, and do unspeakable things to them.
  • How else are we to explain our present indulgence for the practice of torture? For indulge it we assuredly do.
  • “But what would I have achieved by proclaiming my opposition to torture?” he replied. “I have never met anyone who is in favor of torture.”8 Well, times have changed. In the US today there are many respectable, thinking people who favor torture—under the appropriate circumstances and when applied to those who merit it.
  • American civilian losses (excluding the merchant navy) in both world wars amounted to less than 2,000 dead.
  • We are slipping down a slope. The sophistic distinctions we draw today in our war on terror—between the rule of law and “exceptional” circumstances, between citizens (who have rights and legal protections) and noncitizens to whom anything can be done, between normal people and “terrorists,” between “us” and “them”—are not new. The twentieth century saw them all invoked. They are the selfsame distinctions that licensed the worst horrors of the recent past: internment camps, deportation, torture, and murder—those very crimes that prompt us to murmur “never again.” So what exactly is it that we think we have learned from the past? Of what possible use is our self-righteous cult of memory and memorials if the United States can build its very own internment camp and torture people there?
  • We need to learn again—or perhaps for the first time—how war brutalizes and degrades winners and losers alike and what happens to us when, having heedlessly waged war for no good reason, we are encouraged to inflate and demonize our enemies in order to justify that war’s indefinite continuance.
Javier E

Our Dangerous Inability to Agree on What is TRUE | Risk: Reason and Reality | Big Think - 1 views

  • Given that human cognition is never the product of pure dispassionate reason, but a subjective interpretation of the facts based on our feelings and biases and instincts, when can we ever say that we know who is right and who is wrong, about anything? When can we declare a fact so established that it’s fair to say, without being called arrogant, that those who deny this truth don’t just disagree…that they’re just plain wrong
  • This isn’t about matters of faith, or questions of ultimately unknowable things which by definition can not be established by fact. This is a question about what is knowable, and provable by careful objective scientific inquiry, a process which includes challenging skepticism rigorously applied precisely to establish what, beyond any reasonable doubt, is in fact true. The way evolution has been established
  • With enough careful investigation and scrupulously challenged evidence, we can establish knowable truths that are not just the product of our subjective motivated reasoning. We can apply our powers of reason and our ability to objectively analyze the facts and get beyond the point where what we 'know' is just an interpretation of the evidence through the subconscious filters of who we trust and our biases and instincts. We can get to the point where if someone wants to continue to believe that the sun revolves around the earth, or that vaccines cause autism, or that evolution is a deceit, it is no longer arrogant - though it may still be provocative - to call those people wrong.
  • ...6 more annotations...
  • here is a truth with which I hope we can all agree. Our subjective system of cognition can be dangerous. It can produce perceptions that conflict with the evidence, what I call The Perception Gap, which can in turn produce profound harm.
  • The Perception Gap can lead to disagreements that create destructive and violent social conflict, to dangerous personal choices that feel safe but aren’t, and to policies more consistent with how we feel than what is in fact in our best interest. The Perception Gap may in fact be potentially more dangerous than any individual risk we face.
  • We need to recognize the greater threat that our subjective system of cognition can pose, and in the name of our own safety and the welfare of the society on which we depend, do our very best to rise above it or, when we can’t, account for this very real danger in the policies we adopt.
  • we have an obligation to confront our own ideological priors. We have an obligation to challenge ourselves, to push ourselves, to be suspicious of conclusions that are too convenient, to be sure that we're getting it right.
  • subjective cognition is built-in, subconscious, beyond free will, and unavoidably leads to different interpretations of the same facts.
  • Views that have more to do with competing tribal biases than objective interpretations of the evidence create destructive and violent conflict.
Javier E

The Science of Why We Don't Believe Science | Mother Jones - 2 views

  • an array of new discoveries in psychology and neuroscience has further demonstrated how our preexisting beliefs, far more than any new facts, can skew our thoughts and even color what we consider our most dispassionate and logical conclusions. This tendency toward so-called "motivated reasoning" helps explain why we find groups so polarized over matters where the evidence is so unequivocal: climate change, vaccines, "death panels," the birthplace and religion of the president (PDF), and much else. It would seem that expecting people to be convinced by the facts flies in the face of, you know, the facts.
  • The theory of motivated reasoning builds on a key insight of modern neuroscience (PDF): Reasoning is actually suffused with emotion (or what researchers often call "affect"). Not only are the two inseparable, but our positive or negative feelings about people, things, and ideas arise much more rapidly than our conscious thoughts, in a matter of milliseconds—fast enough to detect with an EEG device, but long before we're aware of it. That shouldn't be surprising: Evolution required us to react very quickly to stimuli in our environment. It's a "basic human survival skill," explains political scientist Arthur Lupia of the University of Michigan. We push threatening information away; we pull friendly information close. We apply fight-or-flight reflexes not only to predators, but to data itself.
  • reasoning comes later, works slower—and even then, it doesn't take place in an emotional vacuum. Rather, our quick-fire emotions can set us on a course of thinking that's highly biased, especially on topics we care a great deal about.
  • ...5 more annotations...
  • Consider a person who has heard about a scientific discovery that deeply challenges her belief in divine creation—a new hominid, say, that confirms our evolutionary origins. What happens next, explains political scientist Charles Taber of Stony Brook University, is a subconscious negative response to the new information—and that response, in turn, guides the type of memories and associations formed in the conscious mind. "They retrieve thoughts that are consistent with their previous beliefs," says Taber, "and that will lead them to build an argument and challenge what they're hearing."
  • In other words, when we think we're reasoning, we may instead be rationalizing. Or to use an analogy offered by University of Virginia psychologist Jonathan Haidt: We may think we're being scientists, but we're actually being lawyers
  • Our "reasoning" is a means to a predetermined end—winning our "case"—and is shot through with biases. They include "confirmation bias," in which we give greater heed to evidence and arguments that bolster our beliefs, and "disconfirmation bias," in which we expend disproportionate energy trying to debunk or refute views and arguments that we find uncongenial.
  • That's not to suggest that we aren't also motivated to perceive the world accurately—we are. Or that we never change our minds—we do. It's just that we have other important goals besides accuracy—including identity affirmation and protecting one's sense of self—and often those make us highly resistant to changing our beliefs when the facts say we should.
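The asymmetry between confirmation and disconfirmation bias described in these annotations can be caricatured as a toy belief-updating model. This sketch is illustrative only: the likelihood ratios and the 30% discount on uncongenial evidence are made-up values, not figures from the article.

```python
import math

def update(belief, likelihood_ratio, weight=1.0):
    """One Bayesian update in log-odds space; `weight` scales how much a
    piece of evidence is permitted to move the belief."""
    log_odds = math.log(belief / (1 - belief)) + weight * math.log(likelihood_ratio)
    return 1 / (1 + math.exp(-log_odds))

# An even-handed reasoner: confirming and disconfirming evidence of equal
# strength cancel out, returning the belief to where it started.
fair = update(update(0.5, 2.0), 0.5)  # back to 0.5 (up to rounding)

# A motivated reasoner: congenial evidence gets full weight, while
# uncongenial evidence is discounted to 30% of its force.
biased = update(update(0.5, 2.0), 0.5, weight=0.3)  # drifts to ~0.62
```

Under these invented weights the biased belief ends up above 0.5 despite perfectly balanced evidence, which is the "we think we're being scientists, but we're actually being lawyers" pattern Haidt describes.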
Javier E

Can We Improve? - The New York Times - 1 views

  • are we capable of substantial moral improvement? Could we someday be much better ethically than we are now? Is it likely that members of our species could become, on average, more generous or more honest, less self-deceptive or less self-interested?
  • I’d like to focus here on a more recent moment: 19th-century America, where the great optimism and idealism of a rapidly rising nation was tempered by a withering realism.
  • Emerson thought that "the Spirit who led us hither" would help perfect us; others have believed the agent of improvement to be evolution, or the inevitable progress of civilization. More recent advocates of our perfectibility might focus on genetic or neurological interventions, or — as in Ray Kurzweil's "The Singularity Is Near" — information technologies.
  • ...10 more annotations...
  • One reason that a profound moral improvement of humankind is hard to envision is that it seems difficult to pull ourselves up morally by our own bootstraps; our attempts at improvement are going to be made by the unimproved
  • People and societies occasionally improve, managing to enfranchise marginalized groups, for example, or reduce violence, but also often degenerate into war, oppression or xenophobia. It is difficult to improve and easy to convince yourself that you have improved, until the next personality crisis, the next bad decision, the next war, the next outbreak of racism, the next "crisis" in education.
  • It’s difficult to teach your children what you yourself do not know, and it’s difficult to be good enough actually to teach your children to be good.
  • Plans for our improvement have resulted in progress here and there, but they’ve also led to many disasters of oppression, many wars and genocides.
  • One thing that Twain is saying is that many forms of evil — envy, for example, or elaborate dishonesty — appear on earth only with human beings and are found wherever we are. Creatures like us can’t see clearly what we’d be making progress toward.
  • His story “The Imp of the Perverse” shows another sort of reason that humans find it difficult to improve. The narrator asserts that a basic human impulse is to act wrongly on purpose, or even to do things because we know they’re wrong: “We act, for the reason that we should not,” the narrator declares. This is one reason that human action tends to undermine itself; our desires are contradictory.
  • Perhaps, then if we cannot improve systematically, we can improve inadvertently — or even by sheer perversity
  • As to evolution, it, too, is as likely to end in our extinction as our flourishing; it has of course extinguished most of the species to which it has given rise, and it does not clearly entail that every or any species gets better in any dimension over time
  • Our technologies may, as Kurzweil believes, allow us to transcend our finitude. On the other hand, they may end in our or even the planet's total destruction.
  • “I have no faith in human perfectibility. I think that human exertion will have no appreciable effect on humanity. Man is … not more happy — nor more wise, than he was 6,000 years ago.”
sissij

How Does Expectation Affect Perception - 3 views

  • One important fact is that the brain works in some ways like television transmission, in that it processes stable backgrounds without much attention and moving parts more intensely and differently.
  • Recent research in babies shows that they respond most to unexpected events and use these to evaluate the environment and learn.
  • But the overarching analysis of visual signals depends on what is expected.
  • ...7 more annotations...
  • Picture of bright light causes eye pupils to react, as if a real light.
  • Good hitters in baseball view the ball as larger.
  • Large people judge the absolute measurement of a doorway as more narrow than others will.
  • Words and thoughts alter sensory information:
  • “She kicked the ball” or “grasped the subject” stimulates the leg or arm brain regions related to kicking or grasping.
  • Experienced observers of ballet or classical Indian dance who have never danced, when watching a dance stimulate specific muscles of the dance.
  • The brain has many interacting pathways and loops that create expectations with different probabilities from our previous experiences.
  •  
    I found this article very interesting because it explains some aspects of how our expectation can influence our perception. The article also mentions language: different vocabulary can alter our perception. I think this can be related to the definition of words we talked about recently. I think this article suggests that the definition of a word is the result of our expectation, as we often define things in our favor if no clear definition is stated. This relationship can also be reversed, as we use definitions to describe and organize our expectations. --Sissi (11/16/2016)
Javier E

Why these friendly robots can't be good friends to our kids - The Washington Post - 0 views

  • before adding a sociable robot to the holiday gift list, parents may want to pause to consider what they would be inviting into their homes. These machines are seductive and offer the wrong payoff: the illusion of companionship without the demands of friendship, the illusion of connection without the reciprocity of a mutual relationship. And interacting with these empathy machines may get in the way of children’s ability to develop a capacity for empathy themselves.
  • In our study, the children were so invested in their relationships with Kismet and Cog that they insisted on understanding the robots as living beings, even when the roboticists explained how the machines worked or when the robots were temporarily broken.
  • The children took the robots’ behavior to signify feelings. When the robots interacted with them, the children interpreted this as evidence that the robots liked them. And when the robots didn’t work on cue, the children likewise took it personally. Their relationships with the robots affected their state of mind and self-esteem.
  • ...14 more annotations...
  • We were led to wonder whether a broken robot can break a child.
  • Kids are central to the sociable-robot project, because its agenda is to make people more comfortable with robots in roles normally reserved for humans, and robotics companies know that children are vulnerable consumers who can bring the whole family along.
  • In October, Mattel scrapped plans for Aristotle — a kind of Alexa for the nursery, designed to accompany children as they progress from lullabies and bedtime stories through high school homework — after lawmakers and child advocacy groups argued that the data the device collected about children could be misused by Mattel, marketers, hackers and other third parties. I was part of that campaign: There is something deeply unsettling about encouraging children to confide in machines that are in turn sharing their conversations with countless others.
  • Recently, I opened my MIT mail and found a “call for subjects” for a study involving sociable robots that will engage children in conversation to “elicit empathy.” What will these children be empathizing with, exactly? Empathy is a capacity that allows us to put ourselves in the place of others, to know what they are feeling. Robots, however, have no emotions to share
  • What they can do is push our buttons. When they make eye contact and gesture toward us, they predispose us to view them as thinking and caring. They are designed to be cute, to provoke a nurturing response. And when it comes to sociable AI, nurturance is the killer app: We nurture what we love, and we love what we nurture. If a computational object or robot asks for our help, asks us to teach it or tend to it, we attach. That is our human vulnerability.
  • digital companions don’t understand our emotional lives. They present themselves as empathy machines, but they are missing the essential equipment: They have not known the arc of a life. They have not been born; they don’t know pain, or mortality, or fear. Simulated thinking may be thinking, but simulated feeling is never feeling, and simulated love is never love.
  • Breazeal’s position is this: People have relationships with many classes of things. They have relationships with children and with adults, with animals and with machines. People, even very little people, are good at this. Now, we are going to add robots to the list of things with which we can have relationships. More powerful than with pets. Less powerful than with people. We’ll figure it out.
  • The nature of the attachments to dolls and sociable machines is different. When children play with dolls, they project thoughts and emotions onto them. A girl who has broken her mother’s crystal will put her Barbies into detention and use them to work on her feelings of guilt. The dolls take the role she needs them to take.
  • Sociable machines, by contrast, have their own agenda. Playing with robots is not about the psychology of projection but the psychology of engagement. Children try to meet the robot’s needs, to understand the robot’s unique nature and wants. There is an attempt to build a mutual relationship.
  • Some people might consider that a good thing: encouraging children to think beyond their own needs and goals. Except the whole commercial program is an exercise in emotional deception.
  • when we offer these robots as pretend friends to our children, it’s not so clear they can wink with us. We embark on an experiment in which our children are the human subjects.
  • it is hard to imagine what those “right types” of ties might be. These robots can’t be in a two-way relationship with a child. They are machines whose art is to put children in a position of pretend empathy. And if we put our children in that position, we shouldn’t expect them to understand what empathy is. If we give them pretend relationships, we shouldn’t expect them to learn how real relationships — messy relationships — work. On the contrary. They will learn something superficial and inauthentic, but mistake it for real connection.
  • In the process, we can forget what is most central to our humanity: truly understanding each other.
  • For so long, we dreamed of artificial intelligence offering us not only instrumental help but the simple salvations of conversation and care. But now that our fantasy is becoming reality, it is time to confront the emotional downside of living with the robots of our dreams.
Javier E

The Equality Conundrum | The New Yorker - 0 views

  • The philosopher Ronald Dworkin considered this type of parental conundrum in an essay called “What Is Equality?,” from 1981. The parents in such a family, he wrote, confront a trade-off between two worthy egalitarian goals. One goal, “equality of resources,” might be achieved by dividing the inheritance evenly, but it has the downside of failing to recognize important differences among the parties involved.
  • Another goal, “equality of welfare,” tries to take account of those differences by means of twisty calculations.
  • Take the first path, and you willfully ignore meaningful facts about your children. Take the second, and you risk dividing the inheritance both unevenly and incorrectly.
  • ...33 more annotations...
  • In 2014, the Pew Research Center asked Americans to rank the “greatest dangers in the world.” A plurality put inequality first, ahead of “religious and ethnic hatred,” nuclear weapons, and environmental degradation. And yet people don’t agree about what, exactly, “equality” means.
  • One side argues that the city should guarantee procedural equality: it should insure that all students and families are equally informed about and encouraged to study for the entrance exam. The other side argues for a more direct, representation-based form of equality: it would jettison the exam, adopting a new admissions system designed to produce student bodies reflective of the city’s demography
  • In the past year, for example, New York City residents have found themselves in a debate over the city’s élite public high schools
  • The complexities of egalitarianism are especially frustrating because inequalities are so easy to grasp. C.E.O.s, on average, make almost three hundred times what their employees make; billionaire donors shape our politics; automation favors owners over workers; urban economies grow while rural areas stagnate; the best health care goes to the richest.
  • It’s not just about money. Tocqueville, writing in 1835, noted that our “ordinary practices of life” were egalitarian, too: we behaved as if there weren’t many differences among us. Today, there are “premiere” lines for popcorn at the movies and five tiers of Uber;
  • Inequality is everywhere, and unignorable. We’ve diagnosed the disease. Why can’t we agree on a cure?
  • In a book based on those lectures, “One Another’s Equals: The Basis of Human Equality,” Waldron points out that people are also marked by differences of skill, experience, creativity, and virtue. Given such consequential differences, he asks, in what sense are people “equal”?
  • According to the Declaration of Independence, it is “self-evident” that all men are created equal. But, from a certain perspective, it’s our inequality that’s self-evident.
  • More than twenty per cent of Americans, according to a 2015 poll, agree: they believe that the statement “All men are created equal” is false.
  • In Waldron’s view, though, it’s not a binary choice; it’s possible to see people as equal and unequal simultaneously. A society can sort its members into various categories—lawful and criminal, brilliant and not—while also allowing some principle of basic equality to circumscribe its judgments and, in some contexts, override them
  • Egalitarians like Dworkin and Waldron call this principle “deep equality.” It’s because of deep equality that even those people who acquire additional, justified worth through their actions—heroes, senators, pop stars—can still be considered fundamentally no better than anyone else.
  • In the course of his search, he explores centuries of intellectual history. Many thinkers, from Cicero to Locke, have argued that our ability to reason is what makes us equals.
  • Other thinkers, including Immanuel Kant, have cited our moral sense.
  • Some philosophers, such as Jeremy Bentham, have suggested that it’s our capacity to suffer that equalizes us
  • Waldron finds none of these arguments totally persuasive.
  • In various religious traditions, he observes, equality flows not just from broad assurances that we are all made in God’s image but from some sense that everyone is the protagonist in a saga of error, realization, and redemption: we’re equal because God cares about how things turn out for each of us.
  • Waldron himself is taken by Hannah Arendt’s related concept of “natality,” the notion that what each of us share is having been born as a “newcomer,” entering into history with “the capacity of beginning something anew, that is, of acting.”
  • equality may be not a self-evident fact about human beings but a human-made social construction that we must choose to put into practice.
  • In the end, Waldron concludes that there is no “small polished unitary soul-like substance” that makes us equal; there’s only a patchwork of arguments for our deep equality, collectively compelling but individually limited.
  • Equality is a composite idea—a nexus of complementary and competing intuitions.
  • The blurry nature of equality makes it hard to solve egalitarian dilemmas from first principles. In each situation, we must feel our way forward, reconciling our conflicting intuitions about what “equal” means.
  • The communities that have the easiest time doing that tend to have some clearly defined, shared purpose. Sprinters competing in a hundred-metre dash have varied endowments and train in different conditions; from a certain perspective, those differences make every race unfair.
  • By embracing an agreed-upon theory of equality before the race, the sprinters can find collective meaning in the ranked inequalities that emerge when it ends
  • Perhaps because necessity is so demanding, our egalitarian commitments tend to rest on a different principle: luck.
  • “Some people are blessed with good luck, some are cursed with bad luck, and it is the responsibility of society—all of us regarded collectively—to alter the distribution of goods and evils that arises from the jumble of lotteries that constitutes human life as we know it.” Anderson, in an influential coinage, calls this outlook “luck egalitarianism.”
  • This sort of artisanal egalitarianism is comparatively easy to arrange. Mass-producing it is what’s hard. A whole society can’t get together in a room to hash things out. Instead, consensus must coalesce slowly around broad egalitarian principles.
  • No principle is perfect; each contains hidden dangers that emerge with time. Many people, in contemplating the division of goods, invoke the principle of necessity: the idea that our first priority should be the equal fulfillment of fundamental needs. The hidden danger here becomes apparent once we go past a certain point of subsistence.
  • a core problem that bedevils egalitarianism—what philosophers call “the problem of expensive tastes.”
  • The problem—what feels like a necessity to one person seems like a luxury to another—is familiar to anyone who’s argued with a foodie spouse or roommate about the grocery bill
  • The problem is so insistent that a whole body of political philosophy—“prioritarianism”—is devoted to the challenge of sorting people with needs from people with wants
  • the line shifts as the years pass. Medical procedures that seem optional today become necessities tomorrow; educational attainments that were once unusual, such as college degrees, become increasingly indispensable with time
  • Some thinkers try to tame the problem of expensive tastes by asking what a “normal” or “typical” person might find necessary. But it’s easy to define “typical” too narrowly, letting unfair assumptions influence our judgment
  • an odd feature of our social contract: if you’re fired from your job, unemployment benefits help keep you afloat, while if you stop working to have a child you must deal with the loss of income yourself. This contradiction, she writes, reveals an assumption that “the desire to procreate is just another expensive taste”; it reflects, she argues, the sexist presumption that “atomistic egoism and self-sufficiency” are the human norm. The word “necessity” suggests the idea of a bare minimum. In fact, it sets a high bar. Clearing it may require rethinking how society functions.