
TOK Friends: Group items matching "Instinctive" in title, tags, annotations or url



George Packer: Is Amazon Bad for Books? : The New Yorker

  • Amazon is a global superstore, like Walmart. It’s also a hardware manufacturer, like Apple, and a utility, like Con Edison, and a video distributor, like Netflix, and a book publisher, like Random House, and a production studio, like Paramount, and a literary magazine, like The Paris Review, and a grocery deliverer, like FreshDirect, and someday it might be a package service, like U.P.S. Its founder and chief executive, Jeff Bezos, also owns a major newspaper, the Washington Post. All these streams and tributaries make Amazon something radically new in the history of American business
  • Amazon is not just the “Everything Store,” to quote the title of Brad Stone’s rich chronicle of Bezos and his company; it’s more like the Everything. What remains constant is ambition, and the search for new things to be ambitious about.
  • It wasn’t a love of books that led him to start an online bookstore. “It was totally based on the property of books as a product,” Shel Kaphan, Bezos’s former deputy, says. Books are easy to ship and hard to break, and there was a major distribution warehouse in Oregon. Crucially, there are far too many books, in and out of print, to sell even a fraction of them at a physical store. The vast selection made possible by the Internet gave Amazon its initial advantage, and a wedge into selling everything else.
  • ...38 more annotations...
  • it’s impossible to know for sure, but, according to one publisher’s estimate, book sales in the U.S. now make up no more than seven per cent of the company’s roughly seventy-five billion dollars in annual revenue.
  • A monopoly is dangerous because it concentrates so much economic power, but in the book business the prospect of a single owner of both the means of production and the modes of distribution is especially worrisome: it would give Amazon more control over the exchange of ideas than any company in U.S. history.
  • “The key to understanding Amazon is the hiring process,” one former employee said. “You’re not hired to do a particular job—you’re hired to be an Amazonian. Lots of managers had to take the Myers-Briggs personality tests. Eighty per cent of them came in two or three similar categories, and Bezos is the same: introverted, detail-oriented, engineer-type personality. Not musicians, designers, salesmen. The vast majority fall within the same personality type—people who graduate at the top of their class at M.I.T. and have no idea what to say to a woman in a bar.”
  • According to Marcus, Amazon executives considered publishing people “antediluvian losers with rotary phones and inventory systems designed in 1968 and warehouses full of crap.” Publishers kept no data on customers, making their bets on books a matter of instinct rather than metrics. They were full of inefficiencies, starting with overpriced Manhattan offices.
  • For a smaller house, Amazon’s total discount can go as high as sixty per cent, which cuts deeply into already slim profit margins. Because Amazon manages its inventory so well, it often buys books from small publishers with the understanding that it can’t return them, for an even deeper discount
  • According to one insider, around 2008—when the company was selling far more than books, and was making twenty billion dollars a year in revenue, more than the combined sales of all other American bookstores—Amazon began thinking of content as central to its business. Authors started to be considered among the company’s most important customers. By then, Amazon had lost much of the market in selling music and videos to Apple and Netflix, and its relations with publishers were deteriorating
  • In its drive for profitability, Amazon did not raise retail prices; it simply squeezed its suppliers harder, much as Walmart had done with manufacturers. Amazon demanded ever-larger co-op fees and better shipping terms; publishers knew that they would stop being favored by the site’s recommendation algorithms if they didn’t comply. Eventually, they all did.
  • Brad Stone describes one campaign to pressure the most vulnerable publishers for better terms: internally, it was known as the Gazelle Project, after Bezos suggested “that Amazon should approach these small publishers the way a cheetah would pursue a sickly gazelle.”
  • Without dropping co-op fees entirely, Amazon simplified its system: publishers were asked to hand over a percentage of their previous year’s sales on the site, as “marketing development funds.”
  • The figure keeps rising, though less for the giant pachyderms than for the sickly gazelles. According to the marketing executive, the larger houses, which used to pay two or three per cent of their net sales through Amazon, now relinquish five to seven per cent of gross sales, pushing Amazon’s percentage discount on books into the mid-fifties. Random House currently gives Amazon an effective discount of around fifty-three per cent.
  • In December, 1999, at the height of the dot-com mania, Time named Bezos its Person of the Year. “Amazon isn’t about technology or even commerce,” the breathless cover article announced. “Amazon is, like every other site on the Web, a content play.” Yet this was the moment, Marcus said, when “content” people were “on the way out.”
  • By 2010, Amazon controlled ninety per cent of the market in digital books—a dominance that almost no company, in any industry, could claim. Its prohibitively low prices warded off competition
  • In 2004, he set up a lab in Silicon Valley that would build Amazon’s first piece of consumer hardware: a device for reading digital books. According to Stone’s book, Bezos told the executive running the project, “Proceed as if your goal is to put everyone selling physical books out of a job.”
  • Lately, digital titles have levelled off at about thirty per cent of book sales.
  • The literary agent Andrew Wylie (whose firm represents me) says, “What Bezos wants is to drag the retail price down as low as he can get it—a dollar-ninety-nine, even ninety-nine cents. That’s the Apple play—‘What we want is traffic through our device, and we’ll do anything to get there.’ ” If customers grew used to paying just a few dollars for an e-book, how long before publishers would have to slash the cover price of all their titles?
  • As Apple and the publishers see it, the ruling ignored the context of the case: when the key events occurred, Amazon effectively had a monopoly in digital books and was selling them so cheaply that it resembled predatory pricing—a barrier to entry for potential competitors. Since then, Amazon’s share of the e-book market has dropped, levelling off at about sixty-five per cent, with the rest going largely to Apple and to Barnes & Noble, which sells the Nook e-reader. In other words, before the feds stepped in, the agency model introduced competition to the market
  • But the court’s decision reflected a trend in legal thinking among liberals and conservatives alike, going back to the seventies, that looks at antitrust cases from the perspective of consumers, not producers: what matters is lowering prices, even if that goal comes at the expense of competition. Barry Lynn, a market-policy expert at the New America Foundation, said, “It’s one of the main factors that’s led to massive consolidation.”
  • Publishers sometimes pass on this cost to authors, by redefining royalties as a percentage of the publisher’s receipts, not of the book’s list price. Recently, publishers say, Amazon began demanding an additional payment, amounting to approximately one per cent of net sales
  • brick-and-mortar retailers employ forty-seven people for every ten million dollars in revenue earned; Amazon employs fourteen.
  • Since the arrival of the Kindle, the tension between Amazon and the publishers has become an open battle. The conflict reflects not only business antagonism amid technological change but a division between the two coasts, with different cultural styles and a philosophical disagreement about what techies call “disruption.”
  • Bezos told Charlie Rose, “Amazon is not happening to bookselling. The future is happening to bookselling.”
  • In Grandinetti’s view, the Kindle “has helped the book business make a more orderly transition to a mixed print and digital world than perhaps any other medium.” Compared with people who work in music, movies, and newspapers, he said, authors are well positioned to thrive. The old print world of scarcity—with a limited number of publishers and editors selecting which manuscripts to publish, and a limited number of bookstores selecting which titles to carry—is yielding to a world of digital abundance. Grandinetti told me that, in these new circumstances, a publisher’s job “is to build a megaphone.”
  • it offers an extremely popular self-publishing platform. Authors become Amazon partners, earning up to seventy per cent in royalties, as opposed to the fifteen per cent that authors typically make on hardcovers. Bezos touts the biggest successes, such as Theresa Ragan, whose self-published thrillers and romances have been downloaded hundreds of thousands of times. But one survey found that half of all self-published authors make less than five hundred dollars a year.
  • The business term for all this clear-cutting is “disintermediation”: the elimination of the “gatekeepers,” as Bezos calls the professionals who get in the customer’s way. There’s a populist inflection to Amazon’s propaganda, an argument against élitist institutions and for “the democratization of the means of production”—a common line of thought in the West Coast tech world
  • “Book publishing is a very human business, and Amazon is driven by algorithms and scale,” Sargent told me. When a house gets behind a new book, “well over two hundred people are pushing your book all over the place, handing it to people, talking about it. A mass of humans, all in one place, generating tremendous energy—that’s the magic potion of publishing. . . . That’s pretty hard to replicate in Amazon’s publishing world, where they have hundreds of thousands of titles.”
  • By producing its own original work, Amazon can sell more devices and sign up more Prime members—a major source of revenue. While the company was building the…
  • Like the publishing venture, Amazon Studios set out to make the old “gatekeepers”—in this case, Hollywood agents and executives—obsolete. “We let the data drive what to put in front of customers,” Carr told the Wall Street Journal. “We don’t have tastemakers deciding what our customers should read, listen to, and watch.”
  • book publishers have been consolidating for several decades, under the ownership of media conglomerates like News Corporation, which squeeze them for profits, or holding companies such as Rivergroup, which strip them to service debt. The effect of all this corporatization, as with the replacement of independent booksellers by superstores, has been to privilege the blockbuster.
  • The combination of ceaseless innovation and low-wage drudgery makes Amazon the epitome of a successful New Economy company. It’s hiring as fast as it can—nearly thirty thousand employees last year.
  • the long-term outlook is discouraging. This is partly because Americans don’t read as many books as they used to—they are too busy doing other things with their devices—but also because of the relentless downward pressure on prices that Amazon enforces.
  • The digital market is awash with millions of barely edited titles, most of it dreck…
  • Amazon believes that its approach encourages ever more people to tell their stories to ever more people, and turns writers into entrepreneurs; the price per unit might be cheap, but the higher number of units sold, and the accompanying royalties, will make authors wealthier
  • In Friedman’s view, selling digital books at low prices will democratize reading: “What do you want as an author—to sell books to as few people as possible for as much as possible, or for as little as possible to as many readers as possible?”
  • “The real talent, the people who are writers because they happen to be really good at writing—they aren’t going to be able to afford to do it.”
  • Seven-figure bidding wars still break out over potential blockbusters, even though these battles often turn out to be follies. The quest for publishing profits in an economy of scarcity drives the money toward a few big books. So does the gradual disappearance of book reviewers and knowledgeable booksellers, whose enthusiasm might have rescued a book from drowning in obscurity. When consumers are overwhelmed with choices, some experts argue, they all tend to buy the same well-known thing.
  • These trends point toward what the literary agent called “the rich getting richer, the poor getting poorer.” A few brand names at the top, a mass of unwashed titles down below, the middle hollowed out: the book business in the age of Amazon mirrors the widening inequality of the broader economy.
  • “If they did, in my opinion they would save the industry. They’d lose thirty per cent of their sales, but they would have an additional thirty per cent for every copy they sold, because they’d be selling directly to consumers. The industry thinks of itself as Procter & Gamble. What gave publishers the idea that this was some big goddam business? It’s not—it’s a tiny little business, selling to a bunch of odd people who read.”
  • Bezos is right: gatekeepers are inherently élitist, and some of them have been weakened, in no small part, because of their complacency and short-term thinking. But gatekeepers are also barriers against the complete commercialization of ideas, allowing new talent the time to develop and learn to tell difficult truths. When the last gatekeeper but one is gone, will Amazon care whether a book is any good? ♦

How we're herded by language | Sarah Bakewell | Comment is free | The Guardian

  • "the Middle East is a powder keg, and today the fuse is getting shorte
  • "armchair isolationists"
  • "America's poodle"
  • ...6 more annotations...
  • Yet meanings shift.
  • contributors trace the present meaning of “eager, obedient lackey” back to at least 1907, when Lloyd George called the House of Lords the Earl of Balfour’s poodle
  • "Das also war des Pudels Kern!" – "So that was the poodle's core!" – which became a German catchphrase
  • Once you start noticing the metaphors in everything you say, you realise how central they are to human ways of grasping the world
  • This is why Kerry's armchair works: if you sit down, you are not stepping up to the plate.
  • This is also the reason why talk of military "strikes" is significant. The term is more metaphorical than it may sound, and calls to mind carefully aimed knock-out punches or lightning bolts. We are more likely to think of a sharp, effective blow than with "bomb", which brings to mind explosions, injuries, mess. Bombs imply a down and outward movement, with things pounded to bits. Strikes imply an into and through movement, which sounds nicer. Our response is physical and instinctive, just as with the up/down distinction.

Faith, Certainty and the Presidency of... - The New York Times Magazine

  • The Delaware senator was, in fact, hearing what Bush's top deputies -- from cabinet members like Paul O'Neill, Christine Todd Whitman and Colin Powell to generals fighting in Iraq -- have been told for years when they requested explanations for many of the president's decisions, policies that often seemed to collide with accepted facts. The president would say that he relied on his ''gut'' or his ''instinct'' to guide the ship of state, and then he ''prayed over it.''
  • What underlies Bush's certainty? And can it be assessed in the temporal realm of informed consent?
  • Top officials, from cabinet members on down, were often told when they would speak in Bush's presence, for how long and on what topic. The president would listen without betraying any reaction. Sometimes there would be cross-discussions -- Powell and Rumsfeld, for instance, briefly parrying on an issue -- but the president would rarely prod anyone with direct, informed questions.
  • ...13 more annotations...
  • This is one key feature of the faith-based presidency: open dialogue, based on facts, is not seen as something of inherent value. It may, in fact, create doubt, which undercuts faith. It could result in a loss of confidence in the decision-maker and, just as important, by the decision-maker.
  • has spent a lot of time trying to size up the president. ''Most successful people are good at identifying, very early, their strengths and weaknesses, at knowing themselves,'' he told me not long ago. ''For most of us average Joes, that meant we've relied on strengths but had to work on our weakness -- to lift them to adequacy -- otherwise they might bring us down. I don't think the president really had to do that, because he always had someone there -- his family or friends -- to bail him out. I don't think, on balance, that has served him well for the moment he's in now as president. He never seems to have worked on his weaknesses.''
  • Details vary, but here's the gist of what I understand took place. George W., drunk at a party, crudely insulted a friend of his mother's. George senior and Barbara blew up. Words were exchanged along the lines of something having to be done. George senior, then the vice president, dialed up his friend, Billy Graham, who came to the compound and spent several days with George W. in probing exchanges and walks on the beach. George W. was soon born again. He stopped drinking, attended Bible study and wrestled with issues of fervent faith. A man who was lost was saved.
  • Rubenstein described that time to a convention of pension managers in Los Angeles last year, recalling that Malek approached him and said: ''There is a guy who would like to be on the board. He's kind of down on his luck a bit. Needs a job. . . . Needs some board positions.'' Though Rubenstein didn't think George W. Bush, then in his mid-40's, ''added much value,'' he put him on the Caterair board. ''Came to all the meetings,'' Rubenstein told the conventioneers. ''Told a lot of jokes. Not that many clean ones. And after a while I kind of said to him, after about three years: 'You know, I'm not sure this is really for you. Maybe you should do something else. Because I don't think you're adding that much value to the board. You don't know that much about the company.' He said: 'Well, I think I'm getting out of this business anyway. And I don't really like it that much. So I'm probably going to resign from the board.' And I said thanks. Didn't think I'd ever see him again.''
  • challenges -- from either Powell or his opposite number as the top official in domestic policy, Paul O'Neill -- were trials that Bush had less and less patience for as the months passed. He made that clear to his top lieutenants. Gradually, Bush lost what Richard Perle, who would later head a largely private-sector group under Bush called the Defense Policy Board Advisory Committee, had described as his open posture during foreign-policy tutorials prior to the 2000 campaign. (''He had the confidence to ask questions that revealed he didn't know very much,'' Perle said.) By midyear 2001, a stand-and-deliver rhythm was established. Meetings, large and small, started to take on a scripted quality.
  • That a deep Christian faith illuminated the personal journey of George W. Bush is common knowledge. But faith has also shaped his presidency in profound, nonreligious ways. The president has demanded unquestioning faith from his followers, his staff, his senior aides and his kindred in the Republican Party. Once he makes a decision -- often swiftly, based on a creed or moral position -- he expects complete faith in its rightness.
  • A cluster of particularly vivid qualities was shaping George W. Bush's White House through the summer of 2001: a disdain for contemplation or deliberation, an embrace of decisiveness, a retreat from empiricism, a sometimes bullying impatience with doubters and even friendly questioners.
  • By summer's end that first year, Vice President Dick Cheney had stopped talking in meetings he attended with Bush. They would talk privately, or at their weekly lunch. The president was spending a lot of time outside the White House, often at the ranch, in the presence of only the most trustworthy confidants.
  • ''When I was first with Bush in Austin, what I saw was a self-help Methodist, very open, seeking,'' Wallis says now. ''What I started to see at this point was the man that would emerge over the next year -- a messianic American Calvinist. He doesn't want to hear from anyone who doubts him.''
  • I had a meeting with a senior adviser to Bush. He expressed the White House's displeasure, and then he told me something that at the time I didn't fully comprehend -- but which I now believe gets to the very heart of the Bush presidency.
  • The aide said that guys like me were ''in what we call the reality-based community,'' which he defined as people who ''believe that solutions emerge from your judicious study of discernible reality.'' I nodded and murmured something about enlightenment principles and empiricism. He cut me off. ''That's not the way the world really works anymore,'' he continued. ''We're an empire now, and when we act, we create our own reality. And while you're studying that reality -- judiciously, as you will -- we'll act again, creating other new realities, which you can study too, and that's how things will sort out. We're history's actors . . . and you, all of you, will be left to just study what we do.''
  • ''If you operate in a certain way -- by saying this is how I want to justify what I've already decided to do, and I don't care how you pull it off -- you guarantee that you'll get faulty, one-sided information,'' Paul O'Neill, who was asked to resign his post of treasury secretary in December 2002, said when we had dinner a few weeks ago. ''You don't have to issue an edict, or twist arms, or be overt.''
  • George W. Bush and his team have constructed a high-performance electoral engine. The soul of this new machine is the support of millions of likely voters, who judge his worth based on intangibles -- character, certainty, fortitude and godliness -- rather than on what he says or does.

All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines - Nicholas ...

  • We rely on computers to fly our planes, find our cancers, design our buildings, audit our businesses. That's all well and good. But what happens when the computer fails?
  • On the evening of February 12, 2009, a Continental Connection commuter flight made its way through blustery weather between Newark, New Jersey, and Buffalo, New York.
  • The Q400 was well into its approach to the Buffalo airport, its landing gear down, its wing flaps out, when the pilot’s control yoke began to shudder noisily, a signal that the plane was losing lift and risked going into an aerodynamic stall. The autopilot disconnected, and the captain took over the controls. He reacted quickly, but he did precisely the wrong thing: he jerked back on the yoke, lifting the plane’s nose and reducing its airspeed, instead of pushing the yoke forward to gain velocity.
  • ...43 more annotations...
  • The crash, which killed all 49 people on board as well as one person on the ground, should never have happened.
  • The captain’s response to the stall warning, the investigators reported, “should have been automatic, but his improper flight control inputs were inconsistent with his training” and instead revealed “startle and confusion.”
  • Automation has become so sophisticated that on a typical passenger flight, a human pilot holds the controls for a grand total of just three minutes.
  • We humans have been handing off chores, both physical and mental, to tools since the invention of the lever, the wheel, and the counting bead.
  • And that, many aviation and automation experts have concluded, is a problem. Overuse of automation erodes pilots’ expertise and dulls their reflexes,
  • No one doubts that autopilot has contributed to improvements in flight safety over the years. It reduces pilot fatigue and provides advance warnings of problems, and it can keep a plane airborne should the crew become disabled. But the steady overall decline in plane crashes masks the recent arrival of “a spectacularly new type of accident,”
  • “We’re forgetting how to fly.”
  • The experience of airlines should give us pause. It reveals that automation, for all its benefits, can take a toll on the performance and talents of those who rely on it. The implications go well beyond safety. Because automation alters how we act, how we learn, and what we know, it has an ethical dimension. The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world.
  • What pilots spend a lot of time doing is monitoring screens and keying in data. They’ve become, it’s not much of an exaggeration to say, computer operators.
  • Examples of complacency and bias have been well documented in high-risk situations—on flight decks and battlefields, in factory control rooms—but recent studies suggest that the problems can bedevil anyone working with a computer
  • That may leave the person operating the computer to play the role of a high-tech clerk—entering data, monitoring outputs, and watching for failures. Rather than opening new frontiers of thought and action, software ends up narrowing our focus.
  • A labor-saving device doesn’t just provide a substitute for some isolated component of a job or other activity. It alters the character of the entire task, including the roles, attitudes, and skills of the people taking part.
  • when we work with computers, we often fall victim to two cognitive ailments—complacency and bias—that can undercut our performance and lead to mistakes. Automation complacency occurs when a computer lulls us into a false sense of security. Confident that the machine will work flawlessly and handle any problem that crops up, we allow our attention to drift.
  • Automation bias occurs when we place too much faith in the accuracy of the information coming through our monitors. Our trust in the software becomes so strong that we ignore or discount other information sources, including our own eyes and ears
  • Automation is different now. Computers can be programmed to perform complex activities in which a succession of tightly coordinated tasks is carried out through an evaluation of many variables. Many software programs take on intellectual work—observing and sensing, analyzing and judging, even making decisions—that until recently was considered the preserve of humans.
  • Automation turns us from actors into observers. Instead of manipulating the yoke, we watch the screen. That shift may make our lives easier, but it can also inhibit the development of expertise.
  • Since the late 1970s, psychologists have been documenting a phenomenon called the “generation effect.” It was first observed in studies of vocabulary, which revealed that people remember words much better when they actively call them to mind—when they generate them—than when they simply read them.
  • When you engage actively in a task, you set off intricate mental processes that allow you to retain more knowledge. You learn more and remember more. When you repeat the same task over a long period, your brain constructs specialized neural circuits dedicated to the activity.
  • What looks like instinct is hard-won skill, skill that requires exactly the kind of struggle that modern software seeks to alleviate.
  • In many businesses, managers and other professionals have come to depend on decision-support systems to analyze information and suggest courses of action. Accountants, for example, use the systems in corporate audits. The applications speed the work, but some signs suggest that as the software becomes more capable, the accountants become less so.
  • You can put limits on the scope of automation, making sure that people working with computers perform challenging tasks rather than merely observing.
  • Experts used to assume that there were limits to the ability of programmers to automate complicated tasks, particularly those involving sensory perception, pattern recognition, and conceptual knowledge
  • Who needs humans, anyway? That question, in one rhetorical form or another, comes up frequently in discussions of automation. If computers’ abilities are expanding so quickly and if people, by comparison, seem slow, clumsy, and error-prone, why not build immaculately self-contained systems that perform flawlessly without any human oversight or intervention? Why not take the human factor out of the equation?
  • The cure for imperfect automation is total automation.
  • That idea is seductive, but no machine is infallible. Sooner or later, even the most advanced technology will break down, misfire, or, in the case of a computerized system, encounter circumstances that its designers never anticipated. As automation technologies become more complex, relying on interdependencies among algorithms, databases, sensors, and mechanical parts, the potential sources of failure multiply. They also become harder to detect.
  • conundrum of computer automation.
  • Because many system designers assume that human operators are “unreliable and inefficient,” at least when compared with a computer, they strive to give the operators as small a role as possible.
  • People end up functioning as mere monitors, passive watchers of screens. That’s a job that humans, with our notoriously wandering minds, are especially bad at
  • people have trouble maintaining their attention on a stable display of information for more than half an hour. “This means,” Bainbridge observed, “that it is humanly impossible to carry out the basic function of monitoring for unlikely abnormalities.”
  • Because a person’s skills “deteriorate when they are not used,” even an experienced operator will eventually begin to act like an inexperienced one if restricted to just watching.
  • You can program software to shift control back to human operators at frequent but irregular intervals; knowing that they may need to take command at any moment keeps people engaged, promoting situational awareness and learning.
  • What’s most astonishing, and unsettling, about computer automation is that it’s still in its early stages.
  • most software applications don’t foster learning and engagement. In fact, they have the opposite effect. That’s because taking the steps necessary to promote the development and maintenance of expertise almost always entails a sacrifice of speed and productivity.
  • Learning requires inefficiency. Businesses, which seek to maximize productivity and profit, would rarely accept such a trade-off. Individuals, too, almost always seek efficiency and convenience.
  • Abstract concerns about the fate of human talent can’t compete with the allure of saving time and money.
  • The small island of Igloolik, off the coast of the Melville Peninsula in the Nunavut territory of northern Canada, is a bewildering place in the winter.
  • Inuit hunters have for some 4,000 years ventured out from their homes on the island and traveled across miles of ice and tundra to search for game. The hunters’ ability to navigate vast stretches of the barren Arctic terrain, where landmarks are few, snow formations are in constant flux, and trails disappear overnight, has amazed explorers and scientists for centuries. The Inuit’s extraordinary way-finding skills are born not of technological prowess—they long eschewed maps and compasses—but of a profound understanding of winds, snowdrift patterns, animal behavior, stars, and tides.
  • The Igloolik hunters have begun to rely on computer-generated maps to get around. Adoption of GPS technology has been particularly strong among younger Inuit, and it’s not hard to understand why.
  • But as GPS devices have proliferated on Igloolik, reports of serious accidents during hunts have spread. A hunter who hasn’t developed way-finding skills can easily become lost, particularly if his GPS receiver fails.
  • The routes so meticulously plotted on satellite maps can also give hunters tunnel vision, leading them onto thin ice or into other hazards a skilled navigator would avoid.
  • An Inuit on a GPS-equipped snowmobile is not so different from a suburban commuter in a GPS-equipped SUV: as he devotes his attention to the instructions coming from the computer, he loses sight of his surroundings. He travels “blindfolded,” as Aporta puts it
  • A unique talent that has distinguished a people for centuries may evaporate in a generation.
  • Computer automation severs the ends from the means. It makes getting what we want easier, but it distances us from the work of knowing. As we transform ourselves into creatures of the screen, we face an existential question: Does our essence still lie in what we know, or are we now content to be defined by what we want?
  •  
    Automation increases the efficiency and speed of tasks, but it decreases the individual's knowledge of a task and a human's ability to learn.

Humans, Version 3.0 § SEEDMAGAZINE.COM

  • Where are we humans going, as a species? If science fiction is any guide, we will genetically evolve like in X-Men, become genetically engineered as in Gattaca, or become cybernetically enhanced like General Grievous in Star Wars.
  • There is, however, another avenue for human evolution, one mostly unappreciated in both science and fiction. It is this unheralded mechanism that will usher in the next stage of human, giving future people exquisite powers we do not currently possess, powers worthy of natural selection itself. And, importantly, it doesn’t require us to transform into cyborgs or bio-engineered lab rats. It merely relies on our natural bodies and brains functioning as they have for millions of years. This mystery mechanism of human transformation is neuronal recycling, coined by neuroscientist Stanislas Dehaene, wherein the brain’s innate capabilities are harnessed for altogether novel functions.
  • The root of these misconceptions is the radical underappreciation of the design engineered by natural selection into the powers implemented by our bodies and brains, something central to my 2009 book, The Vision Revolution. For example, optical illusions (such as the Hering) are not examples of the brain’s poor hardware design, but, rather, consequences of intricate evolutionary software for generating perceptions that correct for neural latencies in normal circumstances.
  • ...4 more annotations...
  • Like all animal brains, human brains are not general-purpose universal learning machines, but, instead, are intricately structured suites of instincts optimized for the environments in which they evolved. To harness our brains, we want to let the brain’s brilliant mechanisms run as intended—i.e., not to be twisted. Rather, the strategy is to twist Y into a shape that the brain does know how to process.
  • there is a very good reason to be optimistic that the next stage of human will come via the form of adaptive harnessing, rather than direct technological enhancement: It has already happened. We have already been transformed via harnessing beyond what we once were. We’re already Human 2.0, not the Human 1.0, or Homo sapiens, that natural selection made us. We Human 2.0’s have, among many powers, three that are central to who we take ourselves to be today: writing, speech, and music (the latter perhaps being the pinnacle of the arts). Yet these three capabilities, despite having all the hallmarks of design, were not a result of natural selection, nor were they the result of genetic engineering or cybernetic enhancement to our brains. Instead, and as I argue in both The Vision Revolution and my forthcoming Harnessed, these are powers we acquired by virtue of harnessing, or neuronal recycling.
  • Although the step from Human 1.0 to 2.0 was via cultural selection, not via explicit human designers, does the transformation to Human 3.0 need to be entirely due to a process like cultural evolution, or might we have any hope of purposely guiding our transformation? When considering our future, that’s probably the most relevant question we should be asking ourselves.
  • One of my reasons for optimism is that nature-harnessing technologies (like writing, speech, and music) must mimic fundamental ecological features in nature, and that is a much easier task for scientists to tackle than emulating the exorbitantly complex mechanisms of the brain.

Reasons for Reason - NYTimes.com

  • Rick Perry’s recent vocal dismissals of evolution, and his confident assertion that “God is how we got here” reflect an obvious divide in our culture.
  • underneath this divide is a deeper one. Really divisive disagreements are typically not just over the facts. They are also about the best way to support our views of the facts. Call this a disagreement in epistemic principle. Our epistemic principles tell us what is rational to believe, what sources of information to trust.
  • I suspect that for most people, scientific evidence (or its lack) has nothing to do with it. Their belief in creationism is instead a reflection of a deeply held epistemic principle: that, at least on some topics, scripture is a more reliable source of information than science.  For others, including myself, this is never the case.
  • ...17 more annotations...
  • appealing to another method won’t help either — for unless that method can be shown to be reliable, using it to determine the reliability of the first method answers nothing.
  • Every one of our beliefs is produced by some method or source, be it humble (like memory) or complex (like technologically assisted science). But why think our methods, whatever they are, are trustworthy or reliable for getting at the truth? If I challenge one of your methods, you can’t just appeal to the same method to show that it is reliable. That would be circular
  • How do we rationally defend our most fundamental epistemic principles? Like many of the best philosophical mysteries, this is a problem that can seem both unanswerable and yet extremely important to solve.
  • it seems to suggest that in the end, all “rational” explanations end up grounding out on something arbitrary. It all just comes down to what you happen to believe, what you feel in your gut, your faith.  Human beings have historically found this to be a very seductive idea,
  • this is precisely the situation we seem to be headed towards in the United States. We live isolated in our separate bubbles of information culled from sources that only reinforce our prejudices and never challenge our basic assumptions. No wonder that — as the debates over evolution, or over what to include in textbooks, illustrate — we so often fail to reach agreement over the history and physical structure of the world itself. No wonder joint action grinds to a halt. When you can’t agree on your principles of evidence and rationality, you can’t agree on the facts. And if you can’t agree on the facts, you can hardly agree on what to do in the face of the facts.
  • We can’t decide on what counts as a legitimate reason to doubt my epistemic principles unless we’ve already settled on our principles—and that is the very issue in question.
  • The problem that skepticism about reason raises is not about whether I have good evidence by my principles for my principles. Presumably I do.[1] The problem is whether I can give a more objective defense of them. That is, whether I can give reasons for them that can be appreciated from what Hume called a “common point of view” — reasons that can “move some universal principle of the human frame, and touch a string, to which all mankind have an accord and symphony.”[2]
  • Any way you go, it seems you must admit you can give no reason for trusting your methods, and hence can give no reason to defend your most fundamental epistemic principles.
  • So one reason we should take the project of defending our epistemic principles seriously is that the ideal of civility demands it.
  • there is also another, even deeper, reason. We need to justify our epistemic principles from a common point of view because we need shared epistemic principles in order to even have a common point of view. Without a common background of standards against which we measure what counts as a reliable source of information, or a reliable method of inquiry, and what doesn’t, we won’t be able to agree on the facts, let alone values.
  • democracies aren’t simply organizing a struggle for power between competing interests; democratic politics isn’t war by other means. Democracies are, or should be, spaces of reasons.
  • we need an epistemic common currency because we often have to decide, jointly, what to do in the face of disagreement.
  • Sometimes we can accomplish this, in a democratic society, by voting. But we can’t decide every issue that way
  • We need some forms of common currency before we get to the voting booth.
  • Even if, as the skeptic says, we can’t defend the truth of our principles without circularity, we might still be able to show that some are better than others. Observation and experiment, for example, aren’t just good because they are reliable means to the truth. They are valuable because almost everyone can appeal to them. They have roots in our natural instincts, as Hume might have said.
  • that is one reason we need to resist skepticism about reason: we need to be able to give reasons for why some standards of reasons — some epistemic principles — should be part of that currency and some not.
  • “Reasons for Reason,” by Michael P. Lynch

The Backfire Effect « You Are Not So Smart

  • corrections tended to increase the strength of the participants’ misconceptions if those corrections contradicted their ideologies. People on opposing sides of the political spectrum read the same articles and then the same corrections, and when new evidence was interpreted as threatening to their beliefs, they doubled down. The corrections backfired.
  • Once something is added to your collection of beliefs, you protect it from harm. You do it instinctively and unconsciously when confronted with attitude-inconsistent information. Just as confirmation bias shields you when you actively seek information, the backfire effect defends you when the information seeks you, when it blindsides you. Coming or going, you stick to your beliefs instead of questioning them. When someone tries to correct you, tries to dilute your misconceptions, it backfires and strengthens them instead. Over time, the backfire effect helps make you less skeptical of those things which allow you to continue seeing your beliefs and attitudes as true and proper.
  • Psychologists call stories like these narrative scripts, stories that tell you what you want to hear, stories which confirm your beliefs and give you permission to continue feeling as you already do. If believing in welfare queens protects your ideology, you accept it and move on.
  • ...8 more annotations...
  • Contradictory evidence strengthens the position of the believer. It is seen as part of the conspiracy, and missing evidence is dismissed as part of the coverup.
  • Most online battles follow a similar pattern, each side launching attacks and pulling evidence from deep inside the web to back up their positions until, out of frustration, one party resorts to an all-out ad hominem nuclear strike
  • you can never win an argument online. When you start to pull out facts and figures, hyperlinks and quotes, you are actually making the opponent feel as though they are even more sure of their position than before you started the debate. As they match your fervor, the same thing happens in your skull. The backfire effect pushes both of you deeper into your original beliefs.
  • you spend much more time considering information you disagree with than you do information you accept. Information which lines up with what you already believe passes through the mind like a vapor, but when you come across something which threatens your beliefs, something which conflicts with your preconceived notions of how the world works, you seize up and take notice. Some psychologists speculate there is an evolutionary explanation. Your ancestors paid more attention and spent more time thinking about negative stimuli than positive because bad things required a response
  • when your beliefs are challenged, you pore over the data, picking it apart, searching for weakness. The cognitive dissonance locks up the gears of your mind until you deal with it. In the process you form more neural connections, build new memories and put out effort – once you finally move on, your original convictions are stronger than ever.
  • The backfire effect is constantly shaping your beliefs and memory, keeping you consistently leaning one way or the other through a process psychologists call biased assimilation.
  • They then separated subjects into two groups; one group said they believed homosexuality was a mental illness and one did not. Each group then read the fake studies full of pretend facts and figures suggesting their worldview was wrong. On either side of the issue, after reading studies which did not support their beliefs, most people didn’t report an epiphany, a realization they’ve been wrong all these years. Instead, they said the issue was something science couldn’t understand. When asked about other topics later on, like spanking or astrology, these same people said they no longer trusted research to determine the truth. Rather than shed their belief and face facts, they rejected science altogether.
  • As social media and advertising progresses, confirmation bias and the backfire effect will become more and more difficult to overcome. You will have more opportunities to pick and choose the kind of information which gets into your head along with the kinds of outlets you trust to give you that information. In addition, advertisers will continue to adapt, not only generating ads based on what they know about you, but creating advertising strategies on the fly based on what has and has not worked on you so far. The media of the future may be delivered based not only on your preferences, but on how you vote, where you grew up, your mood, the time of day or year – every element of you which can be quantified. In a world where everything comes to you on demand, your beliefs may never be challenged.

The Book Bench: Is Self-Knowledge Overrated? : The New Yorker

  • It’s impossible to overstate the influence of Kahneman and Tversky. Like Darwin, they helped to dismantle a longstanding myth of human exceptionalism. Although we’d always seen ourselves as rational creatures—this was our Promethean gift—it turns out that human reason is rather feeble, easily overwhelmed by ancient instincts and lazy biases. The mind is a deeply flawed machine.
  • there is a subtle optimism lurking in all of Kahneman’s work: it is the hope that self-awareness is a form of salvation, that if we know about our mental mistakes, we can avoid them. One day, we will learn to equally weigh losses and gains; science can help us escape from the cycle of human error. As Kahneman and Tversky noted in the final sentence of their classic 1974 paper, “A better understanding of these heuristics and of the biases to which they lead could improve judgments and decisions in situations of uncertainty.” Unfortunately, such hopes appear to be unfounded. Self-knowledge isn’t a cure for irrationality; even when we know why we stumble, we still find a way to fall.
  • self-knowledge is surprisingly useless. Teaching people about the hazards of multitasking doesn’t lead to less texting in the car; learning about the weakness of the will doesn’t increase the success of diets; knowing that most people are overconfident about the future doesn’t make us more realistic. The problem isn’t that we’re stupid—it’s that we’re so damn stubborn
  • ...1 more annotation...
  • Kahneman has given us a new set of labels for our shortcomings. But his greatest legacy, perhaps, is also his bleakest: By categorizing our cognitive flaws, documenting not just our errors but also their embarrassing predictability, he has revealed the hollowness of a very ancient aspiration. Knowing thyself is not enough. Not even close.

You Can't Take It With You, but You Still Want More - NYTimes.com

  • So say scholars behind research, published in the journal Psychological Science in June, that shows a deeply rooted instinct to earn more than can possibly be consumed, even when this imbalance makes us unhappy
  • Nonetheless, the researchers note that productivity rates have risen, which theoretically lets many people be just as comfortable as previous generations while working less. Yet they choose not to.
  • In the first phase, subjects sat for five minutes in front of a computer wearing a headset, and had the choice of listening to pleasant music or to obnoxious-sounding white noise.
  • ...4 more annotations...
  • All were told that there would be a second phase to the experiment, also lasting five minutes, in which they could eat the chocolate they earned. But they were told they would forfeit any chocolate they couldn’t consume, and they were asked how much they expected to be able to eat.
  • On average, people in the high-earner group predicted that they could consume 3.75 chocolates. But when it came time to “earn” chocolates, they accumulated well beyond their estimate
  • “We introduce the concept of ‘mindless accumulation,’ ” said one of the paper’s authors, Christopher Hsee, a professor of behavioral science and marketing at the University of Chicago Booth School of Business.
  • The impulse seemed less pronounced, even mixed, with the low earners. They earned less chocolate than they predicted they could eat. But the high earners and the low earners listened to about the same amount of obnoxious noise in the five-minute period, which Dr. Hsee said strongly suggested that both groups were driven by the same thing: not by how much they need, but by how much work they could withstand.

Why Elders Smile - NYTimes.com

  • When researchers ask people to assess their own well-being, people in their 20s rate themselves highly. Then there’s a decline as people get sadder in middle age, bottoming out around age 50. But then happiness levels shoot up, so that old people are happier than young people. The people who rate themselves most highly are those ages 82 to 85.
  • Older people are more relaxed, on average. They are spared some of the burden of thinking about the future. As a result, they get more pleasure out of present, ordinary activities.
  • I’d rather think that elder happiness is an accomplishment, not a condition, that people get better at living through effort, by mastering specific skills. I’d like to think that people get steadily better at handling life’s challenges. In middle age, they are confronted by stressful challenges they can’t control, like having teenage children. But, in old age, they have more control over the challenges they will tackle and they get even better at addressing them.
  • ...10 more annotations...
  • Aristotle teaches us that being a good person is not mainly about learning moral rules and following them. It is about performing social roles well — being a good parent or teacher or lawyer or friend.
  • First, there’s bifocalism, the ability to see the same situation from multiple perspectives.
  • “Anyone who has worn bifocal lenses knows that it takes time to learn to shift smoothly between perspectives and to combine them in a single field of vision. The same is true of deliberation. It is difficult to be compassionate, and often just as difficult to be detached, but what is most difficult of all is to be both at once.”
  • Only with experience can a person learn to see a fraught situation both close up, with emotional intensity, and far away, with detached perspective.
  • Then there’s lightness, the ability to be at ease with the downsides of life.
  • while older people lose memory they also learn that most setbacks are not the end of the world. Anxiety is the biggest waste in life. If you know that you’ll recover, you can save time and get on with it sooner.
  • Then there is the ability to balance tensions. In “Practical Wisdom,” Barry Schwartz and Kenneth Sharpe argue that performing many social roles means balancing competing demands. A doctor has to be honest but also kind. A teacher has to instruct but also inspire.
  • You can’t find the right balance in each context by memorizing a rule book. This form of wisdom can only be earned by acquiring a repertoire of similar experiences.
  • Finally, experienced heads have intuitive awareness of the landscape of reality, a feel for what other people are thinking and feeling, an instinct for how events will flow.
  • a lifetime of intellectual effort can lead to empathy and pattern awareness. “What I have lost with age in my capacity for hard mental work,” Goldberg writes, “I seem to have gained in my capacity for instantaneous, almost unfairly easy insight.”

A Meditation on the Art of Not Trying - NYTimes.com

  • It’s the default prescription for any tense situation: a blind date, a speech, a job interview, the first dinner with the potential in-laws. Relax. Act natural. Just be yourself. But when you’re nervous, how can you be yourself?
  • Edward Slingerland. He has developed, quite deliberately, a theory of spontaneity based on millenniums of Asian philosophy and decades of research by psychologists and neuroscientists.
  • He calls it the paradox of wu wei, the Chinese term for “effortless action.”
  • ...18 more annotations...
  • Wu wei is integral to romance, religion, politics and commerce. It’s why some leaders have charisma and why business executives insist on a drunken dinner before sealing a deal.
  • the quest for wu wei has been going on ever since humans began living in groups larger than hunter-gathering clans. Unable to rely on the bonds of kinship, the first urban settlements survived by developing shared values, typically through religion, that enabled people to trust one another’s virtue and to cooperate for the common good.
  • But there was always the danger that someone was faking it and would make a perfectly rational decision to put his own interest first if he had a chance to shirk his duty.
  • To be trusted, it wasn’t enough just to be a sensible, law-abiding citizen, and it wasn’t even enough to dutifully strive to be virtuous. You had to demonstrate that your virtue was so intrinsic that it came to you effortlessly.
  • the discovery in 1993 of bamboo strips in a tomb in the village of Guodian in central China. The texts on the bamboo, composed more than three centuries before Christ, emphasize that following rules and fulfilling obligations are not enough to maintain social order.
  • These texts tell aspiring politicians that they must have an instinctive sense of their duties to their superiors: “If you try to be filial, this not true filiality; if you try to be obedient, this is not true obedience. You cannot try, but you also cannot not try.”
  • is that authentic wu wei? Not according to the rival school of Taoists that arose around the same time as Confucianism, in the fifth century B.C. It was guided by the Tao Te Ching, “The Classic of the Way and Virtue,” which took a direct shot at Confucius: “The worst kind of Virtue never stops striving for Virtue, and so never achieves Virtue.”
  • Through willpower and the rigorous adherence to rules, traditions and rituals, the Confucian “gentleman” was supposed to learn proper behavior so thoroughly that it would eventually become second nature to him.
  • Taoists did not strive. Instead of following the rigid training and rituals required by Confucius, they sought to liberate the natural virtue within. They went with the flow. They disdained traditional music in favor of a funkier new style with a beat. They emphasized personal meditation instead of formal scholarship.
  • Variations of this debate would take place among Zen Buddhist, Hindu and Christian philosophers, and continue today among psychologists and neuroscientists arguing how much of morality and behavior is guided by rational choices or by unconscious feelings.
  • “Psychological science suggests that the ancient Chinese philosophers were genuinely on to something,” says Jonathan Schooler, a psychologist at the University of California, Santa Barbara. “Particularly when one has developed proficiency in an area, it is often better to simply go with the flow. Paralysis through analysis and overthinking are very real pitfalls that the art of wu wei was designed to avoid.”
  • Before signing a big deal, businesspeople often insist on getting to know potential partners at a boozy meal because alcohol makes it difficult to fake feelings.
  • Some people, like politicians and salespeople, can get pretty good at faking spontaneity, but we’re constantly looking for ways to expose them.
  • However wu wei is attained, there’s no debate about the charismatic effect it creates. It conveys an authenticity that makes you attractive, whether you’re addressing a crowd or talking to one person.
  • what’s the best strategy for wu wei — trying or not trying? Dr. Slingerland recommends a combination. Conscious effort is necessary to learn a skill, and the Confucian emphasis on following rituals is in accord with psychological research showing we have a limited amount of willpower. Training yourself to follow rules automatically can be liberating, because it conserves cognitive energy for other tasks.
  • He likes the compromise approach of Mencius, a Chinese philosopher in the fourth century B.C. who combined the Confucian and Taoist approaches: Try, but not too hard.
  • “But in many domains actual success requires the ability to transcend our training and relax completely into what we are doing, or simply forget ourselves as agents.”
  • The sprouts – an allusion to Mencius’ parable of the farmer who killed his crop by tugging at the shoots to help them grow – were Mencius’ conception of wu wei: something natural that requires gentle cultivation. You plant the seeds and water the sprouts, but at some point you need to let nature take its course. Just let the sprouts be themselves.

Ghost Illusion Created in the Lab | Neuroscience News Research Articles | Neuroscience ... - 0 views

  • Ghosts exist only in the mind, and scientists know just where to find them, an EPFL study suggests
  • In their experiment, Blanke’s team interfered with the sensorimotor input of participants in such a way that their brains no longer identified such signals as belonging to their own body, but instead interpreted them as those of someone else.
  • The researchers first analyzed the brains of 12 patients with neurological disorders – mostly epilepsy – who have experienced this kind of “apparition.” MRI analysis of the patients’ brains revealed interference with three cortical regions: the insular cortex, parietal-frontal cortex, and the temporo-parietal cortex.
  • The participants were unaware of the experiment’s purpose.
  • Instinctively, several subjects reported a strong “feeling of a presence,” even counting up to four “ghosts” where none existed.
  • Scientists performed an experiment creating the illusion of a ghost, which relates to the idea of sense perception.

Art Is Vital - James Hamblin - The Atlantic - 1 views

  • If you ask Americans whether the liberal arts are important, Gardner continued, they say yes. But in terms of budgets, what gets cut first is the arts, not “core subjects” or even athletics.
  • “came about in a frame of increased emphasis on test scores and utility—the market economy becoming a marketing society. Everything is about what you’re going to get,” in readily quantifiable terms.
  • Woetzel's vision is “to give kids the tools to become adults who are creative, adaptable, collaborative, expressive—capable of having their eyes and ears and senses alive.”
  • “We’re not talking about making sure that everybody has private music lessons,” Woetzel said. “We’re talking about a way of educating that involves artistic sensibilities—artistic habits of mind. The ability to re-assess and to imagine. To be in a science class and not think it’s about memorization entirely,” but to imagine its applications.
  • “People still don’t get it,” Woodard said. “They think it’s play time. They think it’s touchy-feely. But it’s undeniable what music, painting, [and] movement do to the brain. It becomes more receptive to scientific ideas.”
  • “You cannot be an innovator in any category,” Woodard said, “unless that creative instinct is exercised.”

Are scientists blocking their own progress? - The Washington Post - 1 views

  • Max Planck won a Nobel prize for his revolutionary work in quantum mechanics, but it was his interest in the philosophy of science that led to what is now called “Planck’s Principle.” Planck argued that science was an evolving system of thought which changes slowly over time, fueled by the deaths of old ideas. As he wrote in his 1968 autobiography: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”
  • Is our understanding of the world based in pure objective reason, or are the theories that underpin it shaped by generational biases? Do our most famous thinkers actually block new ideas from gaining ground?
  • A new paper published by the National Bureau of Economic Research suggests that fame does play a significant role in deciding when and whether new scientific ideas can gain traction. When a prominent scientist dies, the paper’s authors found, the number of articles published by his or her collaborators tends to fall “precipitously” in the years following the death — those supporters tend not to continue advocating for a once-famous scientist’s ideas once the scientist is gone.
  • the number of research articles written by other scientists — including those with opposing ideas — increases by 8 percent on average, implying that the work of these scientists had been stifled before, but that after the death of a ubiquitous figure, the field becomes more open to new ideas. The study also found that these new articles are less likely to cite previous research and are more likely to be cited by others in the field. Death signifies a changing of the guard
  • Our instinct is often to view science as a concrete tower, growing ever upward and built upon the immovable foundations of earlier pioneers.  Sir Isaac Newton famously characterized this as “standing on the shoulders of giants.”
  • Mid-20th century philosopher Thomas Kuhn was among the first to come to this conclusion, in his 1962 book “The Structure of Scientific Revolutions.” He argued that scientific theories appeared in punctuated “paradigm shifts,” in which the underlying assumptions of a field are questioned and eventually overthrown
  • Kuhn’s book was, to some extent, a paradigm shift in its own right. According to his logic, commonly held notions in science were bound to change and become outdated. What we believe today will tomorrow be revised, rewritten — and in the most extreme cases ridiculed.
  • an article in the journal Nature earlier this year said scientific data is prone to bias because researchers design experiments and make observations in ways that support their hypotheses
  • equally as important are simple shifts in perspective. It only takes one researcher seeing an accepted scientific model in a new light for a solidified paradigm to enter what Kuhn called a “crisis phase” and beg for alternative explanations
  • The NBER study suggests that those who question consensus ought to be given the opportunity to make their case, not ignored, silenced or pushed to the back of the line.
  • We’re likely to see these “paradigm shifts” happen at a much faster rate as data and research become easier to share worldwide. For some, this reality might seem chaotic; for the truly curious, it is exhilarating. The result may be a more democratic version of science — one in which the progress of ideas doesn’t have to wait until the funeral of a great mind.

BBC - Future - The surprising downsides of being clever - 0 views

  • If ignorance is bliss, does a high IQ equal misery? Popular opinion would have it so. We tend to think of geniuses as being plagued by existential angst, frustration, and loneliness. Think of Virginia Woolf, Alan Turing, or Lisa Simpson – lone stars, isolated even as they burn their brightest. As Ernest Hemingway wrote: “Happiness in intelligent people is the rarest thing I know.”
  • Combing California’s schools for the crème de la crème, psychologist Lewis Terman selected 1,500 pupils with an IQ of 140 or more – 80 of whom had IQs above 170. Together, they became known as the “Termites”, and the highs and lows of their lives are still being studied to this day.
  • The Termites’ average salary was twice that of the average white-collar job. But not all of the group met Terman’s expectations – many pursued more “humble” professions such as police officer, seafarer, and typist. For this reason, Terman concluded that “intellect and achievement are far from perfectly correlated”. Nor did their smarts endow them with personal happiness. Over the course of their lives, levels of divorce, alcoholism and suicide were about the same as the national average.
  • One possibility is that knowledge of your talents becomes something of a ball and chain. Indeed, during the 1990s, the surviving Termites were asked to look back at the events in their 80-year lifespan. Rather than basking in their successes, many reported that they had been plagued by the sense that they had somehow failed to live up to their youthful expectations.
  • The most notable, and sad, case concerns the maths prodigy Sufiah Yusof. Enrolled at Oxford University aged 12, she dropped out of her course before taking her finals and started waitressing. She later worked as a call girl, entertaining clients with her ability to recite equations during sexual acts.
  • Another common complaint, often heard in student bars and internet forums, is that smarter people somehow have a clearer vision of the world’s failings. Whereas the rest of us are blinkered from existential angst, smarter people lie awake agonising over the human condition or other people’s folly.
  • Researchers at MacEwan University in Canada found that those with higher IQs did indeed feel more anxiety throughout the day. Interestingly, though, most worries were mundane, day-to-day concerns; the high-IQ students were far more likely to be replaying an awkward conversation than to be asking the “big questions”. “It’s not that their worries were more profound, but they are just worrying more often about more things,” says Penney. “If something negative happened, they thought about it more.”
  • The worrying seemed to correlate with verbal intelligence – the kind tested by word games in IQ tests – as opposed to prowess at spatial puzzles (which, in fact, seemed to reduce the risk of anxiety). Penney speculates that greater eloquence might also make you more likely to verbalise anxieties and ruminate over them. It’s not necessarily a disadvantage, though. “Maybe they were problem-solving a bit more than most people,” he says – which might help them to learn from their mistakes.
  • The harsh truth, however, is that greater intelligence does not equate to wiser decisions; in fact, in some cases it might make your choices a little more foolish.
  • we need to turn our minds to an age-old concept: “wisdom”. Igor Grossmann’s approach is more scientific than it might at first sound. “The concept of wisdom has an ethereal quality to it,” he admits. “But if you look at the lay definition of wisdom, many people would agree it’s the idea of someone who can make good unbiased judgement.”
  • “my-side bias” – our tendency to be highly selective in the information we collect so that it reinforces our previous attitudes. The more enlightened approach would be to leave your assumptions at the door as you build your argument – but Stanovich found that smarter people are almost no more likely to do so than people with distinctly average IQs.
  • People who ace standard cognitive tests are in fact slightly more likely to have a “bias blind spot”. That is, they are less able to see their own flaws, even though they are quite capable of criticising the foibles of others. And they have a greater tendency to fall for the “gambler’s fallacy”
  • A tendency to rely on gut instincts rather than rational thought might also explain why a surprisingly high number of Mensa members believe in the paranormal; or why someone with an IQ of 140 is about twice as likely to max out their credit card.
  • “The people pushing the anti-vaccination meme on parents and spreading misinformation on websites are generally of more than average intelligence and education.” Clearly, clever people can be dangerously, and foolishly, misguided.
  • Stanovich has spent the last decade building tests for rationality, and he has found that fair, unbiased decision-making is largely independent of IQ.
  • Crucially, Grossmann found that IQ was not related to any of these measures, and certainly didn’t predict greater wisdom. “People who are very sharp may generate, very quickly, arguments [for] why their claims are the correct ones – but may do it in a very biased fashion.”
  • employers may well begin testing these abilities in place of IQ; Google has already announced that it plans to screen candidates for qualities like intellectual humility, rather than sheer cognitive prowess.
  • He points out that we often find it easier to leave our biases behind when we consider other people, rather than ourselves. Along these lines, he has found that simply talking through your problems in the third person (“he” or “she”, rather than “I”) helps create the necessary emotional distance, reducing your prejudices and leading to wiser arguments.
  • If you’ve been able to rest on the laurels of your intelligence all your life, it could be very hard to accept that it has been blinding your judgement. As Socrates had it: the wisest person really may be the one who can admit he knows nothing.

Why Silicon Valley can't fix itself | News | The Guardian - 1 views

  • After decades of rarely apologising for anything, Silicon Valley suddenly seems to be apologising for everything. They are sorry about the trolls. They are sorry about the bots. They are sorry about the fake news and the Russians, and the cartoons that are terrifying your kids on YouTube. But they are especially sorry about our brains.
  • Sean Parker, the former president of Facebook – who was played by Justin Timberlake in The Social Network – has publicly lamented the “unintended consequences” of the platform he helped create: “God only knows what it’s doing to our children’s brains.”
  • Parker, Rosenstein and the other insiders now talking about the harms of smartphones and social media belong to an informal yet influential current of tech critics emerging within Silicon Valley. You could call them the “tech humanists”. Amid rising public concern about the power of the industry, they argue that the primary problem with its products is that they threaten our health and our humanity.
  • It is clear that these products are designed to be maximally addictive, in order to harvest as much of our attention as they can. Tech humanists say this business model is both unhealthy and inhumane – that it damages our psychological well-being and conditions us to behave in ways that diminish our humanity
  • The main solution that they propose is better design. By redesigning technology to be less addictive and less manipulative, they believe we can make it healthier – we can realign technology with our humanity and build products that don’t “hijack” our minds.
  • its most prominent spokesman is executive director Tristan Harris, a former “design ethicist” at Google who has been hailed by the Atlantic magazine as “the closest thing Silicon Valley has to a conscience”. Harris has spent years trying to persuade the industry of the dangers of tech addiction.
  • In February, Pierre Omidyar, the billionaire founder of eBay, launched a related initiative: the Tech and Society Solutions Lab, which aims to “maximise the tech industry’s contributions to a healthy society”.
  • the tech humanists are making a bid to become tech’s loyal opposition. They are using their insider credentials to promote a particular diagnosis of where tech went wrong and of how to get it back on track
  • The real reason tech humanism matters is because some of the most powerful people in the industry are starting to speak its idiom. Snap CEO Evan Spiegel has warned about social media’s role in encouraging “mindless scrambles for friends or unworthy distractions”,
  • In short, the effort to humanise computing produced the very situation that the tech humanists now consider dehumanising: a wilderness of screens where digital devices chase every last instant of our attention.
  • After years of ignoring their critics, industry leaders are finally acknowledging that problems exist. Tech humanists deserve credit for drawing attention to one of those problems – the manipulative design decisions made by Silicon Valley.
  • these decisions are only symptoms of a larger issue: the fact that the digital infrastructures that increasingly shape our personal, social and civic lives are owned and controlled by a few billionaires
  • Because it ignores the question of power, the tech-humanist diagnosis is incomplete – and could even help the industry evade meaningful reform
  • Taken up by leaders such as Zuckerberg, tech humanism is likely to result in only superficial changes
  • they will not address the origin of that anger. If anything, they will make Silicon Valley even more powerful.
  • In response to the litany of problems caused by “technology that extracts attention and erodes society”, the text asserts that “humane design is the solution”. Drawing on the rhetoric of the “design thinking” philosophy that has long suffused Silicon Valley, the website explains that humane design “starts by understanding our most vulnerable human instincts so we can design compassionately”
  • this language is not foreign to Silicon Valley. On the contrary, “humanising” technology has long been its central ambition and the source of its power. It was precisely by developing a “humanised” form of computing that entrepreneurs such as Steve Jobs brought computing into millions of users’ everyday lives
  • Facebook had a new priority: maximising “time well spent” on the platform, rather than total time spent. By “time well spent”, Zuckerberg means time spent interacting with “friends” rather than businesses, brands or media sources. He said the News Feed algorithm was already prioritising these “more meaningful” activities.
  • Tech humanists say they want to align humanity and technology. But this project is based on a deep misunderstanding of the relationship between humanity and technology: namely, the fantasy that these two entities could ever exist in separation.
  • They believe we can use better design to make technology serve human nature rather than exploit and corrupt it. But this idea is drawn from the same tradition that created the world that tech humanists believe is distracting and damaging us.
  • The story of our species began when we began to make tools
  • All of which is to say: humanity and technology are not only entangled, they constantly change together.
  • This is not just a metaphor. Recent research suggests that the human hand evolved to manipulate the stone tools that our ancestors used
  • The ways our bodies and brains change in conjunction with the tools we make have long inspired anxieties that “we” are losing some essential qualities
  • Yet as we lose certain capacities, we gain new ones.
  • The nature of human nature is that it changes. It cannot, therefore, serve as a stable basis for evaluating the impact of technology
  • Yet the assumption that it doesn’t change serves a useful purpose. Treating human nature as something static, pure and essential elevates the speaker into a position of power. Claiming to tell us who we are, they tell us how we should be.
  • Messaging, for instance, is considered the strongest signal. It’s reasonable to assume that you’re closer to somebody you exchange messages with than somebody whose post you once liked.
  • Harris and his fellow tech humanists also frequently invoke the language of public health. The Center for Humane Technology’s Roger McNamee has gone so far as to call public health “the root of the whole thing”, and Harris has compared using Snapchat to smoking cigarettes
  • The public-health framing casts the tech humanists in a paternalistic role. Resolving a public health crisis requires public health expertise. It also precludes the possibility of democratic debate. You don’t put the question of how to treat a disease up for a vote – you call a doctor.
  • They also remain confined to the personal level, aiming to redesign how the individual user interacts with technology rather than tackling the industry’s structural failures. Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit.
  • This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.
  • Far from challenging Silicon Valley, tech humanism offers Silicon Valley a useful way to pacify public concerns without surrendering any of its enormous wealth and power.
  • these principles could make Facebook even more profitable and powerful, by opening up new business opportunities. That seems to be exactly what Facebook has planned.
  • Facebook reported that total time spent on the platform had dropped by around 5%, or about 50m hours per day. But, Zuckerberg said, this was by design: in particular, it was in response to tweaks to the News Feed that prioritised “meaningful” interactions with “friends” rather than consuming “public content” like video and news. This would ensure that “Facebook isn’t just fun, but also good for people’s well-being”
  • Zuckerberg said he expected those changes would continue to decrease total time spent – but “the time you do spend on Facebook will be more valuable”. This may describe what users find valuable – but it also refers to what Facebook finds valuable
  • not all data is created equal. One of the most valuable sources of data to Facebook is used to inform a metric called “coefficient”. This measures the strength of a connection between two users – Zuckerberg once called it “an index for each relationship”
  • Facebook records every interaction you have with another user – from liking a friend’s post or viewing their profile, to sending them a message. These activities provide Facebook with a sense of how close you are to another person, and different activities are weighted differently (a toy sketch of this kind of weighted scoring appears after this list).
  • Holding humanity and technology separate clears the way for a small group of humans to determine the proper alignment between them
  • Why is coefficient so valuable? Because Facebook uses it to create a Facebook they think you will like: it guides algorithmic decisions about what content you see and the order in which you see it. It also helps improve ad targeting, by showing you ads for things liked by friends with whom you often interact
  • emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform.
  • “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics
  • industrialists had to find ways to make the time of the worker more valuable – to extract more money from each moment rather than adding more moments. They did this by making industrial production more efficient: developing new technologies and techniques that squeezed more value out of the worker and stretched that value further than ever before.
  • there is another way of thinking about how to live with technology – one that is both truer to the history of our species and useful for building a more democratic future. This tradition does not address “humanity” in the abstract, but as distinct human beings, whose capacities are shaped by the tools they use.
  • It sees us as hybrids of animal and machine – as “cyborgs”, to quote the biologist and philosopher of science Donna Haraway.
  • The cyborg way of thinking, by contrast, tells us that our species is essentially technological. We change as we change our tools, and our tools change us. But even though our continuous co-evolution with our machines is inevitable, the way it unfolds is not. Rather, it is determined by who owns and runs those machines. It is a question of power
  • The various scandals that have stoked the tech backlash all share a single source. Surveillance, fake news and the miserable working conditions in Amazon’s warehouses are profitable. If they were not, they would not exist. They are symptoms of a profound democratic deficit inflicted by a system that prioritises the wealth of the few over the needs and desires of the many.
  • If being technological is a feature of being human, then the power to shape how we live with technology should be a fundamental human right
  • The decisions that most affect our technological lives are far too important to be left to Mark Zuckerberg, rich investors or a handful of “humane designers”. They should be made by everyone, together.
  • Rather than trying to humanise technology, then, we should be trying to democratise it. We should be demanding that society as a whole gets to decide how we live with technology
  • What does this mean in practice? First, it requires limiting and eroding Silicon Valley’s power.
  • Antitrust laws and tax policy offer useful ways to claw back the fortunes Big Tech has built on common resources
  • democratic governments should be making rules about how those firms are allowed to behave – rules that restrict how they can collect and use our personal data, for instance, like the General Data Protection Regulation
  • This means developing publicly and co-operatively owned alternatives that empower workers, users and citizens to determine how they are run.
  • we might demand that tech firms pay for the privilege of extracting our data, so that we can collectively benefit from a resource we collectively create.
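How such a “coefficient” might work is easy to caricature in a few lines of code. The sketch below is purely hypothetical – the interaction types, weights, and scoring function are invented for illustration, since Facebook’s actual model is not public – but it shows the general shape of a weighted relationship score, with messaging weighted most heavily, as the article suggests.

```python
from collections import Counter

# Invented interaction weights; messaging is treated as the strongest signal.
INTERACTION_WEIGHTS = {
    "message_sent": 5.0,
    "comment": 3.0,
    "profile_view": 2.0,
    "like": 1.0,
}

def coefficient(interactions: Counter) -> float:
    """Weighted sum of interaction counts between two users."""
    return sum(INTERACTION_WEIGHTS.get(kind, 0.0) * count
               for kind, count in interactions.items())

# Toy example: rank three contacts by relationship strength.
history = {
    "alice": Counter(message_sent=12, like=30),  # 12*5 + 30*1 = 90
    "bob":   Counter(like=45, profile_view=2),   # 45*1 + 2*2  = 49
    "carol": Counter(comment=8),                 #  8*3        = 24
}

for user in sorted(history, key=lambda u: coefficient(history[u]), reverse=True):
    print(user, coefficient(history[user]))
# alice 90.0, bob 49.0, carol 24.0 -- a dozen messages outweigh dozens of likes
```

A score like this could then drive both feed ranking and ad targeting, which is why intensifying data-rich interactions (“time well spent”) is plausibly worth more to the platform than raw minutes.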

'Nothing on this page is real': How lies become truth in online America - The Washingto... - 0 views

  • “Share if you’re outraged!” his posts often read, and thousands of people on Facebook had clicked “like” and then “share,” most of whom did not recognize his posts as satire. Instead, Blair’s page had become one of the most popular on Facebook among Trump-supporting conservatives over 55.
  • “Nothing on this page is real,” read one of the 14 disclaimers on Blair’s site, and yet in the America of 2018 his stories had become real, reinforcing people’s biases, spreading onto Macedonian and Russian fake news sites, amassing an audience of as many as 6 million visitors each month who thought his posts were factual
  • “No matter how racist, how bigoted, how offensive, how obviously fake we get, people keep coming back,” Blair once wrote, on his own personal Facebook page. “Where is the edge? Is there ever a point where people realize they’re being fed garbage and decide to return to reality?”
  • Chapian didn’t believe everything she read online, but she was also distrustful of mainstream fact-checkers and reported news. It sometimes felt to her like real facts had become indiscernible — that the truth was often somewhere in between. What she trusted most was her own ability to think critically and discern the truth, and increasingly her instincts aligned with the online community where she spent most of her time.
  • Her number of likes and shares on Facebook increased each year until she was sometimes awakening to check her news feed in the middle of the night, liking and commenting on dozens of posts each day. She felt as if she was being let in on a series of dark revelations about the United States, and it was her responsibility to see and to share them.

Can truth survive this president? An honest investigation. - The Washington Post - 0 views

  • in the summer of 2002, long before “fake news” or “post-truth” infected the vernacular, one of President George W. Bush’s top advisers mocked a journalist for being part of the “reality-based community.” Seeking answers in reality was for suckers, the unnamed adviser explained. “We’re an empire now, and when we act, we create our own reality.”
  • This was the hubris and idealism of a post-Cold War, pre-Iraq War superpower: If you exert enough pressure, events will bend to your will.
  • the deceit emanating from the White House today is lazier, more cynical. It is not born of grand strategy or ideology; it is impulsive and self-serving. It is not arrogant, but shameless.
  • Bush wanted to remake the world. President Trump, by contrast, just wants to make it up as he goes along
  • Through all their debates over who is to blame for imperiling truth (whether Trump, postmodernism, social media or Fox News), as well as the consequences (invariably dire) and the solutions (usually vague), a few conclusions materialize, should you choose to believe them.
  • There is a pattern and logic behind the dishonesty of Trump and his surrogates; however, it’s less multidimensional chess than the simple subordination of reality to political and personal ambition
  • Trump’s untruth sells best precisely when feelings and instincts overpower facts, when America becomes a safe space for fabrication.
  • Rand Corp. scholars Jennifer Kavanagh and Michael D. Rich point to the Gilded Age, the Roaring Twenties and the rise of television in the mid-20th century as earlier periods of what they call “Truth Decay” — marked by growing disagreement over facts and interpretation of data; a blurring of lines between opinion, fact and personal experience; and diminishing trust in once-respected sources of information.
  • In eras of truth decay, “competing narratives emerge, tribalism within the U.S. electorate increases, and political paralysis and dysfunction grow,”
  • Once you add the silos of social media as well as deeply polarized politics and deteriorating civic education, it becomes “nearly impossible to have the types of meaningful policy debates that form the foundation of democracy.”
  • To interpret our era’s debasement of language, Kakutani reflects perceptively on the World War II-era works of Victor Klemperer, who showed how the Nazis used “words as ‘tiny doses of arsenic’ to poison and subvert the German culture,” and of Stefan Zweig, whose memoir “The World of Yesterday” highlights how ordinary Germans failed to grasp the sudden erosion of their freedoms.
  • Kakutani calls out lefty academics who for decades preached postmodernism and social constructivism, which argued that truth is not universal but a reflection of relative power, structural forces and personal vantage points.
  • postmodernists rejected Enlightenment ideals as “vestiges of old patriarchal and imperialist thinking,” Kakutani writes, paving the way for today’s violence against fact in politics and science.
  • “dumbed-down corollaries” of postmodernist thought have been hijacked by Trump’s defenders, who use them to explain away his lies, inconsistencies and broken promises.
  • intelligent-design proponents and later climate deniers drew from postmodernism to undermine public perceptions of evolution and climate change. “Even if right-wing politicians and other science deniers were not reading Derrida and Foucault, the germ of the idea made its way to them: science does not have a monopoly on the truth,
  • McIntyre quotes at length from mea culpas by postmodernist and social constructivist writers agonizing over what their theories have wrought, shocked that conservatives would use them for nefarious purposes
  • pro-Trump troll and conspiracy theorist Mike Cernovich , who helped popularize the “Pizzagate” lie, has forthrightly cited his unlikely influences. “Look, I read postmodernist theory in college,” Cernovich told the New Yorker in 2016. “If everything is a narrative, then we need alternatives to the dominant narrative. I don’t seem like a guy who reads [Jacques] Lacan, do I?
  • When truth becomes malleable and contestable regardless of evidence, a mere tussle of manufactured narratives, it becomes less about conveying facts than about picking sides, particularly in politics.
  • In “On Truth,” Cambridge University philosopher Simon Blackburn writes that truth is attainable, if at all, “only at the vanishing end points of enquiry,” adding that, “instead of ‘facts first’ we may do better if we think of ‘enquiry first,’ with the notion of fact modestly waiting to be invited to the feast afterward.
  • He is concerned, but not overwhelmingly so, about the survival of truth under Trump. “Outside the fevered world of politics, truth has a secure enough foothold,” Blackburn writes. “Perjury is still a serious crime, and we still hope that our pilots and surgeons know their way about.
  • Kavanagh and Rich offer similar consolation: “Facts and data have become more important in most other fields, with political and civil discourse being striking exceptions. Thus, it is hard to argue that the world is truly ‘post-fact.’ ”
  • McIntyre argues persuasively that our methods of ascertaining truth — not just the facts themselves — are under attack, too, and that this assault is especially dangerous.
  • Ideologues don’t just disregard facts they disagree with, he explains, but willingly embrace any information, however dubious, that fits their agenda. “This is not the abandonment of facts, but a corruption of the process by which facts are credibly gathered and reliably used to shape one’s beliefs about reality. Indeed, the rejection of this undermines the idea that some things are true irrespective of how we feel about them.”
  • “It is hardly a depressing new phenomenon that people’s beliefs are capable of being moved by their hopes, grievances and fears,” Blackburn writes. “In order to move people, objective facts must become personal beliefs.” But it can’t work — or shouldn’t work — in reverse.
  • More than fearing a post-truth world, Blackburn is concerned by a “post-shame environment,” in which politicians easily brush off their open disregard for truth.
  • it is human nature to rationalize away the dissonance. “Why get upset by his lies, when all politicians lie?” Kakutani asks, distilling the mind-set. “Why get upset by his venality, when the law of the jungle rules?”
  • So any opposition is deemed a witch hunt, or fake news, rigged or just so unfair. Trump is not killing the truth. But he is vandalizing it, constantly and indiscriminately, diminishing its prestige and appeal, coaxing us to look away from it.
  • the collateral damage includes the American experiment.
  • “One of the most important ways to fight back against post-truth is to fight it within ourselves,” he writes, whatever our particular politics may be. “It is easy to identify a truth that someone else does not want to see. But how many of us are prepared to do this with our own beliefs? To doubt something that we want to believe, even though a little piece of us whispers that we do not have all the facts?”

Opinion | In Memoriam: What Would Gary Gutting Do? - The New York Times - 0 views

  • He was an adviser and mentor to both me and The Stone’s co-founder and moderator, the philosopher Simon Critchley, who first met and worked with Gary at Notre Dame more than 15 years ago. Simon described Gary’s work well as “a properly American voice, clear, without ever being shrill, tolerant without ever being uncritical, and instinctively committed to the idea that philosophy could be communicated to a larger public audience.”
  • The most bitter cultural arguments in American intellectual life were comfortable places for Gary — or perhaps he saw them as opportunities — and I believe that he entered into them not so much to establish the dominance of his own view — as a believer in God, in humanistic education, or in the promise of the United States — but to help put the debates on sane ground, to level them through reason and friendly engagement, to be a peacemaker and to advance the invaluable work of civil public discourse and argument.
  • I often found myself considering the merit of a certain idea or argument, or wondering about the philosophical soundness of a particular essay. I would quite literally ask, sometimes out loud, “What would Gary do?” I would then think hard about that and try to act accordingly. But when I got stuck, I would write or call him for guidance — a session, I might call it. The pleasure of those calls came not just from having my thinking clarified and gently set right by a person wiser than me, but also from hearing once again his reassuring, friendly, articulate Midwestern tenor, and what seemed to be his endlessly renewable excitement about people and ideas.
  • A digestible portion of Gary’s thinking on his work and career can be found at 3AM Magazine in this 2012 interview with Richard Marshall. And this quote from that talk is as good as any to return to now, as a reminder of the continual work he saw as necessary for remaining true to both ourselves and to the world around us: “Our fundamental beliefs don’t need intellectual justification, but they do need intellectual maintenance. We need to understand their implications, modify them to eliminate internal contradictions, defend and perhaps modify them in response to objections.”

The Facebook Fallacy: Privacy Is Up to You - The New York Times - 0 views

  • As Facebook’s co-founder and chief executive parried questions from members of Congress about how the social network would protect its users’ privacy, he returned time and again to what probably sounded like an unimpeachable proposition.
  • By providing its users with greater and more transparent controls over the personal data they share and how it is used for targeted advertising, he insisted, Facebook could empower them to make their own call and decide how much privacy they were willing to put on the block.
  • providing a greater sense of control over their personal data won’t make Facebook users more cautious. It will instead encourage them to share more.
  • “Disingenuous is the adjective I had in my mind,”
  • “Fifteen years ago it would have been legitimate to propose this argument,” he added. “But it is no longer legitimate to ignore the behavioral problems and propose simply more transparency and controls.”
  • Professor Acquisti and two colleagues, Laura Brandimarte and the behavioral economist George Loewenstein, published research on this behavior nearly six years ago. “Providing users of modern information-sharing technologies with more granular privacy controls may lead them to share more sensitive information with larger, and possibly riskier, audiences,” they concluded.
  • the critical question is whether, given the tools, we can be trusted to manage the experience. The increasing body of research into how we behave online suggests not.
  • “Privacy control settings give people more rope to hang themselves,” Professor Loewenstein told me. “Facebook has figured this out, so they give you incredibly granular controls.”
  • This paradox is hardly the only psychological quirk for the social network to exploit. Consider default settings. Tons of research in behavioral economics has found that people tend to stick to the default setting of whatever is offered to them, even when they could change it easily.
  • “Facebook is acutely aware of this,” Professor Loewenstein told me. In 2005, its default settings shared most profile fields with, at most, friends of friends. Nothing was shared by default with the full internet.
  • By 2010, however, likes, name, gender, picture and a lot of other things were shared with everybody online. “Facebook changed the defaults because it appreciated their power,” Professor Loewenstein added.
  • The phenomenon even has a name: the “control paradox.”
  • people who profess concern about privacy will still provide the email addresses of their friends in exchange for some pizza.
  • They also found that providing consumers reassuring though irrelevant information about their ability to protect their privacy will make them less likely to avoid surveillance.
  • Another experiment revealed that people are more willing to come clean about their engagement in illicit or questionable behavior when they believe others have done so, too
  • Those in the industry often argue that people don’t really care about their privacy — that they may seem concerned when they answer surveys, but still routinely accept cookies and consent to have their data harvested in exchange for cool online experiences
  • Professor Acquisti thinks this is a fallacy. The cognitive hurdles to manage our privacy online are simply too steep.
  • While we are good at handling our privacy in the offline world, lowering our voices or closing the curtains as the occasion may warrant, there are no cues online to alert us to a potential privacy invasion
  • Even if we were to know precisely what information companies like Facebook have about us and how it will be used, which we don’t, it would be hard for us to assess potential harms
  • Members of Congress have mostly let market forces prevail online, unfettered by government meddling. Privacy protection in the internet economy has relied on the belief that consumers will make rational choices
  • Europe’s stringent new privacy protection law, which Facebook has promised to apply in the United States, may do better than the American system of disclosure and consent
  • the European system also relies mostly on faith that consumers will make rational choices.
  • The more that psychologists and behavioral economists study psychological biases and quirks, the clearer it seems that rational choices alone won’t work. “I don’t think any kind of disclosure or opt in or opt out is going to protect us from our worst instincts,”
  • What to do? Professor Acquisti suggests flipping the burden of proof. The case for privacy regulation rests on consumers’ proving that data collection is harmful. Why not ask the big online platforms like Facebook to prove they can’t work without it? If reducing data collection imposes a cost, we could figure out who bears it — whether consumers, advertisers or Facebook’s bottom line.