
Home/ TOK Friends/ Group items tagged alternative facts


sandrine_h

To advance science we need to think about the impossible | New Scientist - 0 views

  • Science sets out what we think is true – but when it gets stuck, it’s time to explore what we think isn’t
  • science has always advanced in small steps, paving the way for occasional leaps. But sometimes fact-collecting yields nothing more than a collection of facts; no revelation follows. At such times, we need to step back from the facts we know and imagine alternatives: in other words, to ask “what if?”
  • That was how Albert Einstein broke the bind in which physics found itself in the early 20th century. His conception of a scenario that received wisdom deemed impossible – that light’s speed is always the same, regardless of how you look at it – led to special relativity and demolished what we thought we knew about space and time.
  • ...3 more annotations...
  • Despite its dependence on hard evidence, science is a creative discipline. That creativity needs nurturing, even in this age of performance targets and impact assessments. Scientists need to flex their imaginations, too.
  • “Let us dare to dream,” the chemist August Kekulé once suggested, “and then perhaps we may learn the truth.”
  • Physics isn’t the only field that might benefit from a judicious dose of what-iffery. Attempts to understand consciousness are also just inching forward
tongoscar

Did Trump Propose Cuts to Federal Pay Raises, Citing 'Serious Economic Conditions'? - 0 views

  • It was. In the message, Trump stated that federal law “authorizes me to implement alternative plans for pay adjustments for civilian Federal employees covered by the General Schedule and certain other pay systems if, because of ‘national emergency or serious economic conditions affecting the general welfare,’ I view the increases that would otherwise take effect as inappropriate.”
  • Congress approved a 3.1% raise for federal workers, which went into effect in 2020. It was the largest pay increase in a decade. Legislation introduced for 2021 calls for a 3.5% increase.
  • It’s not unusual for presidents to make efforts to prevent full pay raises for federal workers from going into effect, Kauffman said. What is unusual is that Trump is using the economy to justify cuts even as he has been outspoken about the strength of the economy. One day after his message to Congress, Trump tweeted, “BEST USA ECONOMY IN HISTORY!”
  • ...2 more annotations...
  • “If the economy is doing so well, then why can’t we afford to provide a pay raise to the federal workers, a third of whom are veterans, who ensure our democracy is working every day and ensure services are delivered,” Kauffman said. “Currently federal employees make about 4% less today than they did at the start of the decade, accounting for inflation.”
  • But “every single president except maybe the first year [the law was in effect] under [George H.W. Bush], they have used national security reasons or national economic concerns to prevent the full raise from taking effect,” Kauffman said.
Duncan H

Mitt Romney's Problem Speaking About Money - NYTimes.com - 0 views

  • Why is someone who is so good at making money so bad at talking about it? Mitt Romney is not the first presidential candidate who’s had trouble communicating with working-class voters: John Kerry famously enjoyed wind-surfing, and George Bush blamed a poor showing in a straw poll on the fact that many of his supporters were “at their daughter’s coming out party.” Veritable battalions of Kennedys and Roosevelts have dealt with the economic and cultural gaps between themselves and the voters over the years without much difficulty. Not so Barack Obama, whose attempt to commiserate with Iowa farmers in 2007 about crop prices by mentioning the cost of arugula at Whole Foods fell flat.
  • Romney’s reference last week to the fact that his wife “drives a couple of Cadillacs, actually” is not grounds in itself for a voter to oppose his candidacy. Neither was the $10,000 bet he offered to Rick Perry during a debate in December or the time he told a group of the unemployed in Florida that he was “also unemployed.” But his penchant for awkward references to his own wealth has underscored the suspicion that many voters have about his ability to understand their economic problems. His opponents in both parties are gleefully highlighting these moments as a way to drive a wedge between Romney and the working class voters who have become an increasingly important part of the Republican Party base.
  • The current economic circumstances have undoubtedly exacerbated the problem for Romney. Had Obama initially sought the presidency during a primary season dominated by concerns about the domestic economy rather than war in Iraq, his explanation that small town voters “get bitter, they cling to guns or religion or antipathy to people who aren’t like them” might have created an opportunity for Hillary Clinton or even the populist message of John Edwards.
  • ...7 more annotations...
  • But Obama’s early opposition to the Iraq war gave him a political firewall that protected him throughout that primary campaign, while Romney has no such policy safe harbor to safeguard him from an intramural backlash.
  • Romney and Obama share a lack of natural affinity for this key group of swing voters, but it is Romney who needs to figure out some way of addressing this shortcoming if he wants to make it to the White House. It’s Romney’s misfortune that the voters’ prioritization of economic issues, his own privileged upbringing and his lack of connection with his party’s base on other core issues put him in a much more precarious position than candidate Obama ever reached.
  • By the time the 2008 general election rolled around, Obama had bolstered his outreach to these voters by recruiting the blue-collar avatar Joe Biden as his running mate. Should Romney win the Republican nomination this year, his advisers will almost certainly be tempted by the working-class credentials that a proletarian like New Jersey Governor Chris Christie or Florida Senator Marco Rubio would bring to the ticket.
  • Of more immediate concern to Team Romney should be how their candidate can overcome his habit of economic tone-deafness before Rick Santorum steals away enough working-class and culturally conservative voters to throw the Republican primary into complete and utter turmoil.
  • The curious thing about Romney’s verbal missteps is how limited they are to this very specific area of public policy. He is usually quite articulate when talking about foreign affairs and national security. Despite his complicated history on social and cultural matters like health care and abortion, his explanations are usually both coherent and comprehensible, even to those who oppose his positions. It’s only when he begins talking about economic issues – his biographical strength – that he seems to get clumsy.
  • The second possibility would be for him to outline a series of proposals specifically targeted at the needs of working-class and poor Americans, not only to control the damage from his gaffes but also to underscore the conservative premise that a right-leaning agenda will create opportunities for those on the lower rungs of the economic ladder. But while that approach might help Romney in a broader philosophical conversation, it’s unlikely to offer him much protection from the attacks and ridicule that his unforced errors will continue to bring him.
  • The question is why Romney hasn’t embraced a third alternative – admitting the obvious and then explaining why he gets so tongue-tied when the conversation turns to money. Romney’s upbringing and religious faith suggest a sense of obligation to the less fortunate and an unspoken understanding that it isn’t appropriate to call attention to one’s financial success. It wouldn’t be that hard for him to say something like: “I was taught not to brag and boast and think I’m better than other people because of the successes I’ve had, so occasionally I’m going to say things that sound awkward. It’s because I’d rather talk about what it takes to get America back to work.”
  • Do you think the solution Douthat proposes would work?
Javier E

Kung Fu for Philosophers - NYTimes.com - 0 views

  • any ability resulting from practice and cultivation could accurately be said to embody kung fu.
  • the predominant orientation of traditional Chinese philosophy is the concern about how to live one’s life, rather than finding out the truth about reality.
  • Confucius’s call for “rectification of names” — one must use words appropriately — is more a kung fu method for securing sociopolitical order than for capturing the essence of things, as “names,” or words, are placeholders for expectations of how the bearer of the names should behave and be treated. This points to a realization of what J. L. Austin calls the “performative” function of language.
  • ...12 more annotations...
  • Instead of leading to a search for certainty, as Descartes’s dream did, Zhuangzi came to the realization that he had perceived “the transformation of things,” indicating that one should go along with this transformation rather than trying in vain to search for what is real.
  • the views of Mencius and his later opponent Xunzi about human nature are more recommendations of how one should view oneself in order to become a better person than metaphysical assertions about whether humans are by nature good or bad. Though the two men’s assertions about human nature are incompatible, they may still function inside the Confucian tradition as alternative ways of cultivation.
  • The Buddhist doctrine of no-self surely looks metaphysical, but its real aim is to free one from suffering, since according to Buddhism suffering comes ultimately from attachment to the self. Buddhist meditations are kung fu practices to shake off one’s attachment, and not just intellectual inquiries for getting propositional truth.
  • The essence of kung fu — various arts and instructions about how to cultivate the person and conduct one’s life — is often hard to digest for those who are used to the flavor and texture of mainstream Western philosophy. It is understandable that, even after sincere willingness to try, one is often still turned away by the lack of clear definitions of key terms and the absence of linear arguments in classic Chinese texts. This, however, is not a weakness, but rather a requirement of the kung fu orientation — not unlike the way that learning how to swim requires one to focus on practice and not on conceptual understanding.
  • It even expands epistemology into the non-conceptual realm in which the accessibility of knowledge is dependent on the cultivation of cognitive abilities, and not simply on whatever is “publicly observable” to everyone. It also shows that cultivation of the person is not confined to “knowing how.” An exemplary person may well have the great charisma to affect others but does not necessarily know how to affect others.
  • Western philosophy at its origin is similar to classic Chinese philosophy. The significance of this point is not merely in revealing historical facts. It calls our attention to a dimension that has been eclipsed by the obsession with the search for eternal, universal truth and the way it is practiced, namely through rational arguments.
  • One might well consider the Chinese kung fu perspective a form of pragmatism.  The proximity between the two is probably why the latter was well received in China early last century when John Dewey toured the country. What the kung fu perspective adds to the pragmatic approach, however, is its clear emphasis on the cultivation and transformation of the person, a dimension that is already in Dewey and William James but that often gets neglected
  • A kung fu master does not simply make good choices and use effective instruments to satisfy whatever preferences a person happens to have. In fact the subject is never simply accepted as a given. While an efficacious action may be the result of a sound rational decision, a good action that demonstrates kung fu has to be rooted in the entire person, including one’s bodily dispositions and sentiments, and its goodness is displayed not only through its consequences but also in the artistic style one does it. It also brings forward what Charles Taylor calls the “background” — elements such as tradition and community — in our understanding of the formation of a person’s beliefs and attitudes. Through the kung fu approach, classic Chinese philosophy displays a holistic vision that brings together these marginalized dimensions and thereby forces one to pay close attention to the ways they affect each other.
  • This kung fu approach shares a lot of insights with the Aristotelian virtue ethics, which focuses on the cultivation of the agent instead of on the formulation of rules of conduct. Yet unlike Aristotelian ethics, the kung fu approach to ethics does not rely on any metaphysics for justification.
  • This approach opens up the possibility of allowing multiple competing visions of excellence, including the metaphysics or religious beliefs by which they are understood and guided, and justification of these beliefs is then left to the concrete human experiences.
  • it is more appropriate to consider kung fu as a form of art. Art is not ultimately measured by its dominance of the market. In addition, the function of art is not accurate reflection of the real world; its expression is not constrained to the form of universal principles and logical reasoning, and it requires cultivation of the artist, embodiment of virtues/virtuosities, and imagination and creativity.
  • If philosophy is “a way of life,” as Pierre Hadot puts it, the kung fu approach suggests that we take philosophy as the pursuit of the art of living well, and not just as a narrowly defined rational way of life.
Javier E

Raymond Tallis Takes Out the 'Neurotrash' - The Chronicle Review - The Chronicle of Hig... - 0 views

  • Tallis informs 60 people gathered in a Kent lecture hall that his talk will demolish two "pillars of unwisdom." The first, "neuromania," is the notion that to understand people you must peer into the "intracranial darkness" of their skulls with brain-scanning technology. The second, "Darwinitis," is the idea that Charles Darwin's evolutionary theory can explain not just the origin of the human species—a claim Tallis enthusiastically accepts—but also the nature of human behavior and institutions.
  • Aping Mankind argues that neuroscientific approaches to things like love, wisdom, and beauty are flawed because you can't reduce the mind to brain activity alone.
  • Stephen Cave, a Berlin-based philosopher and writer who has called Aping Mankind "an important work," points out that most philosophers and scientists do in fact believe "that mind is just the product of certain brain activity, even if we do not currently know quite how." Tallis "does both the reader and these thinkers an injustice" by declaring that view "obviously" wrong,
  • ...5 more annotations...
  • Geraint Rees, director of University College London's Institute of Cognitive Neuroscience, complains that reading Tallis is "a bit like trying to nail jelly to the wall." He "rubbishes every current theory of the relationship between mind and brain, whether philosophical or neuroscientific," while offering "little or no alternative,"
  • cultural memes. The Darwinesque concept originates in Dawkins's 1976 book, The Selfish Gene. Memes are analogous to genes, Dennett has said, "replicating units of culture" that spread from mind to mind like a virus. Religion, chess, songs, clothing, tolerance for free speech—all have been described as memes. Tallis considers it absurd to talk of a noun-phrase like "tolerance for free speech" as a discrete entity. But Dennett argues that Tallis's objections are based on "a simplistic idea of what one might mean by a unit." Memes aren't units? Well, in that spirit, says Dennett, organisms aren't units of biology, nor are species—they're too complex, with too much variation. "He's got to allow theory to talk about entities which are not simple building blocks," Dennett says.
  • How is it that he perceives the glass of water on the table? How is it that he feels a sense of self over time? How is it that he can remember a patient he saw in 1973, and then cast his mind forward to his impending visit to the zoo? There are serious problems with trying to reduce such things to impulses in the brain, he argues. We can explain "how the light gets in," he says, but not "how the gaze looks out." And isn't it astonishing, he adds, that much neural activity seems to have no link to consciousness? Instead, it's associated with things like controlling automatic movements and regulating blood pressure. Sure, we need the brain for consciousness: "Chop my head off, and my IQ descends." But it's not the whole story. There is more to perceptions, memories, and beliefs than neural impulses can explain. The human sphere encompasses a "community of minds," Tallis has written, "woven out of a trillion cognitive handshakes of shared attention, within which our freedom operates and our narrated lives are led." Those views on perception and memory anchor his attack on "neurobollocks." Because if you can't get the basics right, he says, then it's premature to look to neuroscience for clues to complex things like love.
  • Yes, many unanswered questions persist. But these are early days, and neuroscience remains immature, says Churchland, a professor emerita of philosophy at University of California at San Diego and author of the subfield-spawning 1986 book Neurophilosophy. In the 19th century, she points out, people thought we'd never understand light. "Well, by gosh," she says, "by the time the 20th century rolls around, it turns out that light is electromagnetic radiation. ... So the fact that at a certain point in time something seems like a smooth-walled mystery that we can't get a grip on, doesn't tell us anything about whether some real smart graduate student is going to sort it out in the next 10 years or not."
  • Dennett claims he's got much of it sorted out already. He wrote a landmark book on the topic in 1991, Consciousness Explained. (The title "should have landed him in court, charged with breach of the Trade Descriptions Act," writes Tallis.) Dennett uses the vocabulary of computer science to explain how consciousness emerges from the huge volume of things happening in the brain all at once. We're not aware of everything, he tells me, only a "limited window." He describes that stream of consciousness as "the activities of a virtual machine which is running on the parallel hardware of the brain." "You—the fruits of all your experience, not just your genetic background, but everything you've learned and done and all your memories—what ties those all together? What makes a self?" Dennett asks. "The answer is, and has to be, the self is like a software program that organizes the activities of the brain."
Javier E

The Fall of Facebook - The Atlantic - 0 views

  • When a research company looked at how people use their phones, it found that they spend more time on Facebook than they do browsing the entire rest of the Web.
  • Digital-media companies have grown reliant on Facebook’s powerful distribution capabilities.
  • this weakens the basic idea of a publication. The media bundles known as magazines and newspapers were built around letting advertisers reach an audience. But now virtually all of the audiences are in the same place, and media entities and advertisers alike know how to target them: they go to Facebook, select some options from a drop-down menu—18-to-24-year-old men in Maryland who are college-football fans—and their ads materialize in the feeds of that demographic.
  • ...9 more annotations...
  • when Google was the dominant distribution force on the Web, that fact was reflected in the kinds of content media companies produced—fact-filled, keyword-stuffed posts that Google’s software seemed to prefer.
  • while, once upon a time, everyone with a TV and an antenna could see “what was on,” Facebook news feeds are personalized, so no one outside the company actually knows what anyone else is seeing. This opacity would have been impossible to imagine in previous eras.
  • it is the most powerful information gatekeeper the world has ever known. It is only slightly hyperbolic to say that Facebook is like all the broadcast-television networks put together.
  • Facebook is different, though. It measures what is “engaging”—what you (and people you resemble, according to its databases) like, comment on, and share. Then it shows you more things related to that.
  • Facebook has built a self-perpetuating optimization machine. It’s as if every time you turned on the TV, your cable box ranked every episode of every show just for you. Or when you went to a bar, only the people you’d been hanging out with regularly showed up
  • It’s all enough to make you wonder whether Facebook, unlike AOL or MySpace, really might be forever
  • “In three years of research and talking to hundreds of people and everyday users, I don’t think I heard anyone say once, ‘I love Facebook.’”
  • The software’s primary attributes—its omniscience, its solicitousness—all too easily provoke claustrophobia.
  • users are spreading themselves around, maintaining Facebook as their social spine, but investing in and loving a wide variety of other social apps. None of them seems likely to supplant Facebook on its own, but taken together, they form a pretty decent network of networks, a dispersed alternative to Facebook life.
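The "self-perpetuating optimization machine" described above — rank what a user engaged with, show more of it, harvest more engagement — can be sketched minimally. This is an illustrative toy, not Facebook's actual system: the topic field, the interaction weights, and the function names are all invented for the example.

```python
from collections import defaultdict

# Hypothetical interaction weights: a share is treated as a stronger
# signal of interest than a like. These numbers are illustrative only.
WEIGHTS = {"like": 1.0, "comment": 2.0, "share": 3.0}

def record_engagement(affinity, post, weight):
    """Feed an interaction back into the model: engaging with a post
    raises the score of everything on the same topic."""
    affinity[post["topic"]] += weight

def rank_feed(posts, affinity):
    """Order candidate posts by the user's accumulated topic affinity."""
    return sorted(posts, key=lambda p: affinity[p["topic"]], reverse=True)

affinity = defaultdict(float)
posts = [{"id": 1, "topic": "sports"}, {"id": 2, "topic": "news"}]

# The user shares one sports post...
record_engagement(affinity, posts[0], WEIGHTS["share"])
feed = rank_feed(posts, affinity)
# ...and sports content now outranks news in their feed, which invites
# further sports engagement: the feedback loop the article describes.
```

The point of the sketch is the loop itself: `rank_feed` reads the scores that `record_engagement` writes, so each session's output biases the next session's input.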
kushnerha

A Placebo Treatment for Pain - The New York Times - 0 views

  • This phenomenon — in which someone feels better after receiving fake treatment — was once dismissed as an illusion. People who are ill often improve regardless of the treatment they receive. But neuroscientists are discovering that in some conditions, including pain, placebos create biological effects similar to those caused by drugs.
  • a key ingredient is expectation: The greater our belief that a treatment will work, the better we’ll respond.
  • Placebo effects in pain are so large, in fact, that drug manufacturers are finding it hard to beat them. Finding ways to minimize placebo effects in trials, for example by screening out those who are most susceptible, is now a big focus for research. But what if instead we seek to harness these effects? Placebos might ruin drug trials, but they also show us a new approach to treating pain.
  • ...9 more annotations...
  • It is unethical to deceive patients by prescribing fake treatments, of course. But there is evidence that people with some conditions benefit even if they know they are taking placebos. In a 2014 study that followed 459 migraine attacks in 66 patients, honestly labeled placebos provided significantly more pain relief than no treatment, and were nearly half as effective as the painkiller Maxalt.
  • With placebo responses in pain so high — and the risks of drugs so severe — why not prescribe a course of “honest” placebos for those who wish to try it, before proceeding, if necessary, to an active drug?
  • Another option is to employ alternative therapies, which through placebo responses can benefit patients even when there is no physical mode of action.
  • Taking a placebo painkiller dampens activity in pain-related areas of the brain and spinal cord, and triggers the release of endorphins, the natural pain-relieving chemicals that opioid drugs are designed to mimic. Even when we take a real painkiller, a big chunk of its effect is delivered not by any direct chemical action, but by our expectation that the drug will work. Studies show that widely used painkillers like morphine, buprenorphine and tramadol are markedly less effective if we don’t know we’re taking them.
  • Individual attitudes and experiences are important, as are cultural factors. Placebo effects are getting stronger in the United States, for example, though not elsewhere.
  • Likely explanations include a growing cultural belief in the effectiveness of painkillers — a result of direct-to-consumer advertising (illegal in most other countries) and perhaps the fact that so many Americans have taken these drugs in the past.
  • Trials show, for example, that strengthening patients’ positive expectations and reducing their anxiety during a variety of procedures, including minimally invasive surgery, while still being honest, can reduce the dose of painkillers required and cut complications.
  • Placebo studies also reveal the value of social interaction as a treatment for pain. Harvard researchers studied patients in pain from irritable bowel syndrome and found that 44 percent of those given sham acupuncture had adequate relief from their symptoms. If the person who performed the acupuncture was extra supportive and empathetic, however, that figure jumped to 62 percent.
  • Placebos tell us that pain is a complex mix of biological, psychological and social factors. We need to develop better drugs to treat it, but let’s also take more seriously the idea of relieving pain without them.
Javier E

Why Silicon Valley can't fix itself | News | The Guardian - 1 views

  • After decades of rarely apologising for anything, Silicon Valley suddenly seems to be apologising for everything. They are sorry about the trolls. They are sorry about the bots. They are sorry about the fake news and the Russians, and the cartoons that are terrifying your kids on YouTube. But they are especially sorry about our brains.
  • Sean Parker, the former president of Facebook – who was played by Justin Timberlake in The Social Network – has publicly lamented the “unintended consequences” of the platform he helped create: “God only knows what it’s doing to our children’s brains.”
  • Parker, Rosenstein and the other insiders now talking about the harms of smartphones and social media belong to an informal yet influential current of tech critics emerging within Silicon Valley. You could call them the “tech humanists”. Amid rising public concern about the power of the industry, they argue that the primary problem with its products is that they threaten our health and our humanity.
  • ...52 more annotations...
  • It is clear that these products are designed to be maximally addictive, in order to harvest as much of our attention as they can. Tech humanists say this business model is both unhealthy and inhumane – that it damages our psychological well-being and conditions us to behave in ways that diminish our humanity
  • The main solution that they propose is better design. By redesigning technology to be less addictive and less manipulative, they believe we can make it healthier – we can realign technology with our humanity and build products that don’t “hijack” our minds.
  • its most prominent spokesman is executive director Tristan Harris, a former “design ethicist” at Google who has been hailed by the Atlantic magazine as “the closest thing Silicon Valley has to a conscience”. Harris has spent years trying to persuade the industry of the dangers of tech addiction.
  • In February, Pierre Omidyar, the billionaire founder of eBay, launched a related initiative: the Tech and Society Solutions Lab, which aims to “maximise the tech industry’s contributions to a healthy society”.
  • the tech humanists are making a bid to become tech’s loyal opposition. They are using their insider credentials to promote a particular diagnosis of where tech went wrong and of how to get it back on track
  • The real reason tech humanism matters is because some of the most powerful people in the industry are starting to speak its idiom. Snap CEO Evan Spiegel has warned about social media’s role in encouraging “mindless scrambles for friends or unworthy distractions”,
  • In short, the effort to humanise computing produced the very situation that the tech humanists now consider dehumanising: a wilderness of screens where digital devices chase every last instant of our attention.
  • After years of ignoring their critics, industry leaders are finally acknowledging that problems exist. Tech humanists deserve credit for drawing attention to one of those problems – the manipulative design decisions made by Silicon Valley.
  • these decisions are only symptoms of a larger issue: the fact that the digital infrastructures that increasingly shape our personal, social and civic lives are owned and controlled by a few billionaires
  • Because it ignores the question of power, the tech-humanist diagnosis is incomplete – and could even help the industry evade meaningful reform
  • Taken up by leaders such as Zuckerberg, tech humanism is likely to result in only superficial changes
  • they will not address the origin of that anger. If anything, they will make Silicon Valley even more powerful.
  • To the litany of problems caused by “technology that extracts attention and erodes society”, the text asserts that “humane design is the solution”. Drawing on the rhetoric of the “design thinking” philosophy that has long suffused Silicon Valley, the website explains that humane design “starts by understanding our most vulnerable human instincts so we can design compassionately”
  • this language is not foreign to Silicon Valley. On the contrary, “humanising” technology has long been its central ambition and the source of its power. It was precisely by developing a “humanised” form of computing that entrepreneurs such as Steve Jobs brought computing into millions of users’ everyday lives
  • Facebook had a new priority: maximising “time well spent” on the platform, rather than total time spent. By “time well spent”, Zuckerberg means time spent interacting with “friends” rather than businesses, brands or media sources. He said the News Feed algorithm was already prioritising these “more meaningful” activities.
  • Tech humanists say they want to align humanity and technology. But this project is based on a deep misunderstanding of the relationship between humanity and technology: namely, the fantasy that these two entities could ever exist in separation.
  • They believe we can use better design to make technology serve human nature rather than exploit and corrupt it. But this idea is drawn from the same tradition that created the world that tech humanists believe is distracting and damaging us.
  • The story of our species began when we began to make tools
  • All of which is to say: humanity and technology are not only entangled, they constantly change together.
  • This is not just a metaphor. Recent research suggests that the human hand evolved to manipulate the stone tools that our ancestors used
  • The ways our bodies and brains change in conjunction with the tools we make have long inspired anxieties that “we” are losing some essential qualities
  • Yet as we lose certain capacities, we gain new ones.
  • The nature of human nature is that it changes. It can not, therefore, serve as a stable basis for evaluating the impact of technology
  • Yet the assumption that it doesn’t change serves a useful purpose. Treating human nature as something static, pure and essential elevates the speaker into a position of power. Claiming to tell us who we are, they tell us how we should be.
  • Messaging, for instance, is considered the strongest signal. It’s reasonable to assume that you’re closer to somebody you exchange messages with than somebody whose post you once liked.
  • Harris and his fellow tech humanists also frequently invoke the language of public health. The Center for Humane Technology’s Roger McNamee has gone so far as to call public health “the root of the whole thing”, and Harris has compared using Snapchat to smoking cigarettes
  • The public-health framing casts the tech humanists in a paternalistic role. Resolving a public health crisis requires public health expertise. It also precludes the possibility of democratic debate. You don’t put the question of how to treat a disease up for a vote – you call a doctor.
  • They also remain confined to the personal level, aiming to redesign how the individual user interacts with technology rather than tackling the industry’s structural failures. Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit.
  • This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.
  • Far from challenging Silicon Valley, tech humanism offers Silicon Valley a useful way to pacify public concerns without surrendering any of its enormous wealth and power.
  • these principles could make Facebook even more profitable and powerful, by opening up new business opportunities. That seems to be exactly what Facebook has planned.
  • reported that total time spent on the platform had dropped by around 5%, or about 50m hours per day. But, Zuckerberg said, this was by design: in particular, it was in response to tweaks to the News Feed that prioritised “meaningful” interactions with “friends” rather than consuming “public content” like video and news. This would ensure that “Facebook isn’t just fun, but also good for people’s well-being”
  • Zuckerberg said he expected those changes would continue to decrease total time spent – but “the time you do spend on Facebook will be more valuable”. This may describe what users find valuable – but it also refers to what Facebook finds valuable
  • not all data is created equal. One of the most valuable sources of data to Facebook is used to inform a metric called “coefficient”. This measures the strength of a connection between two users – Zuckerberg once called it “an index for each relationship”
  • Facebook records every interaction you have with another user – from liking a friend’s post or viewing their profile, to sending them a message. These activities provide Facebook with a sense of how close you are to another person, and different activities are weighted differently.
  • Holding humanity and technology separate clears the way for a small group of humans to determine the proper alignment between them
  • Why is coefficient so valuable? Because Facebook uses it to create a Facebook they think you will like: it guides algorithmic decisions about what content you see and the order in which you see it. It also helps improve ad targeting, by showing you ads for things liked by friends with whom you often interact
  • emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform.
  • “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics
  • industrialists had to find ways to make the time of the worker more valuable – to extract more money from each moment rather than adding more moments. They did this by making industrial production more efficient: developing new technologies and techniques that squeezed more value out of the worker and stretched that value further than ever before.
  • there is another way of thinking about how to live with technology – one that is both truer to the history of our species and useful for building a more democratic future. This tradition does not address “humanity” in the abstract, but as distinct human beings, whose capacities are shaped by the tools they use.
  • It sees us as hybrids of animal and machine – as “cyborgs”, to quote the biologist and philosopher of science Donna Haraway.
  • The cyborg way of thinking, by contrast, tells us that our species is essentially technological. We change as we change our tools, and our tools change us. But even though our continuous co-evolution with our machines is inevitable, the way it unfolds is not. Rather, it is determined by who owns and runs those machines. It is a question of power
  • The various scandals that have stoked the tech backlash all share a single source. Surveillance, fake news and the miserable working conditions in Amazon’s warehouses are profitable. If they were not, they would not exist. They are symptoms of a profound democratic deficit inflicted by a system that prioritises the wealth of the few over the needs and desires of the many.
  • If being technological is a feature of being human, then the power to shape how we live with technology should be a fundamental human right
  • The decisions that most affect our technological lives are far too important to be left to Mark Zuckerberg, rich investors or a handful of “humane designers”. They should be made by everyone, together.
  • Rather than trying to humanise technology, then, we should be trying to democratise it. We should be demanding that society as a whole gets to decide how we live with technology
  • What does this mean in practice? First, it requires limiting and eroding Silicon Valley’s power.
  • Antitrust laws and tax policy offer useful ways to claw back the fortunes Big Tech has built on common resources
  • democratic governments should be making rules about how those firms are allowed to behave – rules that restrict how they can collect and use our personal data, for instance, like the General Data Protection Regulation
  • This means developing publicly and co-operatively owned alternatives that empower workers, users and citizens to determine how they are run.
  • we might demand that tech firms pay for the privilege of extracting our data, so that we can collectively benefit from a resource we collectively create.
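The “coefficient” idea described in these highlights (each interaction type weighted differently and summed into a per-relationship score) can be sketched in a few lines. The interaction names and weights below are illustrative assumptions, not Facebook’s actual values:

```python
# Illustrative sketch of a "coefficient"-style relationship score: each
# interaction type carries a different weight (values assumed, not
# Facebook's real ones), and a user pair's score is the weighted sum.
from collections import Counter

# Hypothetical weights: messaging is the strongest signal, a like the weakest.
WEIGHTS = {"message": 5.0, "profile_view": 2.0, "comment": 1.5, "like": 0.5}

def coefficient(interactions: Counter) -> float:
    """Weighted sum of interaction counts between two users."""
    return sum(WEIGHTS.get(kind, 0.0) * n for kind, n in interactions.items())

close_friend = Counter({"message": 10, "like": 3})
acquaintance = Counter({"like": 8})

print(coefficient(close_friend))  # 51.5
print(coefficient(acquaintance))  # 4.0
```

Giving messaging the largest weight mirrors the article’s claim that it is the strongest signal of closeness between two users.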
Javier E

Who Decides What's Racist? - Persuasion - 1 views

  • The implication of Hannah-Jones’s tweet and candidate Biden’s quip seems to be that you can have African ancestry, dark skin, textured hair, and perhaps even some “culturally black” traits regarding tastes in food, music, and ways of moving through the world. But unless you hold the “correct” political beliefs and values, you are not authentically black.
  • In a now-deleted tweet from May 22, 2020, Nikole Hannah-Jones, a Pulitzer Prize-winning reporter for The New York Times, opined, “There is a difference between being politically black and being racially black.”
  • Shelly Eversley’s The Real Negro suggests that in the latter half of the 20th century, the criteria of what constitutes “authentic” black experience moved from perceptible outward signs, like the fact of being restricted to segregated public spaces and speaking in a “black” dialect, to psychological, interior signs. In this new understanding, Eversley writes, “the ‘truth’ about race is felt, not performed, not seen.”
  • This insight goes a long way to explaining the current fetishization of experience, especially if it is (redundantly) “lived.” Black people from all walks of life find themselves deferred to by non-blacks
  • black people certainly don’t all “feel” or “experience” the same things. Nor do they all "experience" the same event in an identical way. Finally, even when their experiences are similar, they don’t all think about or interpret their experiences in the same way.
  • we must begin to attend in a serious way to heterodox black voices
  • This need is especially urgent given the ideological homogeneity of the “antiracist” outlook and efforts of elite institutions, including media, corporations, and an overwhelmingly progressive academia. For the arbiters of what it means to be black that dominate these institutions, there is a fairly narrowly prescribed “authentic” black narrative, black perspective, and black position on every issue that matters.
  • When we hear the demand to “listen to black voices,” what is usually meant is “listen to the right black voices.”
  • Many non-black people have heard a certain construction of “the black voice” so often that they are perplexed by black people who don’t fit the familiar model.
  • Similarly, many activists are not in fact “pro-black”: they are pro a rather specific conception of “blackness” that is not necessarily endorsed by all black people.
  • This is where our new website, Free Black Thought (FBT), seeks to intervene in the national conversation. FBT honors black individuals for their distinctive, diverse, and heterodox perspectives, and offers up for all to hear a polyphony, perhaps even a cacophony, of different and differing black voices.
  • The practical effects of the new antiracism are everywhere to be seen, but in few places more clearly than in our children’s schools
  • one might reasonably question what could be wrong with teaching children “antiracist” precepts. But the details here are full of devils.
  • To take an example that could affect millions of students, the state of California has adopted a statewide Ethnic Studies Model Curriculum (ESMC) that reflects “antiracist” ideas. The ESMC’s content inadvertently confirms that contemporary antiracism is often not so much an extension of the civil rights movement but in certain respects a tacit abandonment of its ideals.
  • It has thus been condemned as a “perversion of history” by Dr. Clarence Jones, MLK’s legal counsel, advisor, speechwriter, and Scholar in Residence at the Martin Luther King, Jr. Institute at Stanford University:
  • Essentialist thinking about race has also gained ground in some schools. For example, in one elite school, students “are pressured to conform their opinions to those broadly associated with their race and gender and to minimize or dismiss individual experiences that don’t match those assumptions.” These students report feeling that “they must never challenge any of the premises of [the school’s] ‘antiracist’ teachings.”
  • In contrast, the non-white students were taught that they were “folx (sic) who do not benefit from their social identities,” and “have little to no privilege and power.”
  • The children with “white” in their identity map were taught that they were part of the “dominant culture” which has been “created and maintained…to hold power and stay in power.” They were also taught that they had “privilege” and that “those with privilege have power over others.
  • Or consider the third-grade students at R.I. Meyerholz Elementary School in Cupertino, California
  • Or take New York City’s public school system, one of the largest educators of non-white children in America. In an effort to root out “implicit bias,” former Schools Chancellor Richard Carranza had his administrators trained in the dangers of “white supremacy culture.”
  • A slide from a training presentation listed “perfectionism,” “individualism,” “objectivity” and “worship of the written word” as white supremacist cultural traits to be “dismantled,”
  • Finally, some schools are adopting antiracist ideas of the sort espoused by Ibram X. Kendi, according to whom, if metrics such as tests and grades reveal disparities in achievement, the project of measuring achievement must itself be racist.
  • Parents are justifiably worried about such innovations. What black parent wants her child to hear that grading or math are “racist” as a substitute for objective assessment and real learning? What black parent wants her child told she shouldn’t worry about working hard, thinking objectively, or taking a deep interest in reading and writing because these things are not authentically black?
  • Clearly, our children’s prospects for success depend on the public being able to have an honest and free-ranging discussion about this new antiracism and its utilization in schools. Even if some black people have adopted its tenets, many more, perhaps most, hold complex perspectives that draw from a constellation of rather different ideologies.
  • So let’s listen to what some heterodox black people have to say about the new antiracism in our schools.
  • Coleman Hughes, a fellow at the Manhattan Institute, points to a self-defeating feature of Kendi-inspired grading and testing reforms: If we reject high academic standards for black children, they are unlikely to rise to “those same rejected standards” and racial disparity is unlikely to decrease
  • Chloé Valdary, the founder of Theory of Enchantment, worries that antiracism may “reinforce a shallow dogma of racial essentialism by describing black and white people in generalizing ways” and discourage “fellowship among peers of different races.”
  • We hope it’s obvious that the point we’re trying to make is not that everyone should accept uncritically everything these heterodox black thinkers say. Our point in composing this essay is that we all desperately need to hear what these thinkers say so we can have a genuine conversation
  • We promote no particular politics or agenda beyond a desire to offer a wide range of alternatives to the predictable fare emanating from elite mainstream outlets. At FBT, Marxists rub shoulders with laissez-faire libertarians. We have no desire to adjudicate who is “authentically black” or whom to prefer.
caelengrubb

How to read the news like a scientist | - 0 views

  • “In present times, our risk of being fooled is especially high,” she says. There are two main factors at play: “Disinformation spreads like wildfire in social media,” she adds, “and when it comes to news reporting, sometimes it is more important for journalists to be fast than accurate.”
  • Scientists labor under a burden of proof. They must conduct experiments and collect data under controlled conditions to arrive at their conclusions — and be ready to defend their findings with facts, not emotions.
  • 1. Cultivate your skepticism.
  • When you learn a new piece of information through social media, think to yourself: “This may be true, but it also may be false,”
  • 2. Find out who is making the claim.
  • When you encounter a new claim, look for conflicts of interest. Ask: Do they stand to profit from what they say? Are they affiliated with an organization that could be swaying them? Two other questions to consider: What makes the writer or speaker qualified to comment on the topic? What statements have they made in the past?
  • 3. Watch out for the halo effect.
  • The halo effect, says Frans, “is a cognitive bias that makes our feeling towards someone affect how we judge their claims.
  • If we dislike someone, we are a lot more likely to disagree with them; if we like them, we are biased to agree.”
  • New scientific papers under review are read “blind,” with the authors’ names removed. That way, the experts who are deciding whether it’s worthy of publication don’t know which of their fellow scientists wrote it so they’ll be able to react free from pre-judgement or bias.
  • 4. Look at the evidence.
  • Before you act on or share a particularly surprising or enraging story, do a quick Google search — you might learn something even more interesting.
  • 5. Beware of the tendency to cherry-pick information.
  • Another human bias — confirmation bias — means we’re more likely to notice stories or facts that fit what we already believe (or want to believe).
  • When you search for information, you should not disregard the information that goes against whatever opinion you might have in advance.”
  • In your own life, look for friends and acquaintances on social media with alternative viewpoints. You don’t have to agree with them, or tolerate misinformation from them — but it’s healthy and balanced to have some variety in your information diet.
  • 6. Recognize the difference between correlation and causation.
  • However, she says, “there is no evidence supporting these claims, and it’s important to remember that just because two things increase simultaneously, this does not mean that they are causally linked to each other. Correlation does not equal causality.”
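The correlation-versus-causation point in step 6 is easy to demonstrate in code: two quantities that both merely trend upward over time show a near-perfect Pearson correlation despite having no causal link. The data below are invented purely for illustration:

```python
# Two unrelated quantities that both trend upward over time will show a
# high Pearson correlation even though neither causes the other.
# The series here are made up purely for illustration.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

years = range(2000, 2010)
ice_cream_sales = [100 + 10 * (y - 2000) for y in years]          # grows 10/year
smartphone_users = [5 + 7 * (y - 2000) + (y % 3) for y in years]  # also grows

r = pearson(ice_cream_sales, smartphone_users)
print(round(r, 3))  # near 1.0, yet neither series causes the other
```

A correlation this strong proves nothing about causation; both series simply share a common driver (the passage of time).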
tongoscar

Trump's Iran strike could present an opportunity to China - CNN - 0 views

shared by tongoscar on 20 Jan 20
  • Last June, however, world leaders flocked to the capital of Kyrgyzstan for a meeting of the Shanghai Cooperation Organization, a key regional security and political alliance. Attendees included Russian President Vladimir Putin and Chinese leader Xi Jinping, as well as Iranian President Hassan Rouhani, with whom they posed alongside in photos from the event. It was a pertinent reminder of Tehran's strong ties with two of the world's foremost powers, further underlined when the three countries held joint naval exercises near the strategically vital Strait of Hormuz in the Indian Ocean last month.
  • A statement added that Tehran hoped China could "play an important role in preventing escalation of regional tensions."Such sentiments are also likely shared well beyond Iran's borders, including among other Middle Eastern powers which are no fans of Tehran. The killing of Soleimani could present Beijing with a major opportunity, not only to prevent another disastrous war, but to increase its influence in the region, supplanting an increasingly unpredictable Washington.
  • "China's emphasis on noninterference, state-led economic development, and regional stability resonates with many autocratic leaders in the Middle East, allowing China to promote its 'alternative' model of great power leadership."
  • So far -- in no small part thanks to its humungous checkbook -- China has managed to thread the needle of maintaining ties with traditional allies such as Iran and Syria, while also improving relations with their rivals in Saudi Arabia, Israel and the United Arab Emirates. Beijing has also resisted strong pressure from Washington to ditch both Tehran and Damascus, using its role as a United Nations Security Council member to rein in some international action against them.
  • Tehran's enemies may frown at Beijing's refusal to ditch its old ally to make new ones, but this policy will appear far more attractive in the wake of Soleimani's death. And the distinct chance we could now be headed for another Middle Eastern conflict -- or at the very least a period of saber-rattling and disruption to global trade -- could prop up Beijing's ability to play all sides, perhaps indefinitely.
  • "China is not a revisionist state. It does not want to reshape the Middle East and take over the responsibility of securing it. It wants a predictable, stable region -- as much as that is possible -- in which it can trade and invest,"
  • Such a role will likely be welcomed by many players in the region. Indeed, it's difficult to think of a more pertinent example of the contrast between Chinese and US policy than Trump threatening -- just as Beijing was calling for calm -- to target Iranian cultural sites, in what could well be a war crime if it was carried out.
tongoscar

40 years later, the mothers of Argentina's 'disappeared' refuse to be silent | World ne... - 0 views

  • Haydée Gastelú was among the first to arrive. “We were absolutely terrified,” she recalls.
  • Four decades on and 2,037 marches later, the mothers are still marching, though some of them must now use wheelchairs.
  • “Argentina’s new government wants to erase the memory of those terrible years and is putting the brakes on the continuation of trials,” says Taty Almeida, 86, whose 20-year-old son, Alejandro, disappeared in 1975.
  • But the mothers – most of them now in their late 80s – warn that the current era of alternative facts and revisionist history poses a new kind of threat for the country.
  • “People were scared,” recalls Gastelú, now 88. “If I talked about my kidnapped son at the hairdresser or supermarket they would run away. Even listening was dangerous.
  • “Among us there are mothers who escaped from the Nazi Holocaust, only to lose their Argentinian-born children to another dictatorship – so we know for a fact that these tragedies can repeat themselves,” Gastelú says.
sanderk

4 Everyday Items Einstein Helped Create - 0 views

  • Albert Einstein is justly famous for devising his theory of relativity, which revolutionized our understanding of space, time, gravity, and the universe. Relativity also showed us that matter and energy are just two different forms of the same thing—a fact that Einstein expressed as E=mc², the most widely recognized equation in history.
  • Credit for inventing paper towels goes to the Scott Paper Company of Pennsylvania, which introduced the disposable product in 1907 as a more hygienic alternative to cloth towels. But in the very first physics article that Einstein ever published, he did analyze wicking: the phenomenon that allows paper towels to soak up liquids even when gravity wants to drag the fluid downward.
  • Again, Einstein didn’t invent solar cells; the first crude versions of them date back to 1839. But he did sketch out their basic principle of operation in 1905. His starting point was a simple analogy: If matter is lumpy—that is, if every substance in the universe consists of atoms and molecules—then surely light must be lumpy as well.
  • Einstein turned this insight into an equation that described the jittering mathematically. His Brownian motion paper is widely recognized as the first incontrovertible proof that atoms and molecules really exist—and it still serves as the basis for some stock market forecasts.
  • He was trying to explain an odd fact that was first noticed by English botanist Robert Brown in 1827. Brown looked through his microscope and saw that the dust grains in a droplet of water were jittering around aimlessly. This Brownian motion, as it was first dubbed, had nothing to do with the grains being alive, so what kept them moving?
  • If you’ve been to a conference or played with a cat, chances are you’ve seen a laser pointer in action. In the nearly six decades since physicists demonstrated the first laboratory prototype of a laser in 1960, the devices have come to occupy almost every niche imaginable, from barcode readers to systems for hair removal.
  • So Einstein made an inspired guess: Maybe photons like to march in step, so that the presence of a bunch of them going in the same direction will increase the probability of a high-energy atom emitting another photon in that direction. He called this process stimulated emission, and when he included it in his equations, his calculations fit the observations perfectly
  • A laser is just a gadget for harnessing this phenomenon
Javier E

Cognitive Biases and the Human Brain - The Atlantic - 1 views

  • Present bias shows up not just in experiments, of course, but in the real world. Especially in the United States, people egregiously undersave for retirement—even when they make enough money to not spend their whole paycheck on expenses, and even when they work for a company that will kick in additional funds to retirement plans when they contribute.
  • When people hear the word bias, many if not most will think of either racial prejudice or news organizations that slant their coverage to favor one political position over another. Present bias, by contrast, is an example of cognitive bias—the collection of faulty ways of thinking that is apparently hardwired into the human brain. The collection is large. Wikipedia’s “List of cognitive biases” contains 185 entries, from actor-observer bias (“the tendency for explanations of other individuals’ behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation … and for explanations of one’s own behaviors to do the opposite”) to the Zeigarnik effect (“uncompleted or interrupted tasks are remembered better than completed ones”)
  • If I had to single out a particular bias as the most pervasive and damaging, it would probably be confirmation bias. That’s the effect that leads us to look for evidence confirming what we already think or suspect, to view facts and ideas we encounter as further confirmation, and to discount or ignore any piece of evidence that seems to support an alternate view
  • Confirmation bias shows up most blatantly in our current political divide, where each side seems unable to allow that the other side is right about anything.
  • The whole idea of cognitive biases and faulty heuristics—the shortcuts and rules of thumb by which we make judgments and predictions—was more or less invented in the 1970s by Amos Tversky and Daniel Kahneman
  • Tversky died in 1996. Kahneman won the 2002 Nobel Prize in Economics for the work the two men did together, which he summarized in his 2011 best seller, Thinking, Fast and Slow. Another best seller, last year’s The Undoing Project, by Michael Lewis, tells the story of the sometimes contentious collaboration between Tversky and Kahneman
  • Another key figure in the field is the University of Chicago economist Richard Thaler. One of the biases he’s most linked with is the endowment effect, which leads us to place an irrationally high value on our possessions.
  • In an experiment conducted by Thaler, Kahneman, and Jack L. Knetsch, half the participants were given a mug and then asked how much they would sell it for. The average answer was $5.78. The rest of the group said they would spend, on average, $2.21 for the same mug. This flew in the face of classic economic theory, which says that at a given time and among a certain population, an item has a market value that does not depend on whether one owns it or not. Thaler won the 2017 Nobel Prize in Economics.
  • “The question that is most often asked about cognitive illusions is whether they can be overcome. The message … is not encouraging.”
  • that’s not so easy in the real world, when we’re dealing with people and situations rather than lines. “Unfortunately, this sensible procedure is least likely to be applied when it is needed most,” Kahneman writes. “We would all like to have a warning bell that rings loudly whenever we are about to make a serious error, but no such bell is available.”
  • At least with the optical illusion, our slow-thinking, analytic mind—what Kahneman calls System 2—will recognize a Müller-Lyer situation and convince itself not to trust the fast-twitch System 1’s perception
  • Kahneman and others draw an analogy based on an understanding of the Müller-Lyer illusion, two parallel lines with arrows at each end. One line’s arrows point in; the other line’s arrows point out. Because of the direction of the arrows, the latter line appears shorter than the former, but in fact the two lines are the same length.
  • Because biases appear to be so hardwired and inalterable, most of the attention paid to countering them hasn’t dealt with the problematic thoughts, judgments, or predictions themselves
  • Is it really impossible, however, to shed or significantly mitigate one’s biases? Some studies have tentatively answered that question in the affirmative.
  • what if the person undergoing the de-biasing strategies was highly motivated and self-selected? In other words, what if it was me?
  • Over an apple pastry and tea with milk, he told me, “Temperament has a lot to do with my position. You won’t find anyone more pessimistic than I am.”
  • I met with Kahneman
  • “I see the picture as unequal lines,” he said. “The goal is not to trust what I think I see. To understand that I shouldn’t believe my lying eyes.” That’s doable with the optical illusion, he said, but extremely difficult with real-world cognitive biases.
  • In this context, his pessimism relates, first, to the impossibility of effecting any changes to System 1—the quick-thinking part of our brain and the one that makes mistaken judgments tantamount to the Müller-Lyer line illusion
  • The most effective check against them, as Kahneman says, is from the outside: Others can perceive our errors more readily than we can.
  • “slow-thinking organizations,” as he puts it, can institute policies that include the monitoring of individual decisions and predictions. They can also require procedures such as checklists and “premortems,”
  • A premortem attempts to counter optimism bias by requiring team members to imagine that a project has gone very, very badly and write a sentence or two describing how that happened. Conducting this exercise, it turns out, helps people think ahead.
  • “My position is that none of these things have any effect on System 1,” Kahneman said. “You can’t improve intuition.
  • Perhaps, with very long-term training, lots of talk, and exposure to behavioral economics, what you can do is cue reasoning, so you can engage System 2 to follow rules. Unfortunately, the world doesn’t provide cues. And for most people, in the heat of argument the rules go out the window.
  • Kahneman describes an even earlier Nisbett article that showed subjects’ disinclination to believe statistical and other general evidence, basing their judgments instead on individual examples and vivid anecdotes. (This bias is known as base-rate neglect.)
  • over the years, Nisbett had come to emphasize in his research and thinking the possibility of training people to overcome or avoid a number of pitfalls, including base-rate neglect, fundamental attribution error, and the sunk-cost fallacy.
  • Nisbett’s second-favorite example is that economists, who have absorbed the lessons of the sunk-cost fallacy, routinely walk out of bad movies and leave bad restaurant meals uneaten.
  • When Nisbett asks the same question of students who have completed the statistics course, about 70 percent give the right answer. He believes this result shows, pace Kahneman, that the law of large numbers can be absorbed into System 2—and maybe into System 1 as well, even when there are minimal cues.
  • about half give the right answer: the law of large numbers, which holds that outlier results are much more frequent when the sample size (at bats, in this case) is small. Over the course of the season, as the number of at bats increases, regression to the mean is inevitable
  • When Nisbett has to give an example of his approach, he usually brings up the baseball-phenom survey. This involved telephoning University of Michigan students on the pretense of conducting a poll about sports, and asking them why there are always several Major League batters with .450 batting averages early in a season, yet no player has ever finished a season with an average that high.
  • we’ve tested Michigan students over four years, and they show a huge increase in ability to solve problems. Graduate students in psychology also show a huge gain.”
  • Nisbett said, “I know from my own research on teaching people how to reason statistically that just a few examples in two or three domains are sufficient to improve people’s reasoning for an indefinitely large number of events.”
  • Nisbett suggested another factor: “You and Amos specialized in hard problems for which you were drawn to the wrong answer. I began to study easy problems, which you guys would never get wrong but untutored people routinely do … Then you can look at the effects of instruction on such easy problems, which turn out to be huge.”
  • Nisbett suggested that I take “Mindware: Critical Thinking for the Information Age,” an online Coursera course in which he goes over what he considers the most effective de-biasing skills and concepts. Then, to see how much I had learned, I would take a survey he gives to Michigan undergraduates. So I did.
  • The course consists of eight lessons by Nisbett—who comes across on-screen as the authoritative but approachable psych professor we all would like to have had—interspersed with some graphics and quizzes. I recommend it. He explains the availability heuristic this way: “People are surprised that suicides outnumber homicides, and drownings outnumber deaths by fire. People always think crime is increasing” even if it’s not.
  • When I finished the course, Nisbett sent me the survey he and colleagues administer to Michigan undergrads
  • It contains a few dozen problems meant to measure the subjects’ resistance to cognitive biases
  • I got it right. Indeed, when I emailed my completed test, Nisbett replied, “My guess is that very few if any UM seniors did as well as you. I’m sure at least some psych students, at least after 2 years in school, did as well. But note that you came fairly close to a perfect score.”
  • Nevertheless, I did not feel that reading Mindware and taking the Coursera course had necessarily rid me of my biases
  • For his part, Nisbett insisted that the results were meaningful. “If you’re doing better in a testing context,” he told me, “you’ll jolly well be doing better in the real world.”
  • The New York–based NeuroLeadership Institute offers organizations and individuals a variety of training sessions, webinars, and conferences that promise, among other things, to use brain science to teach participants to counter bias. This year’s two-day summit will be held in New York next month; for $2,845, you could learn, for example, “why are our brains so bad at thinking about the future, and how do we do it better?”
  • Philip E. Tetlock, a professor at the University of Pennsylvania’s Wharton School, and his wife and research partner, Barbara Mellers, have for years been studying what they call “superforecasters”: people who manage to sidestep cognitive biases and predict future events with far more accuracy than the pundits
  • One of the most important ingredients is what Tetlock calls “the outside view.” The inside view is a product of fundamental attribution error, base-rate neglect, and other biases that are constantly cajoling us into resting our judgments and predictions on good or vivid stories instead of on data and statistics
  • In 2006, seeking to prevent another mistake of that magnitude, the U.S. government created the Intelligence Advanced Research Projects Activity (IARPA), an agency designed to use cutting-edge research and technology to improve intelligence-gathering and analysis. In 2011, IARPA initiated a program, Sirius, to fund the development of “serious” video games that could combat or mitigate what were deemed to be the six most damaging biases: confirmation bias, fundamental attribution error, the bias blind spot (the feeling that one is less biased than the average person), the anchoring effect, the representativeness heuristic, and projection bias (the assumption that everybody else’s thinking is the same as one’s own).
  • most promising are a handful of video games. Their genesis was in the Iraq War
  • Together with collaborators who included staff from Creative Technologies, a company specializing in games and other simulations, and Leidos, a defense, intelligence, and health research company that does a lot of government work, Morewedge devised Missing. Some subjects played the game, which takes about three hours to complete, while others watched a video about cognitive bias. All were tested on bias-mitigation skills before the training, immediately afterward, and then finally after eight to 12 weeks had passed.
  • “The literature on training suggests books and classes are fine entertainment but largely ineffectual. But the game has very large effects. It surprised everyone.”
  • he said he saw the results as supporting the research and insights of Richard Nisbett. “Nisbett’s work was largely written off by the field, the assumption being that training can’t reduce bias,
  • even the positive results reminded me of something Daniel Kahneman had told me. “Pencil-and-paper doesn’t convince me,” he said. “A test can be given even a couple of years later. But the test cues the test-taker. It reminds him what it’s all about.”
  • Morewedge told me that some tentative real-world scenarios along the lines of Missing have shown “promising results,” but that it’s too soon to talk about them.
  • In the future, I will monitor my thoughts and reactions as best I can
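The small-sample effect behind the baseball-phenom question is easy to check numerically. The sketch below is not from the article; the true hitting ability (.270) and the player count are illustrative assumptions chosen only to show the shape of the law of large numbers: extreme averages are common over 20 at bats and vanish over 500.

```python
import random

random.seed(0)

TRUE_AVG = 0.270   # assumed true hitting ability for every player (illustrative)
PLAYERS = 1000     # hypothetical league size (illustrative)

def simulate_averages(at_bats: int) -> list[float]:
    """Simulate batting averages for PLAYERS identical hitters over `at_bats` trials."""
    averages = []
    for _ in range(PLAYERS):
        # each at bat is an independent hit/no-hit trial with probability TRUE_AVG
        hits = sum(random.random() < TRUE_AVG for _ in range(at_bats))
        averages.append(hits / at_bats)
    return averages

early = simulate_averages(20)    # early season: small sample, noisy averages
full = simulate_averages(500)    # full season: large sample, averages near .270

print("players at .450+ after 20 at bats:", sum(a >= 0.45 for a in early))
print("players at .450+ after 500 at bats:", sum(a >= 0.45 for a in full))
```

Even though every simulated player has the same underlying ability, dozens sit at .450 or better after 20 at bats, and none do after 500 — the outliers were sampling noise, not talent, which is exactly the answer Nisbett's survey was probing for.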
Javier E

Elon Musk May Kill Us Even If Donald Trump Doesn't - 0 views

  • In his extraordinary 2021 book, The Constitution of Knowledge: A Defense of Truth, Jonathan Rauch, a scholar at Brookings, writes that modern societies have developed an implicit “epistemic” compact–an agreement about how we determine truth–that rests on a broad public acceptance of science and reason, and a respect and forbearance towards institutions charged with advancing knowledge.
  • Today, Rauch writes, those institutions have given way to digital “platforms” that traffic in “information” rather than knowledge and disseminate that information not according to its accuracy but its popularity. And what is popular is sensation, shock, outrage. The old elite consensus has given way to an algorithm. Donald Trump, an entrepreneur of outrage, capitalized on the new technology to lead what Rauch calls “an epistemic secession.”
  • Rauch foresees the arrival of “Internet 3.0,” in which the big companies accept that content regulation is in their interest and erect suitable “guardrails.” In conversation with me, Rauch said that social media companies now recognize that their algorithms are “toxic,” and spoke hopefully of alternative models like Mastodon, which eschews algorithms and allows users to curate their own feeds
  • ...10 more annotations...
  • In an Atlantic essay, “Why The Past Ten Years of American Life have Been Uniquely Stupid,” and in a follow-up piece, Haidt argued that the Age of Gutenberg–of books and the depth of understanding that comes with them–ended somewhere around 2014 with the rise of “Share,” “Like” and “Retweet” buttons that opened the way for trolls, hucksters and Trumpists
  • The new age of “hyper-virality,” he writes, has given us both January 6 and cancel culture–ugly polarization in both directions. On the subject of stupidification, we should add the fact that high school students now get virtually their entire stock of knowledge about the world from digital platforms.
  • Haidt proposed several reforms, including modifying Facebook’s “Share” function and requiring “user verification” to get rid of trolls. But he doesn’t really believe in his own medicine
  • Haidt said that the era of “shared understanding” is over–forever. When I asked if he could envision changes that would help protect democracy, Haidt quoted Goldfinger: “Do you expect me to talk?” “No, Mr. Bond, I expect you to die!”
  • Social media is a public health hazard–the cognitive equivalent of tobacco and sugary drinks. Adopting a public health model, we could, for example, ban the use of algorithms to reduce virality, or even require social media platforms to adopt a subscription rather than advertising revenue model and thus remove their incentive to amass ever more eyeballs.
  • We could, but we won’t, because unlike other public health hazards, digital platforms are forms of speech. Fox News is probably responsible for more polarization than all social media put together, but the federal government could not compel it–and all other media firms–to change its revenue model.
  • If Mark Zuckerberg or Elon Musk won’t do so out of concern for the public good–a pretty safe bet–they could be compelled to do so only by public or competitive pressure. 
  • Taiwan has proved resilient because its society is resilient; people reject China’s lies. We, here, don’t lack for fact-checkers, but rather for people willing to believe them. The problem is not the technology, but ourselves.
  • you have to wonder if people really are repelled by our poisonous discourse, or by the hailstorm of disinformation, or if they just want to live comfortably inside their own bubble, and not somebody else’s.
  • If Jonathan Haidt is right, it’s not because we’ve created a self-replicating machine that is destined to annihilate reason; it’s because we are the self-replicating machine.
Javier E

How Conservative Media Lost to the MSM and Failed the Rank and File - Conor Friedersdor... - 0 views

  • Before rank-and-file conservatives ask, "What went wrong?", they should ask themselves a question every bit as important: "Why were we the last to realize that things were going wrong for us?"
  • It is easy to close oneself off inside a conservative echo chamber. And right-leaning outlets like Fox News and Rush Limbaugh's show are far more intellectually closed than CNN or public radio.
  • Since the very beginning of the election cycle, conservative media has been failing you. With a few exceptions, they haven't tried to rigorously tell you the truth, or even to bring you intellectually honest opinion. What they've done instead helps to explain why the right failed to triumph in a very winnable election.
  • ...6 more annotations...
  • Conservatives were at a disadvantage because Romney supporters like Jennifer Rubin and Hugh Hewitt saw it as their duty to spin constantly for their favored candidate rather than being frank about his strengths and weaknesses.
  • Conservatives were at an information disadvantage because so many right-leaning outlets wasted time on stories the rest of America dismissed as nonsense. WorldNetDaily brought you birtherism. Forbes brought you Kenyan anti-colonialism. National Review obsessed about an imaginary rejection of American exceptionalism, misrepresenting an Obama quote in the process, and Andy McCarthy was interviewed widely about his theory that Obama, aka the Drone Warrior in Chief, allied himself with our Islamist enemies in a "Grand Jihad" against America. Seriously? 
  • Conservatives were at a disadvantage because their information elites pandered in the most cynical, self-defeating ways, treating would-be candidates like Sarah Palin and Herman Cain as if they were plausible presidents rather than national jokes who'd lose worse than George McGovern.
  • How many hours of Glenn Beck conspiracy theories did Fox News broadcast to its viewers? How many hours of transparently mindless Sean Hannity content is still broadcast daily? Why don't Americans trust Republicans on foreign policy as they once did? In part because conservatism hasn't grappled with the foreign-policy failures of George W. Bush. A conspiracy of silence surrounds the subject. Romney could neither run on the man's record nor repudiate it.
  • Most conservative pundits know better than this nonsense -- not that they speak up against it. They see criticizing their own side as a sign of disloyalty. I see a coalition that has lost all perspective, partly because there's no cost to broadcasting or publishing inane bullshit. In fact, it's often very profitable. A lot of cynical people have gotten rich broadcasting and publishing red meat for movement conservative consumption.
  • On the biggest political story of the year, the conservative media just got its ass handed to it by the mainstream media. And movement conservatives, who believe the MSM is more biased and less rigorous than their alternatives, have no way to explain how their trusted outlets got it wrong, while the New York Times got it right. Hint: The Times hired the most rigorous forecaster it could find.   It ought to be an eye-opening moment.   
Javier E

The Failure of Rational Choice Philosophy - NYTimes.com - 1 views

  • According to Hegel, history is idea-driven.
  • Ideas for him are public, rather than in our heads, and serve to coordinate behavior. They are, in short, pragmatically meaningful words.  To say that history is “idea driven” is to say that, like all cooperation, nation building requires a common basic vocabulary.
  • One prominent component of America’s basic vocabulary is ”individualism.”
  • ...12 more annotations...
  • individualism, the desire to control one’s own life, has many variants. Tocqueville viewed it as selfishness and suspected it, while Emerson and Whitman viewed it as the moment-by-moment expression of one’s unique self and loved it.
  • individualism as the making of choices so as to maximize one’s preferences. This differed from “selfish individualism” in that the preferences were not specified: they could be altruistic as well as selfish. It differed from “expressive individualism” in having general algorithms by which choices were made. These made it rational.
  • it was born in 1951 as “rational choice theory.” Rational choice theory’s mathematical account of individual choice, originally formulated in terms of voting behavior, made it a point-for-point antidote to the collectivist dialectics of Marxism
  • Functionaries at RAND quickly expanded the theory from a tool of social analysis into a set of universal doctrines that we may call “rational choice philosophy.” Governmental seminars and fellowships spread it to universities across the country, aided by the fact that any alternative to it would by definition be collectivist.
  • rational choice philosophy moved smoothly on the backs of their pupils into the “real world” of business and government
  • Today, governments and businesses across the globe simply assume that social reality  is merely a set of individuals freely making rational choices.
  • At home, anti-regulation policies are crafted to appeal to the view that government must in no way interfere with Americans’ freedom of choice.
  • But the real significance of rational choice philosophy lay in ethics. Rational choice theory, being a branch of economics, does not question people’s preferences; it simply studies how they seek to maximize them. Rational choice philosophy seems to maintain this ethical neutrality (see Hans Reichenbach’s 1951 “The Rise of Scientific Philosophy,” an unwitting masterpiece of the genre); but it does not.
  • Whatever my preferences are, I have a better chance of realizing them if I possess wealth and power. Rational choice philosophy thus promulgates a clear and compelling moral imperative: increase your wealth and power!
  • Today, institutions which help individuals do that (corporations, lobbyists) are flourishing; the others (public hospitals, schools) are basically left to rot. Business and law schools prosper; philosophy departments are threatened with closure.
  • Hegel, for one, had denied all three of its central claims in his “Encyclopedia of the Philosophical Sciences” over a century before. In that work, as elsewhere in his writings, nature is not neatly causal, but shot through with randomness. Because of this chaos, we cannot know the significance of what we have done until our community tells us; and ethical life correspondingly consists, not in pursuing wealth and power, but in integrating ourselves into the right kinds of community.
  • By 1953, W. V. O. Quine was exposing the flaws in rational choice epistemology. John Rawls, somewhat later, took on its sham ethical neutrality, arguing that rationality in choice includes moral constraints. The neat causality of rational choice ontology, always at odds with quantum physics, was further jumbled by the environmental crisis, exposed by Rachel Carson’s 1962 book “Silent Spring,” which revealed that the causal effects of human actions were much more complex, and so less predictable, than previously thought.
Javier E

The Choice Explosion - The New York Times - 0 views

  • the social psychologist Sheena Iyengar asked 100 American and Japanese college students to take a piece of paper. On one side, she had them write down the decisions in life they would like to make for themselves. On the other, they wrote the decisions they would like to pass on to others.
  • The Americans desired choice in four times more domains than the Japanese.
  • Americans now have more choices over more things than any other culture in human history. We can choose between a broader array of foods, media sources, lifestyles and identities. We have more freedom to live out our own sexual identities and more religious and nonreligious options to express our spiritual natures.
  • ...15 more annotations...
  • But making decisions well is incredibly difficult, even for highly educated professional decision makers. As Chip Heath and Dan Heath point out in their book “Decisive,” 83 percent of corporate mergers and acquisitions do not increase shareholder value, 40 percent of senior hires do not last 18 months in their new position, 44 percent of lawyers would recommend that a young person not follow them into the law.
  • It’s becoming incredibly important to learn to decide well, to develop the techniques of self-distancing to counteract the flaws in our own mental machinery. The Heath book is a very good compilation of those techniques.
  • assume positive intent. When in the midst of some conflict, start with the belief that others are well intentioned. It makes it easier to absorb information from people you’d rather not listen to.
  • Suzy Welch’s 10-10-10 rule. When you’re about to make a decision, ask yourself how you will feel about it 10 minutes from now, 10 months from now and 10 years from now. People are overly biased by the immediate pain of some choice, but they can put the short-term pain in long-term perspective by asking these questions.
  • An "explosion" that may also be a "dissolution" or "disintegration," in my view. Unlimited choices. Conduct without boundaries. All of which may be viewed as either "great" or "terrible." The poor suffer when they have no means to pursue choices, which is terrible. The rich seem only to want more and more, wealth without boundaries, which is great for those so able to do. Yes, we need a new decision-making tool, but perhaps one that is also very old: simplify, simplify, simplify by setting moral boundaries that apply to all and which define concisely what our life together ought to be.
  • our tendency to narrow-frame, to see every decision as a binary “whether or not” alternative. Whenever you find yourself asking “whether or not,” it’s best to step back and ask, “How can I widen my options?”
  • deliberate mistakes. A survey of new brides found that 20 percent were not initially attracted to the man they ended up marrying. Sometimes it’s useful to make a deliberate “mistake” — agreeing to dinner with a guy who is not your normal type. Sometimes you don’t really know what you want and the filters you apply are hurting you.
  • It makes you think that we should have explicit decision-making curriculums in all schools. Maybe there should be a common course publicizing the work of Daniel Kahneman, Cass Sunstein, Dan Ariely and others who study the way we mess up and the techniques we can adopt to prevent error.
  • The explosion of choice places extra burdens on the individual. Poorer Americans have fewer resources to master decision-making techniques, less social support to guide their decision-making and less of a safety net to catch them when they err.
  • the stress of scarcity itself can distort decision-making. Those who experienced stress as children often perceive threat more acutely and live more defensively.
  • The explosion of choice means we all need more help understanding the anatomy of decision-making.
  • living in an area of concentrated poverty can close down your perceived options, and comfortably “relieve you of the burden of choosing life.” It’s hard to maintain a feeling of agency when you see no chance of opportunity.
  • In this way the choice explosion has contributed to widening inequality.
  • The relentless all-hour reruns of "Law and Order" in 100 channel cable markets provide direct rebuff to the touted but hollow promise/premise of wider "choice." The small group of personalities debating a pre-framed trivial point of view, over and over, nightly/daily (in video clips), without data, global comparison, historic reference, regional content, or a deep commitment to truth or knowledge of facts has resulted in many choosing narrower limits: streaming music, coffee shops, Facebook--now a "choice" of 1.65 billion users.
  • It’s important to offer opportunity and incentives. But we also need lessons in self-awareness — on exactly how our decision-making tool is fundamentally flawed, and on mental frameworks we can adopt to avoid messing up even more than we do.
Javier E

Are We Ready for a 'Morality Pill'? - NYTimes.com - 0 views

  • It seems plausible that humans, like rats, are spread along a continuum of readiness to help others. There has been considerable research on abnormal people, like psychopaths, but we need to know more about relatively stable differences (perhaps rooted in our genes) in the great majority of people as well.
  • Undoubtedly, situational factors can make a huge difference, and perhaps moral beliefs do as well, but if humans are just different in their predispositions to act morally, we also need to know more about these differences. Only then will we gain a proper understanding of our moral behavior
  • If continuing brain research does in fact show biochemical differences between the brains of those who help others and the brains of those who do not, could this lead to a “morality pill” — a drug that makes us more likely to help?
  • ...3 more annotations...
  • many argued that we could never be justified in depriving someone of his free will, no matter how gruesome the violence that would thereby be prevented. No doubt any proposal to develop a morality pill would encounter the same objection.
  • If so, would people choose to take it? Could criminals be given the option, as an alternative to prison, of a drug-releasing implant that would make them less likely to harm others?
  • But if our brain’s chemistry does affect our moral behavior, the question of whether that balance is set in a natural way or by medical intervention will make no difference in how freely we act. If there are already biochemical differences between us that can be used to predict how ethically we will act, then either such differences are compatible with free will, or they are evidence that at least as far as some of our ethical actions are concerned, none of us have ever had free will anyway.
Javier E

How 'ObamaCare' Went from Smear to Cheer - Politics - The Atlantic Wire - 3 views

  • Friday's effort strikes us as a pretty smart, if overdue, campaign strategy to try to make a mark on a term that has undoubtedly entered the lexicon, whether the campaign likes it or not.
  • why shouldn't they use "Obamacare" to describe the bill? There's nothing immediately or obviously pejorative in the word. In fact, given the bill's very long actual name, we're kind of appreciative for an easily recognized alternative. The problem for Democrats is that conservatives really effectively claimed the term and attached it in the public's mind to negative sentiment. As Kiran Moodley wrote in a piece for The Atlantic last year on the term, conservatives, through repetition, were able to link the idea to government intrusion and the larger Obama agenda. "Obamacare" became more than an innocent shorthand: It became a rallying cry.