
TOK Friends: Group items tagged peer-pressure


Javier E

Yelp and the Wisdom of 'The Lonely Crowd' : The New Yorker - 1 views

  • David Riesman spent the first half of his career writing one of the most important books of the twentieth century. He spent the second half correcting its pervasive misprision. “The Lonely Crowd,” an analysis of the varieties of social character that examined the new American middle class
  • the “profound misinterpretation” of the book as a simplistic critique of epidemic American postwar conformity via its description of the contours of the “other-directed character,” whose identity and behavior are shaped by its relationships.
  • he never meant to suggest that Americans now were any more conformist than they ever had been, or that there’s even such a thing as social structure without conformist consensus.
  • In this past weekend’s Styles section of the New York Times, Siegel uses “The Lonely Crowd” to analyze the putative “Yelpification” of contemporary life: according to Siegel, Riesman’s view was that “people went from being ‘inner-directed’ to ‘outer-directed,’ from heeding their own instincts and judgment to depending on the judgments and opinions of tastemakers and trendsetters.” The “conformist power of the crowd” and its delighted ability to write online reviews led Siegel down a sad path to a lackluster expensive dinner.
  • What Riesman actually suggested was that we think of social organization in terms of a series of “ideal types” along a spectrum of increasingly loose authority
  • On one end of the spectrum is a “tradition-directed” community, where we all understand that what we’re supposed to do is what we’re supposed to do because it’s just the thing that one does; authority is unequivocal, and there’s neither the room nor the desire for autonomous action
  • In the middle of the spectrum, as one moves toward a freer distribution of, and response to, authority, is “inner-direction.” The inner-directed character is concerned not with “what one does” but with “what people like us do.” Which is to say that she looks to her own internalizations of past authorities to get a sense for how to conduct her affairs.
  • Contemporary society, Riesman thought, was best understood as chiefly “other-directed,” where the inculcated authority of the vertical (one’s lineage) gives way to the muddled authority of the horizontal (one’s peers).
  • The inner-directed person orients herself by an internal “gyroscope,” while the other-directed person orients herself by “radar.”
  • It’s not that the inner-directed person consults some deep, subjective, romantically sui generis oracle. It’s that the inner-directed person consults the internalized voices of a mostly dead lineage, while her other-directed counterpart heeds the external voices of her living contemporaries.
  • “the gyroscopic mechanism allows the inner-directed person to appear far more independent than he really is: he is no less a conformist to others than the other-directed person, but the voices to which he listens are more distant, of an older generation, their cues internalized in his childhood.” The inner-directed person is, simply, “somewhat less concerned than the other-directed person with continuously obtaining from contemporaries (or their stand-ins: the mass media) a flow of guidance, expectation, and approbation.”
  • Riesman drew no moral from the transition from a community of primarily inner-directed people to a community of the other-directed. Instead, he saw that each ideal type had different advantages and faced different problems
  • As Riesman understood it, the primary disciplining emotion under tradition direction is shame, the threat of ostracism and exile that enforces traditional action. Inner-directed people experience not shame but guilt, or the fear that one’s behavior won’t be commensurate with the imago within. And, finally, other-directed folks experience not guilt but a “contagious, highly diffuse” anxiety—the possibility that, now that authority itself is diffuse and ambiguous, we might be doing the wrong thing all the time.
  • Siegel is right to make the inference, if wayward in his conclusions. It makes sense to associate the anxiety of how to relate to livingly diffuse authorities with the Internet, which presents the greatest signal-to-noise-ratio problem in human history.
  • The problem with Yelp is not the role it plays, for Siegel, in the proliferation of monoculture; most people of my generation have learned to ignore Yelp entirely. It’s the fact that, after about a year of usefulness, Yelp very quickly became a terrible source of information.
  • There are several reasons for this. The first is the nature of an algorithmic response to the world. As Jaron Lanier points out in “Who Owns the Future?,” the hubris behind each new algorithm is the idea that its predictive and evaluatory structure is game-proof; but the minute any given algorithm gains real currency, all the smart and devious people devote themselves to gaming it. On Yelp, the obvious case would be garnering positive reviews by any means necessary.
  • A second problem with Yelp’s algorithmic ranking is in the very idea of using online reviews; as anybody with a book on Amazon knows, they tend to draw more contributions from people who feel very strongly about something, positively or negatively. This undermines the statistical relevance of their recommendations.
  • the biggest problem with Yelp is not that it’s a popularity contest. It’s not even that it’s an exploitable popularity contest.
  • it’s the fact that Yelp makes money by selling ads and prime placements to the very businesses it lists under ostensibly neutral third-party review
  • But Yelp’s valuations are always possibly in bad faith, even if its authority is dressed up as the distilled algorithmic wisdom of a crowd. For Riesman, that’s the worst of all possible worlds: a manipulated consumer certainty that only shores up the authority of an unchosen, hidden source. In that world, cold monkfish is the least of our problems.
Javier E

Achievement gaps: Revenge of the tiger mother | The Economist - 2 views

  • WHEN measured in terms of academic achievement, Asian Americans are a successful bunch. Forty-nine percent have a bachelor's degree or higher. This compares favourably against white Americans (30%), African-Americans (19%) and Latinos (13%).
  • Amy Chua, a self-declared "tiger mother" who became famous for promoting the benefits of harsh parenting, would put this down to culture. She has argued that Chinese-American children statistically out-perform their peers because they are pushed harder at home.
  • she ascribes the success of different cultures in America to a "triple package" comprising a superiority complex, insecurity and good impulse control. In other words, certain groups tell themselves they are better than other groups, but learn that they have to work hard to succeed, and must resist temptation and distraction in proving themselves.
  • sociologists at City University of New York and the University of Michigan, wanted to try to find out why it exists. In a new paper in the journal PNAS, they looked at whether it could be explained by socio-demographic factors (such as family income and parental education), cognitive ability (were these children simply more intelligent?), or work ethic. 
  • socio-demographic factors could not explain the achievement gap between Asians and whites. This is because recently arrived Asian immigrants with little formal education and low incomes have children that do better in school than their white peers.
  • Being brainier isn't the answer either. When the pair looked at cognitive ability as measured by standardised tests, Asian-Americans were not different from their white peers
  •  Instead Dr Hsin and Dr Xie find that the achievement gap can be explained through harder work—as measured by teacher assessments of student work habits and motivation.
  • What might explain harder work? The authors point to the fact Asian-Americans are likely to be immigrants or children of immigrants who, as a group, tend to be more optimistic. These are people who have made a big move in search of better opportunities. Immigration is a "manifestation of that optimism through effort, that you can have a better life."
  • Added to this mix is a general cultural belief among Asian-Americans that achievement comes with effort. We know that children who believe ability is innate are more inclined to give up if something doesn't come naturally. An understanding that success requires hard work—not merely an aptitude—is therefore useful.
  • “Tiger” parenting clearly has its place, but it is not everything, according to this study. Dr Hsin says that Asian-Americans also have some unique social and ethnic capital, such as good access to tutors and social networks that offer information about schools and college-admission routes. They also benefit from positive stereotypes which lead to wider expectations of success.
  • Should Ms Chua’s approach to child-rearing replace the American standard, which seems to emphasise self-esteem over test scores? Not necessarily. The report’s researchers point out that Asian-American children also suffer from poorer self-images and more conflicted relationships with their parents. Dr Hsin wonders if this may be the result of pressure to meet narrowly defined and high standards for success. Children who fail to meet these expectations end up feeling like failures, while those who succeed fail to feel satisfied because they are simply achieving what is expected.
Javier E

The Narrative Frays for Theranos and Elizabeth Holmes - The New York Times - 1 views

  • Few people, let alone those just 31 years old, have amassed the accolades and riches bestowed on Elizabeth Holmes, founder and chief executive of the blood-testing start-up Theranos.
  • This year President Obama named her a United States ambassador for global entrepreneurship. She gave the commencement address at Pepperdine University. She was the youngest person ever to be awarded the Horatio Alger Award in recognition of “remarkable achievements accomplished through honesty, hard work, self-reliance and perseverance over adversity.” She is on the Board of Fellows of Harvard Medical School.
  • Time named her one of the 100 Most Influential People in the World this year. She was the subject of lengthy profiles in The New Yorker and Fortune. Over the last week, she appeared on the cover of T: The New York Times Style Magazine, and Glamour anointed her one of its eight Women of the Year. She has been on “Charlie Rose,” as well as on stage at the Clinton Global Initiative, the World Economic Forum at Davos and the Aspen Ideas Festival, among numerous other conferences.
  • Theranos, which she started after dropping out of Stanford at age 19, has raised more than $400 million in venture capital and has been valued at $9 billion, which makes Ms. Holmes’s 50 percent stake worth $4.5 billion. Forbes put her on the cover of its Forbes 400 issue, ranking her No. 121 on the list of wealthiest Americans.
  • Thanks to an investigative article in The Wall Street Journal this month by John Carreyrou, one of the company’s central claims, and the one most exciting to many investors and doctors, is being called into question. Theranos has acknowledged it was only running a limited number of tests on a microsample of blood using its finger-prick technology. Since then, it said it had stopped using its proprietary methods on all but one relatively simple test for herpes.
  • “The constant was that nobody had any idea how this works or even if it works,” Mr. Loria told me this week. “People in medicine couldn’t understand why the media and technology worlds were so in thrall to her.”
  • that so many eminent authorities — from Henry Kissinger, who had served on the company’s board; to prominent investors like the Oracle founder Larry Ellison; to the Cleveland Clinic — appear to have embraced Theranos with minimal scrutiny is a testament to the ageless power of a great story.
  • Ms. Holmes seems to have perfectly executed the current Silicon Valley playbook: Drop out of a prestigious college to pursue an entrepreneurial vision; adopt an iconic uniform; embrace an extreme diet; and champion a humanitarian mission, preferably one that can be summed up in one catchy phrase.
  • She stays relentlessly on message, as a review of her numerous conference and TV appearances makes clear, while at the same time saying little of scientific substance.
  • The natural human tendency to fit complex facts into a simple, compelling narrative has grown stronger in the digital age of 24/7 news and social media,
  • “We’re deluged with information even as pressure has grown to make snap decisions,”
  • “People see a TED talk. They hear this amazing story of a 30-something-year-old woman with a wonder procedure. They see the Cleveland Clinic is on board. A switch goes off and they make an instant decision that everything is fine. You see this over and over: Really smart and wealthy people start to believe completely implausible things with 100 percent certainty.”
  • Ms. Holmes’s story also fits into a broader narrative underway in medicine, in which new health care entrepreneurs are upending ossified hospital practices with the goal of delivering more effective and patient-oriented care.
  • as a medical technology company, Theranos has bumped up against something else: the scientific method, which puts a premium on verification over narrative.
  • “You have to subject yourself to peer review. You can’t just go in a stealthy mode and then announce one day that you’ve got technology that’s going to disrupt the world.”
  • Professor Yeo said that he and his colleagues wanted to see data and testing in independent labs. “We have a small army of people ready and willing to test Theranos’s products if they’d ask us,” he said. “And that can be done without revealing any trade secrets.”
  • “Every other company in this field has gone through peer review,” said Mr. Cherny of Evercore. “Why hold back so much of the platform if your goal is the greater good of humanity?”
Javier E

Opinion | How to Be More Resilient - The New York Times - 1 views

  • As a psychiatrist, I’ve long wondered why some people get ill in the face of stress and adversity — either mentally or physically — while others rarely succumb.
  • not everyone gets PTSD after exposure to extreme trauma, while some people get disabling depression with minimal or no stress
  • What makes people resilient, and is it something they are born with or can it be acquired later in life?
  • New research suggests that one possible answer can be found in the brain’s so-called central executive network, which helps regulate emotions, thinking and behavior
  • used M.R.I. to study the brains of a racially diverse group of 218 people, ages 12 to 14, living in violent neighborhoods in Chicago
  • the youths who had higher levels of functional connectivity in the central executive network had better cardiac and metabolic health than their peers with lower levels of connectivity
  • when neighborhood homicide rates went up, the young people’s cardiometabolic risk — as measured by obesity, blood pressure and insulin levels, among other variables — also increased, but only in youths who showed lower activity in this brain network
  • “Active resilience happens when people who are vulnerable find resources to cope with stress and bounce back, and do so in a way that leaves them stronger, ready to handle additional stress, in more adaptive ways.”
  • the more medically hardy young people were no less anxious or depressed than their less fortunate peers, which suggests that while being more resilient makes you less vulnerable to adversity, it doesn’t guarantee happiness — or even an awareness of being resilient.
  • there is good reason to believe the link may be causal because other studies have found that we can change the activity in the self-control network, and increase healthy behaviors, with simple behavioral interventions
  • For example, mindfulness training, which involves attention control, emotion regulation and increased self-awareness, can increase connectivity within this network and help people to quit smoking.
  • In one study, two weeks of mindfulness training produced a 60 percent reduction in smoking, compared with no reduction in a control group that focused on relaxation. An M.R.I. following mindfulness training showed increased activity in the anterior cingulate cortex and prefrontal cortex, key brain areas in the executive self-control network
  • Clearly self-control is one critical component of resilience that can be easily fostered. But there are others.
  • One plausible explanation is that greater activity in this network increases self-control, which most likely reduces some unhealthy behaviors people often use to cope with stress, like eating junk food or smoking
  • she and colleagues studied the brains of depressed patients who died. They found that the most disrupted genes were those for growth factors, proteins that act like a kind of brain fertilizer.
  • “We came to realize that depressed people have lost their power to remodel their brains. And that is in fact devastating because brain remodeling is something we need to do all the time — we are constantly rewiring our brains based on past experience and the expectation of how we need to use them in the future,
  • one growth factor that is depleted in depressed brains, called fibroblast growth factor 2, also plays a role in resilience. When they gave it to stressed animals, they bounced back faster and acted less depressed. And when they gave it just once after birth to animals that had been bred for high levels of anxiety and inhibition, they were hardier for the rest of their lives.
  • The good news is that we have some control over our own brain BDNF levels: Getting more physical exercise and social support, for example, has been shown to increase BDNF.
  • Perhaps someday we might be able to protect young people exposed to violence and adversity by supplementing them with neuroprotective growth factors. We know enough now to help them by fortifying their brains through exercise, mindfulness training and support systems
  • Some people have won the genetic sweepstakes and are naturally tough. But there is plenty the rest of us can do to be more resilient and healthier.
Javier E

Opinion | I Came to College Eager to Debate. I Found Self-Censorship Instead. - The New... - 0 views

  • Hushed voices and anxious looks dictate so many conversations on campus here at the University of Virginia, where I’m finishing up my senior year.
  • I was shaken, but also determined to not silence myself. Still, the disdain of my fellow students stuck with me. I was a welcome member of the group — and then I wasn’t.
  • Instead, my college experience has been defined by strict ideological conformity. Students of all political persuasions hold back — in class discussions, in friendly conversations, on social media — from saying what we really think.
  • Even as a liberal who has attended abortion rights demonstrations and written about standing up to racism, I sometimes feel afraid to fully speak my mind.
  • In the classroom, backlash for unpopular opinions is so commonplace that many students have stopped voicing them, sometimes fearing lower grades if they don’t censor themselves.
  • According to a 2021 survey administered by College Pulse of over 37,000 students at 159 colleges, 80 percent of students self-censor at least some of the time.
  • Forty-eight percent of undergraduate students described themselves as “somewhat uncomfortable” or “very uncomfortable” with expressing their views on a controversial topic in the classroom.
  • When a class discussion goes poorly for me, I can tell.
  • The room felt tense. I saw people shift in their seats. Someone got angry, and then everyone seemed to get angry. After the professor tried to move the discussion along, I still felt uneasy. I became a little less likely to speak up again and a little less trusting of my own thoughts.
  • This anxiety affects not just conservatives. I spoke with Abby Sacks, a progressive fourth-year student. She said she experienced a “pile-on” during a class discussion about sexism in media
  • Throughout that semester, I saw similar reactions in response to other students’ ideas. I heard fewer classmates speak up. Eventually, our discussions became monotonous echo chambers. Absent rich debate and rigor, we became mired in socially safe ideas.
  • when criticism transforms into a public shaming, it stifles learning.
  • Professors have noticed a shift in their classrooms
  • “First, students are afraid of being called out on social media by their peers,”
  • “Second, the dominant messages students hear from faculty, administrators and staff are progressive ones. So they feel an implicit pressure to conform to those messages in classroom and campus conversations and debates.”
  • I met Stephen Wiecek at our debate club. He’s an outgoing, formidable first-year debater who often stays after meetings to help clean up. He’s also conservative.
  • He told me that he has often “straight-up lied” about his beliefs to avoid conflict. Sometimes it’s at a party, sometimes it’s at an a cappella rehearsal, and sometimes it’s in the classroom. When politics comes up, “I just kind of go into survival mode,” he said. “I tense up a lot more, because I’ve got to think very carefully about how I word things. It’s very anxiety inducing.”
  • I went to college to learn from my professors and peers. I welcomed an environment that champions intellectual diversity and rigorous disagreement
  • “It was just a succession of people, one after each other, each vehemently disagreeing with me,” she told me.
  • Ms. Sacks felt overwhelmed. “Everyone adding on to each other kind of energized the room, like everyone wanted to be part of the group with the correct opinion,” she said. The experience, she said, “made me not want to go to class again.” While Ms. Sacks did continue to attend the class, she participated less frequently. She told me that she felt as if she had become invisible.
  • Other campuses also struggle with this. “Viewpoint diversity is no longer considered a sacred, core value in higher education,”
  • Dr. Abrams said the environment on today’s campuses differs from his undergraduate experience. He recalled late-night debates with fellow students that sometimes left him feeling “hurt” but led to “the ecstasy of having my mind opened up to new ideas.”
  • He worries that self-censorship threatens this environment and argues that college administrations in particular “enforce and create a culture of obedience and fear that has chilled speech.”
  • Universities must do more than make public statements supporting free expression. We need a campus culture that prioritizes ideological diversity and strong policies that protect expression in the classroom.
  • Universities should refuse to cancel controversial speakers or cave to unreasonable student demands. They should encourage professors to reward intellectual diversity and nonconformism in classroom discussions. And most urgently, they should discard restrictive speech codes and bias response teams that pathologize ideological conflict.
  • We cannot experience the full benefits of a university education without having our ideas challenged, yet challenged in ways that allow us to grow.
kushnerha

Is the Drive for Success Making Our Children Sick? - The New York Times - 0 views

  • results of testing he did in cooperation with Irvington High School in Fremont, Calif., a once-working-class city that is increasingly in Silicon Valley’s orbit. He had anonymously surveyed two-thirds of Irvington’s 2,100 students last spring, using two standard measures, the Center for Epidemiologic Studies Depression Scale and the State-Trait Anxiety Inventory. The results were stunning: 54 percent of students showed moderate to severe symptoms of depression. More alarming, 80 percent suffered moderate to severe symptoms of anxiety.
  • “This is so far beyond what you would typically see in an adolescent population,” he told the school’s faculty at a meeting just before the fall semester began. “It’s unprecedented.” Worse, those alarming figures were probably an underestimation; some students had missed the survey while taking Advanced Placement exams.
  • What Dr. Slavin saw at Irvington is a microcosm of a nationwide epidemic of school-related stress. We think of this as a problem only of the urban and suburban elite, but in traveling the country to report on this issue, I have seen that this stress has a powerful effect on children across the socioeconomic spectrum.
  • Expectations surrounding education have spun out of control. On top of a seven-hour school day, our kids march through hours of nightly homework, daily sports practices and band rehearsals, and weekend-consuming assignments and tournaments. Each activity is seen as a step on the ladder to a top college, an enviable job and a successful life. Children living in poverty who aspire to college face the same daunting admissions arms race, as well as the burden of competing for scholarships, with less support than their privileged peers.
  • Yet instead of empowering them to thrive, this drive for success is eroding children’s health and undermining their potential. Modern education is actually making them sick.
  • Nearly one in three teenagers told the American Psychological Association that stress drove them to sadness or depression — and their single biggest source of stress was school. According to the Centers for Disease Control and Prevention, a vast majority of American teenagers get at least two hours less sleep each night than recommended — and research shows the more homework they do, the fewer hours they sleep. At the university level, 94 percent of college counseling directors in a survey from last year said they were seeing rising numbers of students with severe psychological problems.
  • At the other end of the age spectrum, doctors increasingly see children in early elementary school suffering from migraine headaches and ulcers. Many physicians see a clear connection to performance pressure.
  • chosen to start making a change. Teachers are re-examining their homework demands, in some cases reviving the school district’s forgotten homework guideline — no more than 20 minutes per class per night, and none on weekends. In fact, research supports limits on homework. Students have started a task force to promote healthy habits and balanced schedules.
  • A growing body of medical evidence suggests that long-term childhood stress is linked not only with a higher risk of adult depression and anxiety, but with poor physical health outcomes, as well.
  • Paradoxically, the pressure cooker is hurting, not helping, our kids’ prospects for success. Many college students struggle with critical thinking, a fact that hasn’t escaped their professors, only 14 percent of whom believe that their students are prepared for college work, according to a 2015 report.
  • At Irvington, it’s too early to gauge the impact of new reforms, but educators see promising signs. Calls to school counselors to help students having emotional episodes in class have dropped from routine to nearly nonexistent. The A.P. class failure rate dropped by half. Irvington students continue to be accepted at respected colleges.
Javier E

Thieves of experience: On the rise of surveillance capitalism - 1 views

  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience.To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
  • Social skills and relationships seem to suffer as well.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Google’s once-patient investors grew restive, demanding that the founders figure out a way to make money, preferably lots of it.
  • Under pressure, Page and Brin authorized the launch of an auction system for selling advertisements tied to search queries. The system was designed so that the company would get paid by an advertiser only when a user clicked on an ad. This feature gave Google a huge financial incentive to make accurate predictions about how users would respond to ads and other online content. Even tiny increases in click rates would bring big gains in income. And so the company began deploying its stores of behavioral data not for the benefit of users but to aid advertisers — and to juice its own profits. Surveillance capitalism had arrived.
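The economics described above can be sketched in a few lines. This is a hypothetical illustration, not Google's actual auction logic: in a pay-per-click system, the expected revenue from showing an ad once is roughly the advertiser's bid times the predicted click-through rate (CTR), which is why even small improvements in CTR prediction translate directly into income. The ad names, bids, and rates below are invented for the example.

```python
# Hypothetical sketch of pay-per-click ad ranking: expected revenue per
# impression = bid * predicted probability of a click (CTR).

def expected_revenue(bid_dollars, predicted_ctr):
    """Expected income from one impression of a pay-per-click ad."""
    return bid_dollars * predicted_ctr

def rank_ads(ads):
    """Order candidate ads by expected revenue, highest first."""
    return sorted(
        ads,
        key=lambda ad: expected_revenue(ad["bid"], ad["ctr"]),
        reverse=True,
    )

ads = [
    {"name": "A", "bid": 2.00, "ctr": 0.010},  # $0.020 expected per impression
    {"name": "B", "bid": 0.50, "ctr": 0.050},  # $0.025 expected per impression
    {"name": "C", "bid": 1.00, "ctr": 0.012},  # $0.012 expected per impression
]

print([ad["name"] for ad in rank_ads(ads)])  # ['B', 'A', 'C']
```

Note that ad B, with a quarter of A's bid, wins the slot purely on its predicted click rate — which is the "extraction imperative" in miniature: better behavioral predictions are worth real money.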
  • Google’s business now hinged on what Zuboff calls “the extraction imperative.” To improve its predictions, it had to mine as much information as possible from web users. It aggressively expanded its online services to widen the scope of its surveillance.
  • Through Gmail, it secured access to the contents of people’s emails and address books. Through Google Maps, it gained a bead on people’s whereabouts and movements. Through Google Calendar, it learned what people were doing at different moments during the day and whom they were doing it with. Through Google News, it got a readout of people’s interests and political leanings. Through Google Shopping, it opened a window onto people’s wish lists,
  • The company gave all these services away for free to ensure they’d be used by as many people as possible. It knew the money lay in the data.
  • the organization grew insular and secretive. Seeking to keep the true nature of its work from the public, it adopted what its CEO at the time, Eric Schmidt, called a “hiding strategy” — a kind of corporate omerta backed up by stringent nondisclosure agreements.
  • Page and Brin further shielded themselves from outside oversight by establishing a stock structure that guaranteed their power could never be challenged, neither by investors nor by directors.
  • What’s most remarkable about the birth of surveillance capitalism is the speed and audacity with which Google overturned social conventions and norms about data and privacy. Without permission, without compensation, and with little in the way of resistance, the company seized and declared ownership over everyone’s information
  • The companies that followed Google presumed that they too had an unfettered right to collect, parse, and sell personal data in pretty much any way they pleased. In the smart homes being built today, it’s understood that any and all data will be beamed up to corporate clouds.
  • Google conducted its great data heist under the cover of novelty. The web was an exciting frontier — something new in the world — and few people understood or cared about what they were revealing as they searched and surfed. In those innocent days, data was there for the taking, and Google took it
  • Google also benefited from decisions made by lawmakers, regulators, and judges — decisions that granted internet companies free use of a vast taxpayer-funded communication infrastructure, relieved them of legal and ethical responsibility for the information and messages they distributed, and gave them carte blanche to collect and exploit user data.
  • Consider the terms-of-service agreements that govern the division of rights and the delegation of ownership online. Non-negotiable, subject to emendation and extension at the company’s whim, and requiring only a casual click to bind the user, TOS agreements are parodies of contracts, yet they have been granted legal legitimacy by the courts.
  • Law professors, writes Zuboff, “call these ‘contracts of adhesion’ because they impose take-it-or-leave-it conditions on users that stick to them whether they like it or not.” Fundamentally undemocratic, the ubiquitous agreements helped Google and other firms commandeer personal data as if by fiat.
  • In the choices we make as consumers and private citizens, we have always traded some of our autonomy to gain other rewards. Many people, it seems clear, experience surveillance capitalism less as a prison, where their agency is restricted in a noxious way, than as an all-inclusive resort, where their agency is restricted in a pleasing way.
  • Zuboff makes a convincing case that this is a short-sighted and dangerous view — that the bargain we’ve struck with the internet giants is a Faustian one
  • but her case would have been stronger still had she more fully addressed the benefits side of the ledger.
  • there’s a piece missing. While Zuboff’s assessment of the costs that people incur under surveillance capitalism is exhaustive, she largely ignores the benefits people receive in return — convenience, customization, savings, entertainment, social connection, and so on
  • What the industries of the future will seek to manufacture is the self.
  • Behavior modification is the thread that ties today’s search engines, social networks, and smartphone trackers to tomorrow’s facial-recognition systems, emotion-detection sensors, and artificial-intelligence bots.
  • All of Facebook’s information wrangling and algorithmic fine-tuning, she writes, “is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later.”
  • “The goal of everything we do is to change people’s actual behavior at scale,” a top Silicon Valley data scientist told her in an interview. “We can test how actionable our cues are for them and how profitable certain behaviors are for us.”
  • This goal, she suggests, is not limited to Facebook. It is coming to guide much of the economy, as financial and social power shifts to the surveillance capitalists
  • Combining rich information on individuals’ behavioral triggers with the ability to deliver precisely tailored and timed messages turns out to be a recipe for behavior modification on an unprecedented scale.
  • it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its “social graph” to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react.
  • spying on the populace is not the end game. The real prize lies in figuring out ways to use the data to shape how people think and act. “The best way to predict the future is to invent it,” the computer scientist Alan Kay once observed. And the best way to predict behavior is to script it.
  • competition for personal data intensified. It was no longer enough to monitor people online; making better predictions required that surveillance be extended into homes, stores, schools, workplaces, and the public squares of cities and towns. Much of the recent innovation in the tech industry has entailed the creation of products and services designed to vacuum up data from every corner of our lives
  • “The typical complaint is that privacy is eroded, but that is misleading,” Zuboff writes. “In the larger societal pattern, privacy is not eroded but redistributed . . . . Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism.” The transfer of decision rights is also a transfer of autonomy and agency, from the citizen to the corporation.
  • What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not
  • Other possible ways of organizing online markets, such as through paid subscriptions for apps and services, never even got a chance to be tested.
  • Online surveillance came to be viewed as normal and even necessary by politicians, government bureaucrats, and the general public
  • Google and other Silicon Valley companies benefited directly from the government’s new stress on digital surveillance. They earned millions through contracts to share their data collection and analysis techniques with the National Security Agency.
  • As much as the dot-com crash, the horrors of 9/11 set the stage for the rise of surveillance capitalism. Zuboff notes that, in 2000, members of the Federal Trade Commission, frustrated by internet companies’ lack of progress in adopting privacy protections, began formulating legislation to secure people’s control over their online information and severely restrict the companies’ ability to collect and store it. It seemed obvious to the regulators that ownership of personal data should by default lie in the hands of private citizens, not corporations.
  • The 9/11 attacks changed the calculus. The centralized collection and analysis of online data, on a vast scale, came to be seen as essential to national security. “The privacy provisions debated just months earlier vanished from the conversation more or less overnight,”
Emily Freilich

The Man Who Would Teach Machines to Think - James Somers - The Atlantic - 1 views

  • Douglas Hofstadter, the Pulitzer Prize–winning author of Gödel, Escher, Bach, thinks we've lost sight of what artificial intelligence really means. His stubborn quest to replicate the human mind.
  • “If somebody meant by artificial intelligence the attempt to understand the mind, or to create something human-like, they might say—maybe they wouldn’t go this far—but they might say this is some of the only good work that’s ever been done
  • Their operating premise is simple: the mind is a very unusual piece of software, and the best way to understand how a piece of software works is to write it yourself.
  • ...43 more annotations...
  • “It depends on what you mean by artificial intelligence.”
  • Computers are flexible enough to model the strange evolved convolutions of our thought, and yet responsive only to precise instructions. So if the endeavor succeeds, it will be a double victory: we will finally come to know the exact mechanics of our selves—and we’ll have made intelligent machines.
  • Ever since he was about 14, when he found out that his youngest sister, Molly, couldn’t understand language, because she “had something deeply wrong with her brain” (her neurological condition probably dated from birth, and was never diagnosed), he had been quietly obsessed by the relation of mind to matter.
  • How could consciousness be physical? How could a few pounds of gray gelatin give rise to our very thoughts and selves?
  • In 1931, the Austrian-born logician Kurt Gödel had famously shown how a mathematical system could make statements not just about numbers but about the system itself.
  • Consciousness, Hofstadter wanted to say, emerged via just the same kind of “level-crossing feedback loop.”
  • But then AI changed, and Hofstadter didn’t change with it, and for that he all but disappeared.
  • By the early 1980s, the pressure was great enough that AI, which had begun as an endeavor to answer yes to Alan Turing’s famous question, “Can machines think?,” started to mature—or mutate, depending on your point of view—into a subfield of software engineering, driven by applications.
  • Take Deep Blue, the IBM supercomputer that bested the chess grandmaster Garry Kasparov. Deep Blue won by brute force.
  • Hofstadter wanted to ask: Why conquer a task if there’s no insight to be had from the victory? “Okay,” he says, “Deep Blue plays very good chess—so what? Does that tell you something about how we play chess? No. Does it tell you about how Kasparov envisions, understands a chessboard?”
  • AI started working when it ditched humans as a model, because it ditched them. That’s the thrust of the analogy: Airplanes don’t flap their wings; why should computers think?
  • It’s a compelling point. But it loses some bite when you consider what we want: a Google that knows, in the way a human would know, what you really mean when you search for something
  • How do you make a search engine that understands if you don’t know how you understand?
  • “Cognition is recognition,” he likes to say. He describes “seeing as” as the essential cognitive act: you see some lines as “an A,” you see a hunk of wood as “a table,” you see a meeting as “an emperor-has-no-clothes situation” and a friend’s pouting as “sour grapes.”
  • That’s what it means to understand. But how does understanding work?
  • analogy is “the fuel and fire of thinking,” the bread and butter of our daily mental lives.
  • there’s an analogy, a mental leap so stunningly complex that it’s a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its “skeletal essence,” and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates.
  • in Hofstadter’s telling, the story goes like this: when everybody else in AI started building products, he and his team, as his friend, the philosopher Daniel Dennett, wrote, “patiently, systematically, brilliantly,” way out of the light of day, chipped away at the real problem. “Very few people are interested in how human intelligence works,”
  • For more than 30 years, Hofstadter has worked as a professor at Indiana University at Bloomington
  • “Nobody is a very reliable guide concerning activities in their mind that are, by definition, subconscious,” he once wrote. “This is what makes vast collections of errors so important. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for (and against) particular mechanisms.
  • project out of IBM called Candide. The idea behind Candide, a machine-translation system, was to start by admitting that the rules-based approach requires too deep an understanding of how language is produced; how semantics, syntax, and morphology work; and how words commingle in sentences and combine into paragraphs—to say nothing of understanding the ideas for which those words are merely conduits.
  • , Hofstadter directs the Fluid Analogies Research Group, affectionately known as FARG.
  • Parts of a program can be selectively isolated to see how it functions without them; parameters can be changed to see how performance improves or degrades. When the computer surprises you—whether by being especially creative or especially dim-witted—you can see exactly why.
  • When you read Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, which describes in detail this architecture and the logic and mechanics of the programs that use it, you wonder whether maybe Hofstadter got famous for the wrong book.
  • But very few people, even admirers of GEB, know about the book or the programs it describes. And maybe that’s because FARG’s programs are almost ostentatiously impractical. Because they operate in tiny, seemingly childish “microdomains.” Because there is no task they perform better than a human.
  • “The entire effort of artificial intelligence is essentially a fight against computers’ rigidity.”
  • The quick unconscious chaos of a mind can be slowed down on the computer, or rewound, paused, even edited
  • So IBM threw that approach out the window. What the developers did instead was brilliant, but so straightforward,
  • The technique is called “machine learning.” The goal is to make a device that takes an English sentence as input and spits out a French sentence
  • What you do is feed the machine English sentences whose French translations you already know. (Candide, for example, used 2.2 million pairs of sentences, mostly from the bilingual proceedings of Canadian parliamentary debates.)
  • By repeating this process with millions of pairs of sentences, you will gradually calibrate your machine, to the point where you’ll be able to enter a sentence whose translation you don’t know and get a reasonable result.
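The calibration process described in the last few excerpts can be sketched in miniature. This is a deliberately simplified toy, not the statistical alignment model Candide actually used (which estimated alignment probabilities iteratively): here we just count how often each English word co-occurs with each French word across sentence pairs, then treat the most frequent co-occurrence as the likely translation. The three sentence pairs are invented for the example.

```python
# Toy sketch of learning a translation table from paired sentences by
# co-occurrence counting (a crude stand-in for statistical alignment).
from collections import defaultdict

pairs = [
    ("the house", "la maison"),
    ("the car", "la voiture"),
    ("a house", "une maison"),
]

# cooccur[english_word][french_word] = number of sentence pairs in which
# the two words appeared together.
cooccur = defaultdict(lambda: defaultdict(int))
for en, fr in pairs:
    for e in en.split():
        for f in fr.split():
            cooccur[e][f] += 1

def translate_word(e):
    """Guess a translation: the French word most often seen alongside e."""
    candidates = cooccur[e]
    return max(candidates, key=candidates.get)

print(translate_word("house"))  # 'maison'
print(translate_word("the"))    # 'la'
```

Even this crude version conveys the point the article makes: no one has to tell the machine any French grammar, or any French at all; with enough paired data, useful word correspondences simply fall out of the counts.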
  • Google Translate team can be made up of people who don’t speak most of the languages their application translates. “It’s a bang-for-your-buck argument,” Estelle says. “You probably want to hire more engineers instead” of native speakers.
  • But the need to serve 1 billion customers has a way of forcing the company to trade understanding for expediency. You don’t have to push Google Translate very far to see the compromises its developers have made for coverage, and speed, and ease of engineering. Although Google Translate captures, in its way, the products of human intelligence, it isn’t intelligent itself.
  • “Did we sit down when we built Watson and try to model human cognition?” Dave Ferrucci, who led the Watson team at IBM, pauses for emphasis. “Absolutely not. We just tried to create a machine that could win at Jeopardy.”
  • “There’s a limited number of things you can do as an individual, and I think when you dedicate your life to something, you’ve got to ask yourself the question: To what end? And I think at some point I asked myself that question, and what it came out to was, I’m fascinated by how the human mind works, it would be fantastic to understand cognition, I love to read books on it, I love to get a grip on it”—he called Hofstadter’s work inspiring—“but where am I going to go with it? Really what I want to do is build computer systems that do something.
  • For Ferrucci, the definition of intelligence is simple: it’s what a program can do. Deep Blue was intelligent because it could beat Garry Kasparov at chess. Watson was intelligent because it could beat Ken Jennings at Jeopardy.
  • Peter Norvig, one of Google’s directors of research, echoes Ferrucci almost exactly. “I thought he was tackling a really hard problem,” he told me about Hofstadter’s work. “And I guess I wanted to do an easier problem.”
  • Hofstadter hasn’t been to an artificial-intelligence conference in 30 years. “There’s no communication between me and these people,” he says of his AI peers. “None. Zero. I don’t want to talk to colleagues that I find very, very intransigent and hard to convince of anything
  • As our machines get faster and ingest more data, we allow ourselves to be dumber. Instead of wrestling with our hardest problems in earnest, we can just plug in billions of examples of them.
  • Of course, the folly of being above the fray is that you’re also not a part of it
  • Everything from plate tectonics to evolution—all those ideas, someone had to fight for them, because people didn’t agree with those ideas.
  • Academia is not an environment where you just sit in your bath and have ideas and expect everyone to run around getting excited. It’s possible that in 50 years’ time we’ll say, ‘We really should have listened more to Doug Hofstadter.’ But it’s incumbent on every scientist to at least think about what is needed to get people to understand the ideas.”
Javier E

The American Scholar: The Disadvantages of an Elite Education - William Deresiewicz - 1 views

  • the last thing an elite education will teach you is its own inadequacy
  • I’m talking about the whole system in which these skirmishes play out. Not just the Ivy League and its peer institutions, but also the mechanisms that get you there in the first place: the private and affluent public “feeder” schools, the ever-growing parastructure of tutors and test-prep courses and enrichment programs, the whole admissions frenzy and everything that leads up to and away from it. The message, as always, is the medium. Before, after, and around the elite college classroom, a constellation of values is ceaselessly inculcated.
  • The first disadvantage of an elite education, as I learned in my kitchen that day, is that it makes you incapable of talking to people who aren’t like you. Elite schools pride themselves on their diversity, but that diversity is almost entirely a matter of ethnicity and race. With respect to class, these schools are largely—indeed increasingly—homogeneous. Visit any elite campus in our great nation and you can thrill to the heartwarming spectacle of the children of white businesspeople and professionals studying and playing alongside the children of black, Asian, and Latino businesspeople and professionals.
  • ...34 more annotations...
  • My education taught me to believe that people who didn’t go to an Ivy League or equivalent school weren’t worth talking to, regardless of their class. I was given the unmistakable message that such people were beneath me.
  • The existence of multiple forms of intelligence has become a commonplace, but however much elite universities like to sprinkle their incoming classes with a few actors or violinists, they select for and develop one form of intelligence: the analytic.
  • Students at places like Cleveland State, unlike those at places like Yale, don’t have a platoon of advisers and tutors and deans to write out excuses for late work, give them extra help when they need it, pick them up when they fall down.
  • When people say that students at elite schools have a strong sense of entitlement, they mean that those students think they deserve more than other people because their SAT scores are higher.
  • The political implications should be clear. As John Ruskin told an older elite, grabbing what you can get isn’t any less wicked when you grab it with the power of your brains than with the power of your fists.
  • students at places like Yale get an endless string of second chances. Not so at places like Cleveland State.
  • The second disadvantage, implicit in what I’ve been saying, is that an elite education inculcates a false sense of self-worth. Getting to an elite college, being at an elite college, and going on from an elite college—all involve numerical rankings: SAT, GPA, GRE. You learn to think of yourself in terms of those numbers. They come to signify not only your fate, but your identity; not only your identity, but your value.
  • An elite education gives you the chance to be rich—which is, after all, what we’re talking about—but it takes away the chance not to be. Yet the opportunity not to be rich is one of the greatest opportunities with which young Americans have been blessed. We live in a society that is itself so wealthy that it can afford to provide a decent living to whole classes of people who in other countries exist (or in earlier times existed) on the brink of poverty or, at least, of indignity. You can live comfortably in the United States as a schoolteacher, or a community organizer, or a civil rights lawyer, or an artist
  • In short, the way students are treated in college trains them for the social position they will occupy once they get out. At schools like Cleveland State, they’re being trained for positions somewhere in the middle of the class system, in the depths of one bureaucracy or another. They’re being conditioned for lives with few second chances, no extensions, little support, narrow opportunity—lives of subordination, supervision, and control, lives of deadlines, not guidelines. At places like Yale, of course, it’s the reverse.
  • Elite schools nurture excellence, but they also nurture what a former Yale graduate student I know calls “entitled mediocrity.”
  • For the elite, there’s always another extension—a bailout, a pardon, a stint in rehab—always plenty of contacts and special stipends—the country club, the conference, the year-end bonus, the dividend.
  • The liberal arts university is becoming the corporate university, its center of gravity shifting to technical fields where scholarly expertise can be parlayed into lucrative business opportunities.
  • You have to live in an ordinary house instead of an apartment in Manhattan or a mansion in L.A.; you have to drive a Honda instead of a BMW or a Hummer; you have to vacation in Florida instead of Barbados or Paris, but what are such losses when set against the opportunity to do work you believe in, work you’re suited for, work you love, every day of your life? Yet it is precisely that opportunity that an elite education takes away. How can I be a schoolteacher—wouldn’t that be a waste of my expensive education?
  • Isn’t it beneath me? So a whole universe of possibility closes, and you miss your true calling.
  • This is not to say that students from elite colleges never pursue a riskier or less lucrative course after graduation, but even when they do, they tend to give up more quickly than others.
  • At a school like Yale, students who come to class and work hard expect nothing less than an A-. And most of the time, they get it.
  • being an intellectual is not the same as being smart. Being an intellectual means more than doing your homework.
  • The system forgot to teach them, along the way to the prestige admissions and the lucrative jobs, that the most important achievements can’t be measured by a letter or a number or a name. It forgot that the true purpose of education is to make minds, not careers.
  • Being an intellectual means, first of all, being passionate about ideas—and not just for the duration of a semester, for the sake of pleasing the teacher, or for getting a good grade.
  • Only a small minority have seen their education as part of a larger intellectual journey, have approached the work of the mind with a pilgrim soul. These few have tended to feel like freaks, not least because they get so little support from the university itself. Places like Yale, as one of them put it to me, are not conducive to searchers. Places like Yale are simply not set up to help students ask the big questions
  • Professors at top research institutions are valued exclusively for the quality of their scholarly work; time spent on teaching is time lost. If students want a conversion experience, they’re better off at a liberal arts college.
  • When elite universities boast that they teach their students how to think, they mean that they teach them the analytic and rhetorical skills necessary for success in law or medicine or science or business.
  • Although the notion of breadth is implicit in the very idea of a liberal arts education, the admissions process increasingly selects for kids who have already begun to think of themselves in specialized terms—the junior journalist, the budding astronomer, the language prodigy. We are slouching, even at elite schools, toward a glorified form of vocational training.
  • There’s a reason elite schools speak of training leaders, not thinkers—holders of power, not its critics. An independent mind is independent of all allegiances, and elite schools, which get a large percentage of their budget from alumni giving, are strongly invested in fostering institutional loyalty.
  • But if you’re afraid to fail, you’re afraid to take risks, which begins to explain the final and most damning disadvantage of an elite education: that it is profoundly anti-intellectual.
  • Yet there is a dimension of the intellectual life that lies above the passion for ideas, though so thoroughly has our culture been sanitized of it that it is hardly surprising if it was beyond the reach of even my most alert students. Since the idea of the intellectual emerged in the 18th century, it has had, at its core, a commitment to social transformation. Being an intellectual means thinking your way toward a vision of the good society and then trying to realize that vision by speaking truth to power.
  • It takes more than just intellect; it takes imagination and courage.
  • Being an intellectual begins with thinking your way outside of your assumptions and the system that enforces them. But students who get into elite schools are precisely the ones who have best learned to work within the system, so it’s almost impossible for them to see outside it, to see that it’s even there.
  • Paradoxically, the situation may be better at second-tier schools and, in particular, again, at liberal arts colleges than at the most prestigious universities. Some students end up at second-tier schools because they’re exactly like students at Harvard or Yale, only less gifted or driven. But others end up there because they have a more independent spirit. They didn’t get straight A’s because they couldn’t be bothered to give everything in every class. They concentrated on the ones that meant the most to them or on a single strong extracurricular passion or on projects that had nothing to do with school
  • I’ve been struck, during my time at Yale, by how similar everyone looks. You hardly see any hippies or punks or art-school types, and at a college that was known in the ’80s as the Gay Ivy, few out lesbians and no gender queers. The geeks don’t look all that geeky; the fashionable kids go in for understated elegance. Thirty-two flavors, all of them vanilla.
  • The most elite schools have become places of a narrow and suffocating normalcy. Everyone feels pressure to maintain the kind of appearance—and affect—that go with achievement
  • Now that students are in constant electronic contact, they never have trouble finding each other. But it’s not as if their compulsive sociability is enabling them to develop deep friendships.
  • What happens when busyness and sociability leave no room for solitude? The ability to engage in introspection, I put it to my students that day, is the essential precondition for living an intellectual life, and the essential precondition for introspection is solitude
  • the life of the mind is lived one mind at a time: one solitary, skeptical, resistant mind at a time. The best place to cultivate it is not within an educational system whose real purpose is to reproduce the class system.
Laura Gates

Why "Just Say No" Doesn't Work - 1 views

  • Touches on subjects like confirmation bias, peer pressure, and perception
Javier E

How Boys Teach Each Other to Be Boys - Noah Berlatsky - The Atlantic - 2 views

  • Judy Y. Chu reports on her two-year study in which she followed a group of boys from pre-kindergarten through first grade.
  • She concluded that most of what we think of as "boy" behavior isn't natural or authentic to boys, but is something they learn to perform. Boys aren't stoic or aggressive or hierarchical; they aren't bad at forming relationships or unable to express themselves. They pick up all these traditional traits of masculinity by adapting to a culture that expects and demands that they do so.
  • it seems like they become boys through learning from other boys; it's boys teaching themselves to be boys. So where do you see the inauthenticity or unnaturalness there?
  • ...9 more annotations...
  • It's not as though they're arriving in their interactions having come from an isolated place. They're hearing messages from older siblings, from media, or some of the boys' parents
  • They're creating a culture for themselves based on the bits and pieces they've gotten elsewhere.
  • it's more that distinction between compromise and over-compromise, in which they're so focused on setting up a particular image that they believe will get them what they want—acceptance and popularity and success—and realizing that that comes at a cost.
  • they need to have at least one place or one relationship where they can do those things.
  • when you take the whole range of human capabilities and qualities, and you say one half is masculine, and one half is feminine, and only boys can be masculine, and only girls can be feminine, then everybody loses, because you're asking everyone to give up half of themselves
  • we live in a culture and a society where we are perceived in our bodies, and people respond to us accordingly. So boys and girls do grow up in a gendered society
  • infant studies show us that both boys and girls are born with a capacity for and a fundamental desire for relationships—to be close and to be connected. And if anything they find that boy babies need more help regulating themselves. When they are upset they need their primary caregiver to help them regulate and come back to a feeling of contentment.
  • then at these later reports when they get to adolescence where boys are reporting fewer close relationships, lower levels of intimacy within their close relationships, then that kind of suggests that for boys, their socialization and development are associated with a move out of relationships. They start out wanting and thriving in relationships, and then they are moved away from those protective relationships, and there's a cost.
  • the single best protector during adolescence, against psychological risks like low self-esteem and depression, and against social risks like unintended pregnancy, was having access to at least one close, confiding relationship. And that could be with a parent or a mentor or a friend or a sibling or whoever. The kids who had at least one close relationship were protected against all of those risks, and were better off for it.  
Javier E

Technology's Man Problem - NYTimes.com - 0 views

  • computer engineering, the most innovative sector of the economy, remains behind. Many women who want to be engineers encounter a field where they not only are significantly underrepresented but also feel pushed away.
  • Among the women who join the field, 56 percent leave by midcareer, a startling attrition rate that is double that for men, according to research from the Harvard Business School.
  • A culprit, many people in the field say, is a sexist, alpha-male culture that can make women and other people who don’t fit the mold feel unwelcome, demeaned or even endangered.
  • ...12 more annotations...
  • “I’ve been a programmer for 13 years, and I’ve always been one of the only women and queer people in the room. I’ve been harassed, I’ve had people make suggestive comments to me, I’ve had people basically dismiss my expertise. I’ve gotten rape and death threats just for speaking out about this stuff.”
  • “We see these stories, ‘Why aren’t there more women in computer science and engineering?’ and there’s all these complicated answers like, ‘School advisers don’t have them take math and physics,’ and it’s probably true,” said Lauren Weinstein, a man who has spent his four-decade career in tech working mostly with other men, and is currently a consultant for Google.“But I think there’s probably a simpler reason,” he said, “which is these guys are just jerks, and women know it.”
  • once programming gained prestige, women were pushed out. Over the decades, the share of women in computing has continued to decline. In 2012, just 18 percent of computer-science college graduates were women, down from 37 percent in 1985, according to the National Center for Women & Information Technology.
  • Some 1.2 million computing jobs will be available in 2022, yet United States universities are producing only 39 percent of the graduates needed to fill them, the N.C.W.I.T. estimates.
  • Twenty percent of software developers are women, according to the Labor Department, and fewer than 6 percent of engineers are black or Hispanic. Comparatively, 56 percent of people in business and financial-operations jobs are women, as are 36 percent of physicians and surgeons and one-third of lawyers.
  • an engineer at Pinterest has collected data from people at 133 start-ups and found that an average of 12 percent of the engineers are women.
  • “It makes a hostile environment for me,” she said. “But I don’t want to raise my hand and call negative attention toward myself, and become the woman who is the problem — ‘that woman.’ In start-up culture they protect their own tribe, so by putting my hand up, I’m saying I’m an ‘other,’ I shouldn’t be there, so for me that’s an economic threat.”
  • “Many women have come to me and said they basically have had to hide on the Net now,” said Mr. Weinstein, who works on issues of identity and anonymity online. “They use male names, they don’t put their real photos up, because they are immediately targeted and harassed.”
  • “It’s a boys’ club, and you have to try to get into it, and they’re trying as hard as they can to prove you can’t,” said Ephrat Bitton, the director of algorithms at FutureAdvisor, an online investment start-up that she says has a better culture because almost half the engineers are women.
  • Writing code is a high-pressure job with little room for error, as are many jobs. But coding can be stressful in a different way, women interviewed for this article said, because code reviews — peer reviews to spot mistakes in software — can quickly devolve.
  • “Code reviews are brutal — ‘Mine is better than yours, I see flaws in yours’ — and they should be, for the creation of good software,” said Ellen Ullman, a software engineer and author. “I think when you add a drop of women into it, it just exacerbates the problem, because here’s a kind of foreigner.”
  • But some women argue that these kinds of initiatives are unhelpful.“My general issue with the coverage of women in tech is that women in the technology press are talked about in the context of being women, and men are talked about in the context of being in technology,” said a technical woman who would speak only on condition of anonymity because she did not want to be part of an article about women in tech.
Javier E

Today's Exhausted Superkids - The New York Times - 1 views

  • Sleep deprivation is just a part of the craziness, but it’s a perfect shorthand for childhoods bereft of spontaneity, stripped of real play and haunted by the “pressure of perfection,” to quote the headline on a story by Julie Scelfo in The Times this week.
  • In a study in the medical journal Pediatrics this year, about 55 percent of American teenagers from the ages of 14 to 17 reported that they were getting less than seven hours a night, though the National Sleep Foundation counsels 8 to 10.
  • Smartphones and tablets aggravate the problem, keeping kids connected and distracted long after lights out. But in communities where academic expectations run highest, the real culprit is panic: about acing the exam, burnishing the transcript, keeping up with high-achieving peers.
  • ...1 more annotation...
  • “No one is arguing for a generation of mediocre or underachieving kids — but plenty of people have begun arguing for a redefinition of what it means to achieve at all,” wrote Jeffrey Kluger in Time magazine last week. He noted, rightly, that “somewhere between the self-esteem building of going for the gold and the self-esteem crushing of the Ivy-or-die ethos, there has to be a place where kids can breathe.”
Javier E

Raymond Tallis Takes Out the 'Neurotrash' - The Chronicle Review - The Chronicle of Hig... - 0 views

  • Tallis informs 60 people gathered in a Kent lecture hall that his talk will demolish two "pillars of unwisdom." The first, "neuromania," is the notion that to understand people you must peer into the "intracranial darkness" of their skulls with brain-scanning technology. The second, "Darwinitis," is the idea that Charles Darwin's evolutionary theory can explain not just the origin of the human species—a claim Tallis enthusiastically accepts—but also the nature of human behavior and institutions.
  • Aping Mankind argues that neuroscientific approaches to things like love, wisdom, and beauty are flawed because you can't reduce the mind to brain activity alone.
  • Stephen Cave, a Berlin-based philosopher and writer who has called Aping Mankind "an important work," points out that most philosophers and scientists do in fact believe "that mind is just the product of certain brain activity, even if we do not currently know quite how." Tallis "does both the reader and these thinkers an injustice" by declaring that view "obviously" wrong,
  • ...5 more annotations...
  • cultural memes. The Darwinesque concept originates in Dawkins's 1976 book, The Selfish Gene. Memes are analogous to genes, Dennett has said, "replicating units of culture" that spread from mind to mind like a virus. Religion, chess, songs, clothing, tolerance for free speech—all have been described as memes. Tallis considers it absurd to talk of a noun-phrase like "tolerance for free speech" as a discrete entity. But Dennett argues that Tallis's objections are based on "a simplistic idea of what one might mean by a unit." Memes aren't units? Well, in that spirit, says Dennett, organisms aren't units of biology, nor are species—they're too complex, with too much variation. "He's got to allow theory to talk about entities which are not simple building blocks," Dennett says.
  • Geraint Rees, director of University College London's Institute of Cognitive Neuroscience, complains that reading Tallis is "a bit like trying to nail jelly to the wall." He "rubbishes every current theory of the relationship between mind and brain, whether philosophical or neuroscientific," while offering "little or no alternative,"
  • How is it that he perceives the glass of water on the table? How is it that he feels a sense of self over time? How is it that he can remember a patient he saw in 1973, and then cast his mind forward to his impending visit to the zoo? There are serious problems with trying to reduce such things to impulses in the brain, he argues. We can explain "how the light gets in," he says, but not "how the gaze looks out." And isn't it astonishing, he adds, that much neural activity seems to have no link to consciousness? Instead, it's associated with things like controlling automatic movements and regulating blood pressure. Sure, we need the brain for consciousness: "Chop my head off, and my IQ descends." But it's not the whole story. There is more to perceptions, memories, and beliefs than neural impulses can explain. The human sphere encompasses a "community of minds," Tallis has written, "woven out of a trillion cognitive handshakes of shared attention, within which our freedom operates and our narrated lives are led." Those views on perception and memory anchor his attack on "neurobollocks." Because if you can't get the basics right, he says, then it's premature to look to neuroscience for clues to complex things like love.
  • Yes, many unanswered questions persist. But these are early days, and neuroscience remains immature, says Churchland, a professor emerita of philosophy at University of California at San Diego and author of the subfield-spawning 1986 book Neurophilosophy. In the 19th century, she points out, people thought we'd never understand light. "Well, by gosh," she says, "by the time the 20th century rolls around, it turns out that light is electromagnetic radiation. ... So the fact that at a certain point in time something seems like a smooth-walled mystery that we can't get a grip on, doesn't tell us anything about whether some real smart graduate student is going to sort it out in the next 10 years or not."
  • Dennett claims he's got much of it sorted out already. He wrote a landmark book on the topic in 1991, Consciousness Explained. (The title "should have landed him in court, charged with breach of the Trade Descriptions Act," writes Tallis.) Dennett uses the vocabulary of computer science to explain how consciousness emerges from the huge volume of things happening in the brain all at once. We're not aware of everything, he tells me, only a "limited window." He describes that stream of consciousness as "the activities of a virtual machine which is running on the parallel hardware of the brain." "You—the fruits of all your experience, not just your genetic background, but everything you've learned and done and all your memories—what ties those all together? What makes a self?" Dennett asks. "The answer is, and has to be, the self is like a software program that organizes the activities of the brain."
krystalxu

Why Study Philosophy? 'To Challenge Your Own Point of View' - The Atlantic - 1 views

  • Goldstein’s forthcoming book, Plato at the Googleplex: Why Philosophy Won’t Go Away, offers insight into the significant—and often invisible—progress that philosophy has made. I spoke with Goldstein about her take on the science vs. philosophy debates, how we can measure philosophy’s advances, and why an understanding of philosophy is critical to our lives today.
  • One of the things about philosophy is that you don’t have to give up on any other field. Whatever field there is, there’s a corresponding field of philosophy. Philosophy of language, philosophy of politics, philosophy of math. All the things I wanted to know about I could still study within a philosophical framework.
  • There’s a peer pressure that sets in at a certain age. They so much want to be like everybody else. But what I’ve found is that if you instill this joy of thinking, the sheer intellectual fun, it will survive even the adolescent years and come back in fighting form. It’s empowering.
  • ...18 more annotations...
  • One thing that’s changed tremendously is the presence of women and the change in focus because of that. There’s a lot of interest in literature and philosophy, and using literature as a philosophical examination. It makes me so happy! Because I was seen as a hard-core analytic philosopher, and when I first began to write novels people thought, Oh, and we thought she was serious! But that’s changed entirely. People take literature seriously, especially in moral philosophy, as thought experiments. A lot of the most developed and effective thought experiments come from novels. Also, novels contribute to making moral progress, changing people’s emotions.
  • The other thing that’s changed is that there’s more applied philosophy. Let’s apply philosophical theory to real-life problems, like medical ethics, environmental ethics, gender issues. This is a real change from when I was in school and it was only theory.
  • here’s a lot of philosophical progress, it’s just a progress that’s very hard to see. It’s very hard to see because we see with it. We incorporate philosophical progress into our own way of viewing the world.
  • Plato would be constantly surprised by what we know. And not only what we know scientifically, or by our technology, but what we know ethically. We take a lot for granted. It's obvious to us, for example, that individuals' ethical truths are equally important.
  • it’s usually philosophical arguments that first introduce the very outlandish idea that we need to extend rights. And it takes more, it takes a movement, and activism, and emotions, to affect real social change. It starts with an argument, but then it becomes obvious. The tracks of philosophy’s work are erased because it becomes intuitively obvious
  • The arguments against slavery, against cruel and unusual punishment, against unjust wars, against treating children cruelly—these all took arguments.
  • About 30 years ago, the philosopher Peter Singer started to argue about the way animals are treated in our factory farms. Everybody thought he was nuts. But I’ve watched this movement grow; I’ve watched it become emotional. It has to become emotional. You have to draw empathy into it. But here it is, right in our time—a philosopher making the argument, everyone dismissing it, but then people start discussing it. Even criticizing it, or saying it’s not valid, is taking it seriously
  • The question of whether some of these scientific theories are really even scientific. Can we get predictions out of them?
  • We are very inertial creatures. We do not like to change our thinking, especially if it’s inconvenient for us. And certainly the people in power never want to wonder whether they should hold power.
  • I’m really trying to draw the students out, make them think for themselves. The more they challenge me, the more successful I feel as a teacher. It has to be very active
  • Plato used the metaphor that in teaching philosophy, there needs to be a fire in the teacher, and the sheer heat will help the fire grow in the student. It’s something that’s kindled because of the proximity to the heat.
  • how can you make the case that they should study philosophy?
  • It enriches your inner life. You have lots of frameworks to apply to problems, and so many ways to interpret things. It makes life so much more interesting. It's us at our most human. And it helps us increase our humanity. No matter what you do, that's an asset.
  • What do you think are the biggest philosophical issues of our time? The growth in scientific knowledge presents new philosophical issues.
  • The idea of the multiverse. Where are we in the universe? Physics is blowing our minds about this.
  • This is what we have to teach our children. Even things that go against their intuition they need to take seriously. What was intuition two generations ago is no longer intuition; and it's arguments that change it.
  • And with the growth in cognitive science and neuroscience. We’re going into the brain and getting these images of the brain. Are we discovering what we really are? Are we solving the problem of free will? Are we learning that there isn’t any free will? How much do the advances in neuroscience tell us about the deep philosophical issues?
  • With the decline of religion is there a sense of the meaninglessness of life and the easy consumerist answer that’s filling the space religion used to occupy? This is something that philosophers ought to be addressing.
Javier E

ROUGH TYPE | Nicholas Carr's blog - 0 views

  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • ...39 more annotations...
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Social skills and relationships seem to suffer as well.
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we're not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience. To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
Javier E

Teenage behavior can be understood through brain development - The Washington Post - 0 views

  • It turns out that much of what makes teenagers seem so, well, teenage is due not to their hormones but to their rapidly changing brain circuitry. The malleable mind continues to develop during adolescence, consolidating personality, preferences and behaviors.
  • Some of those behaviors, including risk-taking and a tendency toward self-consciousness, may seem connected to peer pressure. But, Blakemore writes, they’re actually signs of brain development.
  • “The adolescent brain isn’t a dysfunctional or a defective adult brain,” she writes; it’s “a lens through which we can begin to see ourselves anew.” Blakemore paints the teenage brain as tempestuous, impressionable, dynamic — and well worth studying.
Javier E

Have Smartphones Destroyed a Generation? - The Atlantic - 0 views

  • She told me she’d spent most of the summer hanging out alone in her room with her phone. That’s just the way her generation is, she said. “We didn’t have a choice to know any life without iPads or iPhones. I think we like our phones more than we like actual people.”
  • I had grown accustomed to line graphs of trends that looked like modest hills and valleys. Then I began studying Athena’s generation.
  • Around 2012, I noticed abrupt shifts in teen behaviors and emotional states. The gentle slopes of the line graphs became steep mountains and sheer cliffs, and many of the distinctive characteristics of the Millennial generation began to disappear. In all my analyses of generational data—some reaching back to the 1930s—I had never seen anything like it.
  • the trends persisted, across several years and a series of national surveys. The changes weren’t just in degree, but in kind.
  • The biggest difference between the Millennials and their predecessors was in how they viewed the world; teens today differ from the Millennials not just in their views but in how they spend their time. The experiences they have every day are radically different from those of the generation that came of age just a few years before them.
  • it was exactly the moment when the proportion of Americans who owned a smartphone surpassed 50 percent.
  • theirs is a generation shaped by the smartphone and by the concomitant rise of social media. I call them iGen
  • Born between 1995 and 2012, members of this generation are growing up with smartphones, have an Instagram account before they start high school, and do not remember a time before the internet.
  • iGen’s oldest members were early adolescents when the iPhone was introduced, in 2007, and high-school students when the iPad entered the scene, in 2010. A 2017 survey of more than 5,000 American teens found that three out of four owned an iPhone.
  • The arrival of the smartphone has radically changed every aspect of teenagers’ lives, from the nature of their social interactions to their mental health. These changes have affected young people in every corner of the nation and in every type of household
  • More comfortable in their bedrooms than in a car or at a party, today’s teens are physically safer than teens have ever been. They’re markedly less likely to get into a car accident and, having less of a taste for alcohol than their predecessors, are less susceptible to drinking’s attendant ills.
  • Psychologically, however, they are more vulnerable than Millennials were: Rates of teen depression and suicide have skyrocketed since 2011. It’s not an exaggeration to describe iGen as being on the brink of the worst mental-health crisis in decades. Much of this deterioration can be traced to their phones.
  • the twin rise of the smartphone and social media has caused an earthquake of a magnitude we’ve not seen in a very long time, if ever. There is compelling evidence that the devices we’ve placed in young people’s hands are having profound effects on their lives—and making them seriously unhappy.
  • But the allure of independence, so powerful to previous generations, holds less sway over today’s teens, who are less likely to leave the house without their parents. The shift is stunning: 12th-graders in 2015 were going out less often than eighth-graders did as recently as 2009.
  • Today’s teens are also less likely to date. The initial stage of courtship, which Gen Xers called “liking” (as in “Ooh, he likes you!”), kids now call “talking”—an ironic choice for a generation that prefers texting to actual conversation. After two teens have “talked” for a while, they might start dating.
  • only about 56 percent of high-school seniors in 2015 went out on dates; for Boomers and Gen Xers, the number was about 85 percent.
  • The decline in dating tracks with a decline in sexual activity. The drop is the sharpest for ninth-graders, among whom the number of sexually active teens has been cut by almost 40 percent since 1991. The average teen now has had sex for the first time by the spring of 11th grade, a full year later than the average Gen Xer
  • The teen birth rate hit an all-time low in 2016, down 67 percent since its modern peak, in 1991.
  • Nearly all Boomer high-school students had their driver’s license by the spring of their senior year; more than one in four teens today still lack one at the end of high school.
  • In conversation after conversation, teens described getting their license as something to be nagged into by their parents—a notion that would have been unthinkable to previous generations.
  • In the late 1970s, 77 percent of high-school seniors worked for pay during the school year; by the mid-2010s, only 55 percent did. The number of eighth-graders who work for pay has been cut in half.
  • Beginning with Millennials and continuing with iGen, adolescence is contracting again—but only because its onset is being delayed. Across a range of behaviors—drinking, dating, spending time unsupervised— 18-year-olds now act more like 15-year-olds used to, and 15-year-olds more like 13-year-olds. Childhood now stretches well into high school.
  • In an information economy that rewards higher education more than early work history, parents may be inclined to encourage their kids to stay home and study rather than to get a part-time job. Teens, in turn, seem to be content with this homebody arrangement—not because they’re so studious, but because their social life is lived on their phone. They don’t need to leave home to spend time with their friends.
  • eighth-, 10th-, and 12th-graders in the 2010s actually spend less time on homework than Gen X teens did in the early 1990s.
  • The time that seniors spend on activities such as student clubs and sports and exercise has changed little in recent years. Combined with the decline in working for pay, this means iGen teens have more leisure time than Gen X teens did, not less.
  • So what are they doing with all that time? They are on their phone, in their room, alone and often distressed.
  • despite spending far more time under the same roof as their parents, today’s teens can hardly be said to be closer to their mothers and fathers than their predecessors were. “I’ve seen my friends with their families—they don’t talk to them,” Athena told me. “They just say ‘Okay, okay, whatever’ while they’re on their phones. They don’t pay attention to their family.” Like her peers, Athena is an expert at tuning out her parents so she can focus on her phone.
  • The number of teens who get together with their friends nearly every day dropped by more than 40 percent from 2000 to 2015; the decline has been especially steep recently.
  • It’s not only a matter of fewer kids partying; fewer kids are spending time simply hanging out
  • The roller rink, the basketball court, the town pool, the local necking spot—they’ve all been replaced by virtual spaces accessed through apps and the web.
  • The results could not be clearer: Teens who spend more time than average on screen activities are more likely to be unhappy, and those who spend more time than average on nonscreen activities are more likely to be happy.
  • There’s not a single exception. All screen activities are linked to less happiness, and all nonscreen activities are linked to more happiness
  • Eighth-graders who spend 10 or more hours a week on social media are 56 percent more likely to say they’re unhappy than those who devote less time to social media
  • If you were going to give advice for a happy adolescence based on this survey, it would be straightforward: Put down the phone, turn off the laptop, and do something—anything—that does not involve a screen
  • Social-networking sites like Facebook promise to connect us to friends. But the portrait of iGen teens emerging from the data is one of a lonely, dislocated generation. Teens who visit social-networking sites every day but see their friends in person less frequently are the most likely to agree with the statements “A lot of times I feel lonely,” “I often feel left out of things,” and “I often wish I had more good friends.” Teens’ feelings of loneliness spiked in 2013 and have remained high since.
  • This doesn’t always mean that, on an individual level, kids who spend more time online are lonelier than kids who spend less time online.
  • Teens who spend more time on social media also spend more time with their friends in person, on average—highly social teens are more social in both venues, and less social teens are less so.
  • The more time teens spend looking at screens, the more likely they are to report symptoms of depression.
  • Eighth-graders who are heavy users of social media increase their risk of depression by 27 percent, while those who play sports, go to religious services, or even do homework more than the average teen cut their risk significantly.
  • Teens who spend three hours a day or more on electronic devices are 35 percent more likely to have a risk factor for suicide, such as making a suicide plan. (That’s much more than the risk related to, say, watching TV.)
  • Since 2007, the homicide rate among teens has declined, but the suicide rate has increased. As teens have started spending less time together, they have become less likely to kill one another, and more likely to kill themselves. In 2011, for the first time in 24 years, the teen suicide rate was higher than the teen homicide rate.
  • For all their power to link kids day and night, social media also exacerbate the age-old teen concern about being left out.
  • Today’s teens may go to fewer parties and spend less time together in person, but when they do congregate, they document their hangouts relentlessly—on Snapchat, Instagram, Facebook. Those not invited to come along are keenly aware of it. Accordingly, the number of teens who feel left out has reached all-time highs across age groups.
  • Forty-eight percent more girls said they often felt left out in 2015 than in 2010, compared with 27 percent more boys. Girls use social media more often, giving them additional opportunities to feel excluded and lonely when they see their friends or classmates getting together without them.
  • Social media levy a psychic tax on the teen doing the posting as well, as she anxiously awaits the affirmation of comments and likes. When Athena posts pictures to Instagram, she told me, “I’m nervous about what people think and are going to say. It sometimes bugs me when I don’t get a certain amount of likes on a picture.”
  • Girls have also borne the brunt of the rise in depressive symptoms among today’s teens. Boys’ depressive symptoms increased by 21 percent from 2012 to 2015, while girls’ increased by 50 percent—more than twice as much
  • The rise in suicide, too, is more pronounced among girls. Although the rate increased for both sexes, three times as many 12-to-14-year-old girls killed themselves in 2015 as in 2007, compared with twice as many boys
  • Social media give middle- and high-school girls a platform on which to carry out the style of aggression they favor, ostracizing and excluding other girls around the clock.
  • What’s at stake isn’t just how kids experience adolescence. The constant presence of smartphones is likely to affect them well into adulthood. Among people who suffer an episode of depression, at least half become depressed again later in life. Adolescence is a key time for developing social skills; as teens spend less time with their friends face-to-face, they have fewer opportunities to practice them
  • the smartphone is cutting into teens’ sleep: Many now sleep less than seven hours most nights. Sleep experts say that teens should get about nine hours of sleep a night; a teen who is getting less than seven hours a night is significantly sleep deprived
  • Fifty-seven percent more teens were sleep deprived in 2015 than in 1991. In just the four years from 2012 to 2015, 22 percent more teens failed to get seven hours of sleep.
  • Two national surveys show that teens who spend three or more hours a day on electronic devices are 28 percent more likely to get less than seven hours of sleep than those who spend fewer than three hours, and teens who visit social-media sites every day are 19 percent more likely to be sleep deprived.
  • Teens who read books and magazines more often than the average are actually slightly less likely to be sleep deprived—either reading lulls them to sleep, or they can put the book down at bedtime.
  • Sleep deprivation is linked to myriad issues, including compromised thinking and reasoning, susceptibility to illness, weight gain, and high blood pressure. It also affects mood: People who don’t sleep enough are prone to depression and anxiety.
  • correlations between depression and smartphone use are strong enough to suggest that more parents should be telling their kids to put down their phone.
  • I asked my undergraduate students at San Diego State University what they do with their phone while they sleep. Their answers were a profile in obsession. Nearly all slept with their phone, putting it under their pillow, on the mattress, or at the very least within arm’s reach of the bed. They checked social media right before they went to sleep, and reached for their phone as soon as they woke up in the morning
  • Significant effects on both mental health and sleep time appear after two or more hours a day on electronic devices. The average teen spends about two and a half hours a day on electronic devices. Some mild boundary-setting could keep kids from falling into harmful habits.
Javier E

Who Decides What's Racist? - Persuasion - 1 views

  • The implication of Hannah-Jones’s tweet and candidate Biden’s quip seems to be that you can have African ancestry, dark skin, textured hair, and perhaps even some “culturally black” traits regarding tastes in food, music, and ways of moving through the world. But unless you hold the “correct” political beliefs and values, you are not authentically black.
  • In a now-deleted tweet from May 22, 2020, Nikole Hannah-Jones, a Pulitzer Prize-winning reporter for The New York Times, opined, “There is a difference between being politically black and being racially black.”
  • Shelly Eversley’s The Real Negro suggests that in the latter half of the 20th century, the criteria of what constitutes “authentic” black experience moved from perceptible outward signs, like the fact of being restricted to segregated public spaces and speaking in a “black” dialect, to psychological, interior signs. In this new understanding, Eversley writes, “the ‘truth’ about race is felt, not performed, not seen.”
  • This insight goes a long way to explaining the current fetishization of experience, especially if it is (redundantly) “lived.” Black people from all walks of life find themselves deferred to by non-blacks
  • black people certainly don’t all “feel” or “experience” the same things. Nor do they all "experience" the same event in an identical way. Finally, even when their experiences are similar, they don’t all think about or interpret their experiences in the same way.
  • we must begin to attend in a serious way to heterodox black voices
  • This need is especially urgent given the ideological homogeneity of the “antiracist” outlook and efforts of elite institutions, including media, corporations, and an overwhelmingly progressive academia. For the arbiters of what it means to be black that dominate these institutions, there is a fairly narrowly prescribed “authentic” black narrative, black perspective, and black position on every issue that matters.
  • When we hear the demand to “listen to black voices,” what is usually meant is “listen to the right black voices.”
  • Many non-black people have heard a certain construction of “the black voice” so often that they are perplexed by black people who don’t fit the familiar model.
  • Similarly, many activists are not in fact “pro-black”: they are pro a rather specific conception of “blackness” that is not necessarily endorsed by all black people.
  • This is where our new website, Free Black Thought (FBT), seeks to intervene in the national conversation. FBT honors black individuals for their distinctive, diverse, and heterodox perspectives, and offers up for all to hear a polyphony, perhaps even a cacophony, of different and differing black voices.
  • The practical effects of the new antiracism are everywhere to be seen, but in few places more clearly than in our children’s schools
  • one might reasonably question what could be wrong with teaching children “antiracist” precepts. But the details here are full of devils.
  • To take an example that could affect millions of students, the state of California has adopted a statewide Ethnic Studies Model Curriculum (ESMC) that reflects “antiracist” ideas. The ESMC’s content inadvertently confirms that contemporary antiracism is often not so much an extension of the civil rights movement but in certain respects a tacit abandonment of its ideals.
  • It has thus been condemned as a “perversion of history” by Dr. Clarence Jones, MLK’s legal counsel, advisor, speechwriter, and Scholar in Residence at the Martin Luther King, Jr. Institute at Stanford University:
  • Essentialist thinking about race has also gained ground in some schools. For example, in one elite school, students “are pressured to conform their opinions to those broadly associated with their race and gender and to minimize or dismiss individual experiences that don’t match those assumptions.” These students report feeling that “they must never challenge any of the premises of [the school’s] ‘antiracist’ teachings.”
  • In contrast, the non-white students were taught that they were “folx (sic) who do not benefit from their social identities,” and “have little to no privilege and power.”
  • The children with “white” in their identity map were taught that they were part of the “dominant culture” which has been “created and maintained…to hold power and stay in power.” They were also taught that they had “privilege” and that “those with privilege have power over others.
  • Or consider the third-grade students at R.I. Meyerholz Elementary School in Cupertino, California
  • Or take New York City’s public school system, one of the largest educators of non-white children in America. In an effort to root out “implicit bias,” former Schools Chancellor Richard Carranza had his administrators trained in the dangers of “white supremacy culture.”
  • A slide from a training presentation listed “perfectionism,” “individualism,” “objectivity” and “worship of the written word” as white supremacist cultural traits to be “dismantled,”
  • Finally, some schools are adopting antiracist ideas of the sort espoused by Ibram X. Kendi, according to whom, if metrics such as tests and grades reveal disparities in achievement, the project of measuring achievement must itself be racist.
  • Parents are justifiably worried about such innovations. What black parent wants her child to hear that grading or math are “racist” as a substitute for objective assessment and real learning? What black parent wants her child told she shouldn’t worry about working hard, thinking objectively, or taking a deep interest in reading and writing because these things are not authentically black?
  • Clearly, our children’s prospects for success depend on the public being able to have an honest and free-ranging discussion about this new antiracism and its utilization in schools. Even if some black people have adopted its tenets, many more, perhaps most, hold complex perspectives that draw from a constellation of rather different ideologies.
  • So let’s listen to what some heterodox black people have to say about the new antiracism in our schools.
  • Coleman Hughes, a fellow at the Manhattan Institute, points to a self-defeating feature of Kendi-inspired grading and testing reforms: If we reject high academic standards for black children, they are unlikely to rise to “those same rejected standards” and racial disparity is unlikely to decrease
  • Chloé Valdary, the founder of Theory of Enchantment, worries that antiracism may “reinforce a shallow dogma of racial essentialism by describing black and white people in generalizing ways” and discourage “fellowship among peers of different races.”
  • We hope it’s obvious that the point we’re trying to make is not that everyone should accept uncritically everything these heterodox black thinkers say. Our point in composing this essay is that we all desperately need to hear what these thinkers say so we can have a genuine conversation
  • We promote no particular politics or agenda beyond a desire to offer a wide range of alternatives to the predictable fare emanating from elite mainstream outlets. At FBT, Marxists rub shoulders with laissez-faire libertarians. We have no desire to adjudicate who is “authentically black” or whom to prefer.
Javier E

MacIntyre | Internet Encyclopedia of Philosophy - 0 views

  • For MacIntyre, “rationality” comprises all the intellectual resources, both formal and substantive, that we use to judge truth and falsity in propositions, and to determine choice-worthiness in courses of action
  • Rationality in this sense is not universal; it differs from community to community and from person to person, and may both develop and regress over the course of a person’s life or a community’s history.
  • So rationality itself, whether theoretical or practical, is a concept with a history: indeed, since there are also a diversity of traditions of enquiry, with histories, there are, so it will turn out, rationalities rather than rationality, just as it will also turn out that there are justices rather than justice
  • Rationality is the collection of theories, beliefs, principles, and facts that the human subject uses to judge the world, and a person’s rationality is, to a large extent, the product of that person’s education and moral formation.
  • To the extent that a person accepts what is handed down from the moral and intellectual traditions of her or his community in learning to judge truth and falsity, good and evil, that person’s rationality is “tradition-constituted.” Tradition-constituted rationality provides the schemata by which we interpret, understand, and judge the world we live in
  • The apparent problem of relativism in MacIntyre’s theory of rationality is much like the problem of relativism in the philosophy of science. Scientific claims develop within larger theoretical frameworks, so that the apparent truth of a scientific claim depends on one’s judgment of the larger framework. The resolution of the problem of relativism therefore appears to hang on the possibility of judging frameworks or rationalities, or judging between frameworks or rationalities from a position that does not presuppose the truth of the framework or rationality, but no such theoretical standpoint is humanly possible.
  • MacIntyre finds that the world itself provides the criterion for the testing of rationalities, and he finds that there is no criterion except the world itself that can stand as the measure of the truth of any philosophical theory.
  • MacIntyre’s philosophy is indebted to the philosophy of science, which recognizes the historicism of scientific enquiry even as it seeks a truthful understanding of the world. MacIntyre’s philosophy does not offer a priori certainty about any theory or principle; it examines the ways in which reflection upon experience supports, challenges, or falsifies theories that have appeared to be the best theories so far to the people who have accepted them so far. MacIntyre’s ideal enquirers remain Hamlets, not Emmas.
  • history shows us that individuals, communities, and even whole nations may commit themselves militantly over long periods of their histories to doctrines that their ideological adversaries find irrational. This qualified relativism of appearances has troublesome implications for anyone who believes that philosophical enquiry can easily provide certain knowledge of the world
  • According to MacIntyre, theories govern the ways that we interpret the world and no theory is ever more than “the best standards so far” (3RV, p. 65). Our theories always remain open to improvement, and when our theories change, the appearances of our world—the apparent truths of claims judged within those theoretical frameworks—change with them.
  • From the subjective standpoint of the human enquirer, MacIntyre finds that theories, concepts, and facts all have histories, and they are all liable to change—for better or for worse.
  • MacIntyre holds that the rationality of individuals is not only tradition-constituted, it is also tradition-constitutive, as individuals make their own contributions to their own rationality, and to the rationalities of their communities. Rationality is not fixed, within either the history of a community or the life of a person
  • The modern account of first principles justifies an approach to philosophy that rejects tradition. The modern liberal individualist approach is anti-traditional. It denies that our understanding is tradition-constituted and it denies that different cultures may differ in their standards of rationality and justice:
  • Modernity does not see tradition as the key that unlocks moral and political understanding, but as a superfluous accumulation of opinions that tend to prejudice moral and political reasoning.
  • Although modernity rejects tradition as a method of moral and political enquiry, MacIntyre finds that it nevertheless bears all the characteristics of a moral and political tradition.
  • If historical narratives are only projections of the interests of historians, then it is difficult to see how this historical narrative can claim to be truthful
  • For these post-modern theorists, “if the Enlightenment conceptions of truth and rationality cannot be sustained,” either relativism or perspectivism “is the only possible alternative” (p. 353). MacIntyre rejects both challenges by developing his theory of tradition-constituted and tradition-constitutive rationality on pp. 354-369
  • How, then, is one to settle challenges between two traditions? It depends on whether the adherents of either take the challenges of the other tradition seriously. It depends on whether the adherents of either tradition, on seeing a failure in their own tradition are willing to consider an answer offered by their rival (p. 355)
  • how a person with no traditional affiliation is to deal with the conflicting claims of rival traditions: “The initial answer is: that will depend upon who you are and how you understand yourself. This is not the kind of answer which we have been educated to expect in philosophy”
  • MacIntyre focuses the critique of modernity on the question of rational justification. Modern epistemology stands or falls on the possibility of Cartesian epistemological first principles. MacIntyre’s history exposes that notion of first principle as a fiction, and at the same time demonstrates that rational enquiry advances (or declines) only through tradition
  • MacIntyre cites Foucault’s 1966 book, Les Mots et les choses (The Order of Things, 1970) as an example of the self-subverting character of Genealogical enquiry
  • Foucault’s book reduces history to a procession of “incommensurable ordered schemes of classification and representation” none of which has any greater claim to truth than any other, yet this book “is itself organized as a scheme of classification and representation.”
  • From MacIntyre’s perspective, there is no question of deciding whether or not to work within a tradition; everyone who struggles with practical, moral, and political questions simply does. “There is no standing ground, no place for enquiry . . . apart from that which is provided by some particular tradition or other”
  • Three Rival Versions of Moral Enquiry (1990). The central idea of the Gifford Lectures is that philosophers make progress by addressing the shortcomings of traditional narratives about the world, shortcomings that become visible either through the failure of traditional narratives to make sense of experience, or through the introduction of contradictory narratives that prove impossible to dismiss
  • MacIntyre compares three traditions exemplified by three literary works published near the end of Adam Gifford’s life (1820–1887)
  • The Ninth Edition of the Encyclopaedia Britannica (1875–1889) represents the modern tradition of trying to understand the world objectively without the influence of tradition.
  • The Genealogy of Morals (1887), by Friedrich Nietzsche embodies the post-modern tradition of interpreting all traditions as arbitrary impositions of power.
  • The encyclical letter Aeterni Patris (1879) of Pope Leo XIII exemplifies the approach of acknowledging one’s predecessors within one’s own tradition of enquiry and working to advance or improve that tradition in the pursuit of objective truth. 
  • Of the three versions of moral enquiry treated in 3RV, only tradition, exemplified in 3RV by the Aristotelian, Thomistic tradition, understands itself as a tradition that looks backward to predecessors in order to understand present questions and move forward
  • Encyclopaedia obscures the role of tradition by presenting the most current conclusions and convictions of a tradition as if they had no history, and as if they represented the final discovery of unalterable truth
  • Encyclopaedists focus on the present and ignore the past.
  • Genealogists, on the other hand, focus on the past in order to undermine the claims of the present.
  • In short, Genealogy denies the teleology of human enquiry by denying (1) that historical enquiry has been fruitful, (2) that the enquiring person has a real identity, and (3) that enquiry has a real goal. MacIntyre finds this mode of enquiry incoherent.
  • Genealogy is self-deceiving insofar as it ignores the traditional and teleological character of its enquiry.
  • Genealogical moral enquiry must make similar exceptions to its treatments of the unity of the enquiring subject and the teleology of moral enquiry; thus “it seems to be the case that the intelligibility of genealogy requires beliefs and allegiances of a kind precluded by the genealogical stance” (3RV, pp. 54-55)
  • MacIntyre uses Thomism because it applies the traditional mode of enquiry in a self-conscious manner. Thomistic students learn the work of philosophical enquiry as apprentices in a craft (3RV, p. 61), and maintain the principles of the tradition in their work to extend the understanding of the tradition, even as they remain open to the criticism of those principles.
  • 3RV uses Thomism as its example of tradition, but this use should not suggest that MacIntyre identifies “tradition” with Thomism or Thomism-as-a-name-for-the-Western-tradition. As noted above, WJWR distinguished four traditions of enquiry within the Western European world alone
  • MacIntyre’s emphasis on the temporality of rationality in traditional enquiry makes tradition incompatible with the epistemological projects of modern philosophy
  • Tradition is not merely conservative; it remains open to improvement,
  • Tradition differs from both encyclopaedia and genealogy in the way it understands the place of its theories in the history of human enquiry. The adherent of a tradition must understand that “the rationality of a craft is justified by its history so far,” thus it “is inseparable from the tradition through which it was achieved”
  • MacIntyre uses Thomas Aquinas to illustrate the revolutionary potential of traditional enquiry. Thomas was educated in Augustinian theology and Aristotelian philosophy, and through this education he began to see not only the contradictions between the two traditions, but also the strengths and weaknesses that each tradition revealed in the other. His education also helped him to discover a host of questions and problems that had to be answered and solved. Many of Thomas Aquinas’ responses to these concerns took the form of disputed questions. “Yet to each question the answer produced by Aquinas as a conclusion is no more than and, given Aquinas’s method, cannot but be no more than, the best answer reached so far. And hence derives the essential incompleteness”
  • argue that the virtues are essential to the practice of independent practical reason. The book is relentlessly practical; its arguments appeal only to experience and to purposes, and to the logic of practical reasoning.
  • Like other intelligent animals, human beings enter life vulnerable, weak, untrained, and unknowing, and face the likelihood of infirmity in sickness and in old age. Like other social animals, humans flourish in groups. We learn to regulate our passions, and to act effectively alone and in concert with others through an education provided within a community. MacIntyre’s position allows him to look to the animal world to find analogies to the role of social relationships in the moral formation of human beings
  • The task for the human child is to make “the transition from the infantile exercise of animal intelligence to the exercise of independent practical reasoning” (DRA, p. 87). For a child to make this transition is “to redirect and transform her or his desires, and subsequently to direct them consistently towards the goods of different stages of her or his life” (DRA, p. 87). The development of independent practical reason in the human agent requires the moral virtues in at least three ways.
  • DRA presents moral knowledge as a “knowing how,” rather than as a “knowing that.” Knowledge of moral rules is not sufficient for a moral life; prudence is required to enable the agent to apply the rules well.
  • “Knowing how to act virtuously always involves more than rule-following” (DRA, p. 93). The prudent person can judge what must be done in the absence of a rule and can also judge when general norms cannot be applied to particular cases.
  • Flourishing as an independent practical reasoner requires the virtues in a second way, simply because sometimes we need our friends to tell us who we really are. Independent practical reasoning also requires self-knowledge, but self-knowledge is impossible without the input of others whose judgment provides a reliable touchstone to test our beliefs about ourselves. Self-knowledge therefore requires the virtues that enable an agent to sustain formative relationships and to accept the criticism of trusted friends
  • Human flourishing requires the virtues in a third way, by making it possible to participate in social and political action. They enable us to “protect ourselves and others against neglect, defective sympathies, stupidity, acquisitiveness, and malice” (DRA, p. 98) by enabling us to form and sustain social relationships through which we may care for one another in our infirmities, and pursue common goods with and for the other members of our societies.
  • MacIntyre argues that it is impossible to find an external standpoint, because rational enquiry is an essentially social work (DRA, p. 156-7). Because it is social, shared rational enquiry requires moral commitment to, and practice of, the virtues to prevent the more complacent members of communities from closing off critical reflection upon “shared politically effective beliefs and concepts”
  • MacIntyre finds himself compelled to answer what may be called the question of moral provincialism: If one is to seek the truth about morality and justice, it seems necessary to “find a standpoint that is sufficiently external to the evaluative attitudes and practices that are to be put to the question.” If it is impossible for the agent to take such an external standpoint, if the agent’s commitments preclude radical criticism of the virtues of the community, does that leave the agent “a prisoner of shared prejudices” (DRA, p. 154)?
  • The book moves from MacIntyre’s assessment of human needs for the virtues to the political implications of that assessment. Social and political institutions that form and enable independent practical reasoning must “satisfy three conditions.” (1) They must enable their members to participate in shared deliberations about the communities’ actions. (2) They must establish norms of justice “consistent with exercise of” the virtue of justice. (3) They must enable the strong “to stand proxy” as advocates for the needs of the weak and the disabled.
  • The social and political institutions that MacIntyre recommends cannot be identified with the modern nation state or the modern nuclear family
  • The political structures necessary for human flourishing are essentially local
  • Yet local communities support human flourishing only when they actively support “the virtues of just generosity and shared deliberation”
  • MacIntyre rejects individualism and insists that we view human beings as members of communities who bear specific debts and responsibilities because of our social identities. The responsibilities one may inherit as a member of a community include debts to one’s forebears that one can only repay to people in the present and future
  • The constructive argument of the second half of the book begins with traditional accounts of the excellences or virtues of practical reasoning and practical rationality rather than virtues of moral reasoning or morality. These traditional accounts define virtue as aretē, as excellence
  • Practices are supported by institutions like chess clubs, hospitals, universities, industrial corporations, sports leagues, and political organizations.
  • Practices exist in tension with these institutions, since the institutions tend to be oriented to goods external to practices. Universities, hospitals, and scholarly societies may value prestige, profitability, or relations with political interest groups above excellence in the practices they are said to support.
  • Personal desires and institutional pressures to pursue external goods may threaten to derail practitioners’ pursuits of the goods internal to practices. MacIntyre defines virtue initially as the quality of character that enables an agent to overcome these temptations:
  • “A virtue is an acquired human quality the possession and exercise of which tends to enable us to achieve those goods which are internal to practices
  • Excellence as a human agent cannot be reduced to excellence in a particular practice (See AV, pp. 204–
  • The virtues therefore are to be understood as those dispositions which will not only sustain practices and enable us to achieve the goods internal to practices, but which will also sustain us in the relevant kind of quest for the good, by enabling us to overcome the harms, dangers, temptations, and distractions which we encounter, and which will furnish us with increasing self-knowledge and increasing knowledge of the good (AV, p. 219).
  • The excellent human agent has the moral qualities to seek what is good and best both in practices and in life as a whole.
  • The virtues find their point and purpose not only in sustaining those relationships necessary if the variety of goods internal to practices are to be achieved and not only in sustaining the form of an individual life in which that individual may seek out his or her good as the good of his or her whole life, but also in sustaining those traditions which provide both practices and individual lives with their necessary historical context (AV, p. 223)
  • Since “goods, and with them the only grounds for the authority of laws and virtues, can only be discovered by entering into those relationships which constitute communities whose central bond is a shared vision of and understanding of goods” (AV, p. 258), any hope for the transformation and renewal of society depends on the development and maintenance of such communities.
  • MacIntyre’s Aristotelian approach to ethics as a study of human action distinguishes him from post-Kantian moral philosophers who approach ethics as a means of determining the demands of objective, impersonal, universal morality
  • This modern approach may be described as moral epistemology. Modern moral philosophy pretends to free the individual to determine for her- or himself what she or he must do in a given situation, irrespective of her or his own desires; it pretends to give knowledge of universal moral laws
  • Aristotelian metaphysicians, particularly Thomists who define virtue in terms of the perfection of nature, rejected MacIntyre’s contention that an adequate Aristotelian account of virtue as excellence in practical reasoning and human action need not appeal to Aristotelian metaphysics
  • one group of critics rejects MacIntyre’s Aristotelianism because they hold that any Aristotelian account of the virtues must first account for the truth about virtue in terms of Aristotle’s philosophy of nature, which MacIntyre had dismissed in AV as “metaphysical biology”
  • Many of those who rejected MacIntyre’s turn to Aristotle define “virtue” primarily along moral lines, as obedience to law or adherence to some kind of natural norm. For these critics, “virtuous” appears synonymous with “morally correct;” their resistance to MacIntyre’s appeal to virtue stems from their difficulties either with what they take to be the shortcomings of MacIntyre’s account of moral correctness or with the notion of moral correctness altogether
  • MacIntyre continues to argue from the experience of practical reasoning to the demands of moral education.
  • Descartes and his successors, by contrast, along with certain “notable Thomists of the last hundred years” (p. 175), have proposed that philosophy begins from knowledge of some “set of necessarily true first principles which any truly rational person is able to evaluate as true” (p. 175). Thus for the moderns, philosophy is a technical rather than moral endeavor
  • MacIntyre distinguishes two related challenges to his position, the “relativist challenge” and the “perspectivist challenge.” These two challenges both acknowledge that the goals of the Enlightenment cannot be met and that, “the only available standards of rationality are those made available by and within traditions” (p. 252); they conclude that nothing can be known to be true or false
  • MacIntyre follows the progress of the Western tradition through “three distinct traditions:” from Homer and Aristotle to Thomas Aquinas, from Augustine to Thomas Aquinas and from Augustine through Calvin to Hume
  • Chapter 17 examines the modern liberal denial of tradition, and the ironic transformation of liberalism into the fourth tradition to be treated in the book.
  • MacIntyre credits John Stuart Mill and Thomas Aquinas as “two philosophers of the kind who by their writing send us beyond philosophy into immediate encounter with the ends of life”
  • First, both were engaged by questions about the ends of life as questioning human beings and not just as philosophers. . . .
  • Secondly, both Mill and Aquinas understood their speaking and writing as contributing to an ongoing philosophical conversation. . . .
  • Thirdly, it matters that both the end of the conversation and the good of those who participate in it is truth and that the nature of truth, of good, of rational justification, and of meaning therefore have to be central topics of that conversation (Tasks, pp. 130-1).
  • Without these three characteristics, philosophy is first reduced to “the exercise of a set of analytic and argumentative skills. . . . Secondly, philosophy may thereby become a diversion from asking questions about the ends of life with any seriousness”
  • Neither Rosenzweig nor Lukács made philosophical progress because both failed to relate “their questions about the ends of life to the ends of their philosophical writing”
  • First, any adequate philosophical history or biography must determine whether the authors studied remain engaged with the questions that philosophy studies, or set the questions aside in favor of the answers. Second, any adequate philosophical history or biography must determine whether the authors studied insulated themselves from contact with conflicting worldviews or remained open to learning from every available philosophical approach. Third, any adequate philosophical history or biography must place the authors studied into a broader context that shows what traditions they come from and “whose projects” they are “carrying forward
  • MacIntyre’s recognition of the connection between an author’s pursuit of the ends of life and the same author’s work as a philosophical writer prompts him to finish the essay by demanding three things of philosophical historians and biographers
  • Philosophy is not just a study; it is a practice. Excellence in this practice demands that an author bring her or his struggles with the questions of the ends of philosophy into dialogue with historic and contemporary texts and authors in the hope of making progress in answering those questions
  • MacIntyre defends Thomistic realism as rational enquiry directed to the discovery of truth.
  • The three Thomistic essays in this book challenge those caricatures by presenting Thomism in a way that people outside of contemporary Thomistic scholarship may find surprisingly flexible and open
  • To be a moral agent, (1) one must understand one’s individual identity as transcending all the roles that one fills; (2) one must see oneself as a practically rational individual who can judge and reject unjust social standards; and (3) one must understand oneself as “as accountable to others in respect of the human virtues and not just in respect of [one’s] role-performances
  • J is guilty because he complacently accepted social structures that he should have questioned, structures that undermined his moral agency. This essay shows that MacIntyre’s ethics of human agency is not just a descriptive narrative about the manner of moral education; it is a standard laden account of the demands of moral agency.
  • MacIntyre considers “the case of J” (J, for jemand, the German word for “someone”), a train controller who learned, as a standard for his social role, to take no interest in what his trains carried, even during war time when they carried “munitions and . . . Jews on their way to extermination camps”
  • J had learned to do his work for the railroad according to one set of standards and to live other parts of his life according to other standards, so that this compliant participant in “the final solution” could contend, “You cannot charge me with moral failure” (E&P, p. 187).
  • The epistemological theories of Modern moral philosophy were supposed to provide rational justification for rules, policies, and practical determinations according to abstract universal standards, but MacIntyre has dismissed those theories
  • Modern metaethics is supposed to enable its practitioners to step away from the conflicting demands of contending moral traditions and to judge those conflicts from a neutral position, but MacIntyre has rejected this project as well
  • In his ethical writings, MacIntyre seeks only to understand how to liberate the human agent from blindness and stupidity, to prepare the human agent to recognize what is good and best to do in the concrete circumstances of that agent’s own life, and to strengthen the agent to follow through on that judgment.
  • In his political writings, MacIntyre investigates the role of communities in the formation of effective rational agents, and the impact of political institutions on the lives of communities. This kind of ethics and politics is appropriately named the ethics of human agency.
  • The purpose of the modern moral philosophy of authors like Kant and Mill was to determine, rationally and universally, what kinds of behavior ought to be performed—not in terms of the agent’s desires or goals, but in terms of universal, rational duties. Those theories purported to let agents know what they ought to do by providing knowledge of duties and obligations, thus they could be described as theories of moral epistemology.
  • Contemporary virtue ethics purports to let agents know what qualities human beings ought to have, and the reasons that we ought to have them, not in terms of our fitness for human agency, but in the same universal, disinterested, non-teleological terms that it inherits from Kant and Mill.
  • For MacIntyre, moral knowledge remains a “knowing how” rather than a “knowing that;” MacIntyre seeks to identify those moral and intellectual excellences that make human beings more effective in our pursuit of the human good.
  • MacIntyre’s purpose in his ethics of human agency is to consider what it means to seek one’s good, what it takes to pursue one’s good, and what kind of a person one must become if one wants to pursue that good effectively as a human agent.
  • As a philosophy of human agency, MacIntyre’s work belongs to the traditions of Aristotle and Thomas Aquinas.
  • in keeping with the insight of Marx’s third thesis on Feuerbach, it maintained the common condition of theorists and people as peers in the pursuit of the good life.
  • He holds that the human good plays a role in our practical reasoning whether we recognize it or not, so that some people may do well without understanding why (E&P, p. 25). He also reads Aristotle as teaching that knowledge of the good can make us better agents
  • AV defines virtue in terms of the practical requirements for excellence in human agency, in an agent’s participation in practices (AV, ch. 14), in an agent’s whole life, and in an agent’s involvement in the life of her or his community
  • MacIntyre’s Aristotelian concept of “human action” opposes the notion of “human behavior” that prevailed among mid-twentieth-century determinist social scientists. Human actions, as MacIntyre understands them, are acts freely chosen by human agents in order to accomplish goals that those agents pursue
  • Human behavior, according to mid-twentieth-century determinist social scientists, is the outward activity of a subject, which is said to be caused entirely by environmental influences beyond the control of the subject.
  • Rejecting crude determinism in social science, and approaches to government and public policy rooted in determinism, MacIntyre sees the renewal of human agency and the liberation of the human agent as central goals for ethics and politics.
  • MacIntyre’s Aristotelian account of “human action” examines the habits that an agent must develop in order to judge and act most effectively in the pursuit of truly choice-worthy ends
  • MacIntyre seeks to understand what it takes for the human person to become the kind of agent who has the practical wisdom to recognize what is good and best to do and the moral freedom to act on her or his best judgment.
  • MacIntyre rejected the determinism of modern social science early in his career (“Determinism,” 1957), yet he recognizes that the ability to judge well and act freely is not simply given; excellence in judgment and action must be developed, and it is the task of moral philosophy to discover how these excellences or virtues of the human agent are established, maintained, and strengthened
  • MacIntyre’s Aristotelian philosophy investigates the conditions that support free and deliberate human action in order to propose a path to the liberation of the human agent through participation in the life of a political community that seeks its common goods through the shared deliberation and action of its members
  • As a classics major at Queen Mary College in the University of London (1945-1949), MacIntyre read the Greek texts of Plato and Aristotle, but his studies were not limited to the grammars of ancient languages. He also examined the ethical theories of Immanuel Kant and John Stuart Mill. He attended the lectures of analytic philosopher A. J. Ayer and of philosopher of science Karl Popper. He read Ludwig Wittgenstein’s Tractatus Logico-Philosophicus, Jean-Paul Sartre’s L'existentialisme est un humanisme, and Marx’s Eighteenth Brumaire of Napoleon Bonaparte (What happened, pp. 17-18). MacIntyre met the sociologist Franz Steiner, who helped direct him toward approaching moralities substantively
  • Alasdair MacIntyre’s philosophy builds on an unusual foundation. His early life was shaped by two conflicting systems of values. One was “a Gaelic oral culture of farmers and fishermen, poets and storytellers.” The other was modernity, “The modern world was a culture of theories rather than stories” (MacIntyre Reader, p. 255). MacIntyre embraced both value systems
  • From Marxism, MacIntyre learned to see liberalism as a destructive ideology that undermines communities in the name of individual liberty and consequently undermines the moral formation of human agents
  • For MacIntyre, Marx’s way of seeing through the empty justifications of arbitrary choices to consider the real goals and consequences of political actions in economic and social terms would remain the principal insight of Marxism
  • Since his retirement from teaching, MacIntyre has continued his work of promoting a renewal of human agency through an examination of the virtues demanded by practices, integrated human lives, and responsible engagement with community life. He is currently affiliated with the Centre for Contemporary Aristotelian Studies in Ethics and Politics (CASEP) at London Metropolitan University.
  • The second half of AV proposes a conception of practice and practical reasoning and the notion of excellence as a human agent as an alternative to modern moral philosophy
  • AV rejects the view of “modern liberal individualism” in which autonomous individuals use abstract moral principles to determine what they ought to do. The critique of modern normative ethics in the first half of AV rejects modern moral reasoning for its failure to justify its premises, and criticizes the frequent use of the rhetoric of objective morality and scientific necessity to manipulate people to accept arbitrary decisions
  • MacIntyre uses “modern liberal individualism” to name a much broader category that includes both liberals and conservatives in contemporary American political parlance, as well as some Marxists and anarchists (See ASIA, pp. 280-284). Conservatism, liberalism, Marxism, and anarchism all present the autonomous individual as the unit of civil society
  • The sources of modern liberal individualism—Hobbes, Locke, and Rousseau—assert that human life is solitary by nature and social by habituation and convention. MacIntyre’s Aristotelian tradition holds, on the contrary, that human life is social by nature.
  • MacIntyre identifies moral excellence with effective human agency, and seeks a political environment that will help to liberate human agents to recognize and seek their own goods, as components of the common goods of their communities, more effectively. For MacIntyre therefore, ethics and politics are bound together.
  • For MacIntyre ethics is not an application of principles to facts, but a study of moral action. Moral action, free human action, involves decisions to do things in pursuit of goals, and it involves the understanding of the implications of one’s actions for the whole variety of goals that human agents seek
  • In this sense, “To act morally is to know how to act” (SMJ, p. 56). “Morality is not a ‘knowing that’ but a ‘knowing how’”
  • If human action is a ‘knowing how,’ then ethics must also consider how one learns ‘how.’ Like other forms of ‘knowing how,’ MacIntyre finds that one learns how to act morally within a community whose language and shared standards shape our judgment
  • MacIntyre had concluded that ethics is not an abstract exercise in the assessment of facts; it is a study of free human action and of the conditions that enable rational human agency.
  • MacIntyre gives Marx credit for concluding in the third of the Theses on Feuerbach, that the only way to change society is to change ourselves, and that “The coincidence of the changing of human activity or self-changing can only be comprehended and rationally understood as revolutionary practice”
  • MacIntyre distinguishes “religion which is an opiate for the people from religion which is not” (MI, p. 83). He condemns forms of religion that justify social inequities and encourage passivity. He argues that authentic Christian teaching criticizes social structures and encourages action
  • Where “moral philosophy textbooks” discuss the kinds of maxims that should guide “promise-keeping, truth-telling, and the like,” moral maxims do not guide real agents in real life at all. “They do not guide us because we do not need to be guided. We know what to do” (ASIA, p. 106). Sometimes we do this without any maxims at all, or even against all the maxims we know. MacIntyre illustrates his point with Huckleberry Finn’s decision to help Jim, Miss Watson’s escaped slave, to make his way to freedom.
  • MacIntyre develops the ideas that morality emerges from history, and that morality organizes the common life of a community
  • The book concludes that the concepts of morality are neither timeless nor ahistorical, and that understanding the historical development of ethical concepts can liberate us “from any false absolutist claims” (SHE, p. 269). Yet this conclusion need not imply that morality is essentially arbitrary or that one could achieve freedom by liberating oneself from the morality of one’s society.
  • From this “Aristotelian point of view,” “modern morality” begins to go awry when moral norms are separated from the pursuit of human goods and moral behavior is treated as an end in itself. This separation characterizes Christian divine command ethics since the fourteenth century and has remained essential to secularized modern morality since the eighteenth century
  • From MacIntyre’s “Aristotelian point of view,” the autonomy granted to the human agent by modern moral philosophy breaks down natural human communities and isolates the individual from the kinds of formative relationships that are necessary to shape the agent into an independent practical reasoner.
  • the 1977 essay “Epistemological Crises, Dramatic Narrative, and the Philosophy of Science” (hereafter EC). This essay, MacIntyre reports, “marks a major turning-point in my thought in the 1970s” (The Tasks of Philosophy, p. vii). EC may be described fairly as MacIntyre’s discourse on method
  • It presents three general points on the method for philosophy.
  • First, philosophy makes progress through the resolution of problems. These problems arise when the theories, histories, doctrines, and other narratives that help us to organize our experience of the world fail us, leaving us in “epistemological crises.” Epistemological crises are the aftermath of events that undermine the ways that we interpret our world
  • To live in an epistemological crisis is to be aware that one does not know what one thought one knew about some particular subject and to be anxious to recover certainty about that subject.
  • To resolve an epistemological crisis it is not enough to impose some new way of interpreting our experience; we also need to understand why we were wrong before: “When an epistemological crisis is resolved, it is by the construction of a new narrative which enables the agent to understand both how he or she could intelligibly have held his or her original beliefs and how he or she could have been so drastically misled by them.”
  • To illustrate his position on the open-endedness of enquiry, MacIntyre compares the title characters of Shakespeare’s Hamlet and Jane Austen’s Emma. When Emma finds that she is deeply misled in her beliefs about the other characters in her story, Mr. Knightley helps her to learn the truth and the story comes to a happy ending (p. 6). Hamlet, by contrast, finds no pat answers to his questions; rival interpretations remain throughout the play, so that directors who would stage the play have to impose their own interpretations on the script
  • MacIntyre notes, “Philosophers have customarily been Emmas and not Hamlets” (p. 6); that is, philosophers have treated their conclusions as accomplished truths, rather than as “more adequate narratives” (p. 7) that remain open to further improvement.
  • Another approach to education is the method of Descartes, who begins by rejecting everything that is not clearly and distinctly true as unreliable and false in order to rebuild his understanding of the world on a foundation of undeniable truth.
  • Descartes presents himself as willfully rejecting everything he had believed, and ignores his obvious debts to the Scholastic tradition, even as he argues his case in French and Latin. For MacIntyre, seeking epistemological certainty through universal doubt as a precondition for enquiry is a mistake: “it is an invitation not to philosophy but to mental breakdown, or rather to philosophy as a means of mental breakdown.”
  • MacIntyre contrasts Descartes’ descent into mythical isolation with Galileo, who was able to make progress in astronomy and physics by struggling with the apparently insoluble questions of late medieval astronomy and physics, and radically reinterpreting the issues that constituted those questions
  • To make progress in philosophy one must sort through the narratives that inform one’s understanding, struggle with the questions that those narratives raise, and on occasion, reject, replace, or reinterpret portions of those narratives and propose those changes to the rest of one’s community for assessment. Human enquiry is always situated within the history and life of a community.
  • The third point of EC is that we can learn about progress in philosophy from the philosophy of science
  • Kuhn’s “paradigm shifts,” however, are unlike MacIntyre’s resolutions of epistemological crises in two ways.
  • First, they are not rational responses to specific problems. Kuhn compares paradigm shifts to religious conversions (pp. 150, 151, 158), stressing that they are not guided by rational norms, and he claims that the “mopping up” phase of a paradigm shift is a matter of convention in the training of new scientists and attrition among the holdouts of the previous paradigm
  • Second, the new paradigm is treated as a closed system of belief that regulates a new period of “normal science”; Kuhn’s revolutionary scientists are Emmas, not Hamlets
  • MacIntyre proposes elements of Imre Lakatos’ philosophy of science as correctives to Kuhn’s. While Lakatos has his own shortcomings, his general account of the methodologies of scientific research programs recognizes the role of reason in the transitions between theories and between research programs (Lakatos’ analog to Kuhn’s paradigms or disciplinary matrices). Lakatos presents science as an open-ended enquiry, in which every theory may eventually be replaced by more adequate theories. For Lakatos, unlike Kuhn, rational scientific progress occurs when a new theory can account both for the apparent promise and for the actual failure of the theory it replaces.
  • The third conclusion of MacIntyre’s essay is that decisions to support some theories over others may be justified rationally to the extent that those theories allow us to understand our experience and our history, including the history of the failures of inadequate theories
  • For Aristotle, moral philosophy is a study of practical reasoning, and the excellences or virtues that Aristotle recommends in the Nicomachean Ethics are the intellectual and moral excellences that make a moral agent effective as an independent practical reasoner.
  • MacIntyre begins by examining the current condition of secular moral and political discourse. He finds contending parties defending their decisions by appealing to abstract moral principles, but he finds their appeals eclectic, inconsistent, and incoherent.
  • MacIntyre also finds that the contending parties have little interest in the rational justification of the principles they use. The language of moral philosophy has become a kind of moral rhetoric to be used to manipulate others in defense of the arbitrary choices of its users
  • The secular moral philosophers of the eighteenth and nineteenth centuries shared strong and extensive agreements about the content of morality (AV, p. 51) and believed that their moral philosophy could justify the demands of their morality rationally, free from religious authority.
  • MacIntyre traces the lineage of the culture of emotivism to the secularized Protestant cultures of northern Europe
  • Modern moral philosophy had thus set for itself an incoherent goal. It was to vindicate both the moral autonomy of the individual and the objectivity, necessity, and categorical character of the rules of morality
  • MacIntyre turns to an apparent alternative, the pragmatic expertise of professional managers. Managers are expected to appeal to the facts to make their decisions on the objective basis of effectiveness, and their authority to do this is based on their knowledge of the social sciences
  • An examination of the social sciences reveals, however, that many of the facts to which managers appeal depend on sociological theories that lack scientific status. Thus, the predictions and demands of bureaucratic managers are no less liable to ideological manipulation than the determinations of modern moral philosophers.
  • Modern moral philosophy separates moral reasoning about duties and obligations from practical reasoning about ends and practical deliberation about the means to one’s ends, and in doing so it separates morality from practice.
  • Many Europeans also lost the practical justifications for their moral norms as they approached modernity; for these Europeans, claiming that certain practices are “immoral,” and invoking Kant’s categorical imperative or Mill’s principle of utility to explain why those practices are immoral, seems no more adequate than the Polynesian appeal to taboo.
  • MacIntyre sifts these definitions and then gives his own definition of virtue, as excellence in human agency, in terms of practices, whole human lives, and traditions in chapters 14 and 15 of AV.
  • In the most often quoted sentence of AV, MacIntyre defines a practice as (1) a complex social activity that (2) enables participants to gain goods internal to the practice. (3) Participants achieve excellence in practices by gaining the internal goods. When participants achieve excellence, (4) the social understandings of excellence in the practice, of the goods of the practice, and of the possibility of achieving excellence in the practice “are systematically extended”
  • Practices, like chess, medicine, architecture, mechanical engineering, football, or politics, offer their practitioners a variety of goods both internal and external to these practices. The goods internal to practices include forms of understanding or physical abilities that can be acquired only by pursuing excellence in the associated practice
  • Goods external to practices include wealth, fame, prestige, and power; there are many ways to gain these external goods. They can be earned or purchased, either honestly or through deception; thus the pursuit of these external goods may conflict with the pursuit of the goods internal to practices.
  • An intelligent child is given the opportunity to win candy by learning to play chess. As long as the child plays chess only to win candy, he has every reason to cheat if by doing so he can win more candy. If the child begins to desire and pursue the goods internal to chess, however, cheating becomes irrational, because it is impossible to gain the goods internal to chess or any other practice except through an honest pursuit of excellence. Goods external to practices may nevertheless remain tempting to the practitioner.
  • Since MacIntyre finds social identity necessary for the individual, MacIntyre’s definition of the excellence or virtue of the human agent needs a social dimension:
  • These responsibilities also include debts incurred by the unjust actions of one’s predecessors.
  • The enslavement and oppression of black Americans, the subjugation of Ireland, and the genocide of the Jews in Europe remained quite relevant to the responsibilities of citizens of the United States, England, and Germany in 1981, as they still do today.
  • Thus an American who said “I never owned any slaves,” “the Englishman who says ‘I never did any wrong to Ireland,’” or “the young German who believes that being born after 1945 means that what Nazis did to Jews has no moral relevance to his relationship to his Jewish contemporaries” all exhibit a kind of intellectual and moral failure.
  • “I am born with a past, and to cut myself off from that past in the individualist mode, is to deform my present relationships” (p. 221).  For MacIntyre, there is no moral identity for the abstract individual; “The self has to find its moral identity in and through its membership in communities” (p. 221).