
TOK Friends / Group items tagged “project”


Javier E

Opinion | The 1619 Chronicles - The New York Times - 0 views

  • The 1619 Project introduced a date, previously obscure to most Americans, that ought always to have been thought of as seminal — and probably now will. It offered fresh reminders of the extent to which Black freedom was a victory gained by courageous Black Americans, and not just a gift obtained from benevolent whites.
  • in a point missed by many of the 1619 Project’s critics, it does not reject American values. As Nikole Hannah-Jones, its creator and leading voice, concluded in her essay for the project, “I wish, now, that I could go back to the younger me and tell her that her people’s ancestry started here, on these lands, and to boldly, proudly, draw the stars and those stripes of the American flag.” It’s an unabashedly patriotic thought.
  • ambition can be double-edged. Journalists are, most often, in the business of writing the first rough draft of history, not trying to have the last word on it. We are best when we try to tell truths with a lowercase t, following evidence in directions unseen, not the capital-T truth of a pre-established narrative in which inconvenient facts get discarded
  • ...25 more annotations...
  • on these points — and for all of its virtues, buzz, spinoffs and a Pulitzer Prize — the 1619 Project has failed.
  • That doesn’t mean that the project seeks to erase the Declaration of Independence from history. But it does mean that it seeks to dethrone the Fourth of July by treating American history as a story of Black struggle against white supremacy — of which the Declaration is, for all of its high-flown rhetoric, supposed to be merely a part.
  • The deleted assertions went to the core of the project’s most controversial goal, “to reframe American history by considering what it would mean to regard 1619 as our nation’s birth year.”
  • She then challenged me to find any instance in which the project stated that “using 1776 as our country’s birth date is wrong,” that it “should not be taught to schoolchildren,” and that the only one “that should be taught” was 1619. “Good luck unearthing any of us arguing that,” she added.
  • I emailed her to ask if she could point to any instances before this controversy in which she had acknowledged that her claims about 1619 as “our true founding” had been merely metaphorical. Her answer was that the idea of treating the 1619 date metaphorically should have been so obvious that it went without saying.
  • Here is an excerpt from the introductory essay to the project by The New York Times Magazine’s editor, Jake Silverstein, as it appeared in print in August 2019 (italics added):
  • “1619. It is not a year that most Americans know as a notable date in our country’s history. Those who do are at most a tiny fraction of those who can tell you that 1776 is the year of our nation’s birth. What if, however, we were to tell you that this fact, which is taught in our schools and unanimously celebrated every Fourth of July, is wrong, and that the country’s true birth date, the moment that its defining contradictions first came into the world, was in late August of 1619?”
  • In his introduction, Silverstein argues that America’s “defining contradictions” were born in August 1619, when a ship carrying 20 to 30 enslaved Africans from what is present-day Angola arrived in Point Comfort, in the English colony of Virginia. And the title page of Hannah-Jones’s essay for the project insists that “our founding ideals of liberty and equality were false when they were written.”
  • What was surprising was that in 1776 a politically formidable “defining contradiction” — “that all men are created equal” — came into existence through the Declaration of Independence. As Abraham Lincoln wrote in 1859, that foundational document would forever serve as a “rebuke and stumbling block to the very harbingers of reappearing tyranny and oppression.”
  • As for the notion that the Declaration’s principles were “false” in 1776, ideals aren’t false merely because they are unrealized, much less because many of the men who championed them, and the nation they created, hypocritically failed to live up to them.
  • These two flaws led to a third, conceptual, error. “Out of slavery — and the anti-Black racism it required — grew nearly everything that has truly made America exceptional,” writes Silverstein.
  • Nearly everything? What about, say, the ideas contained by the First Amendment? Or the spirit of openness that brought millions of immigrants through places like Ellis Island? Or the enlightened worldview of the Marshall Plan and the Berlin airlift? Or the spirit of scientific genius and discovery exemplified by the polio vaccine and the moon landing?
  • On the opposite side of the moral ledger, to what extent does anti-Black racism figure in American disgraces such as the brutalization of Native Americans, the Chinese Exclusion Act or the internment of Japanese-Americans in World War II?
  • The world is complex. So are people and their motives. The job of journalism is to take account of that complexity, not simplify it out of existence through the adoption of some ideological orthodoxy.
  • This mistake goes far to explain the 1619 Project’s subsequent scholarly and journalistic entanglements. It should have been enough to make strong yet nuanced claims about the role of slavery and racism in American history. Instead, it issued categorical and totalizing assertions that are difficult to defend on close examination.
  • It should have been enough for the project to serve as curator for a range of erudite and interesting voices, with ample room for contrary takes. Instead, virtually every writer in the project seems to sing from the same song sheet, alienating other potential supporters of the project and polarizing national debate.
  • James McPherson, the Pulitzer Prize-winning author of “Battle Cry of Freedom” and a past president of the American Historical Association. He was withering: “Almost from the outset,” McPherson told the World Socialist Web Site, “I was disturbed by what seemed like a very unbalanced, one-sided account, which lacked context and perspective.”
  • In particular, McPherson objected to Hannah-Jones’s suggestion that the struggle against slavery and racism and for civil rights and democracy was, if not exclusively then mostly, a Black one. As she wrote in her essay: “The truth is that as much democracy as this nation has today, it has been borne on the backs of Black resistance.”
  • McPherson demurs: “From the Quakers in the 18th century, on through the abolitionists in the antebellum, to the Radical Republicans in the Civil War and Reconstruction, to the N.A.A.C.P., which was an interracial organization founded in 1909, down through the civil rights movements of the 1950s and 1960s, there have been a lot of whites who have fought against slavery and racial discrimination, and against racism,” he said. “And that’s what’s missing from this perspective.”
  • Wilentz’s catalog of the project’s mistakes is extensive. Hannah-Jones’s essay claimed that by 1776 Britain was “deeply conflicted” over its role in slavery. But despite the landmark Somerset v. Stewart court ruling in 1772, which held that slavery was not supported by English common law, it remained deeply embedded in the practices of the British Empire. The essay claimed that, among Londoners, “there were growing calls to abolish the slave trade” by 1776. But the movement to abolish the British slave trade only began about a decade later — inspired, in part, Wilentz notes, by American antislavery agitation that had started in the 1760s and 1770s.
  • Leslie M. Harris, an expert on pre-Civil War African-American life and slavery. “On Aug. 19 of last year,” Harris wrote, “I listened in stunned silence as Nikole Hannah-Jones … repeated an idea that I had vigorously argued against with her fact checker: that the patriots fought the American Revolution in large part to preserve slavery in North America.”
  • The larger problem is that The Times’s editors, however much background reading they might have done, are not in a position to adjudicate historical disputes. That should have been an additional reason for the 1619 Project to seek input from, and include contributions by, an intellectually diverse range of scholarly voices. Yet not only does the project choose a side, it also brooks no doubt.
  • “It is finally time to tell our story truthfully,” the magazine declares on its 1619 cover page. Finally? Truthfully? Is The Times suggesting that distinguished historians, like the ones who have seriously disputed aspects of the project, had previously been telling half-truths or falsehoods?
  • unlike other dates, 1776 uniquely marries letter and spirit, politics and principle: The declaration that something new is born, combined with the expression of an ideal that — because we continue to believe in it even as we struggle to live up to it — binds us to the date.
  • On the other, the 1619 Project has become, partly by its design and partly because of avoidable mistakes, a focal point of the kind of intense national debate that columnists are supposed to cover, and that is being widely written about outside The Times. To avoid writing about it on account of the first scruple is to be derelict in our responsibility toward the second.
Javier E

Deep Reading Will Save Your Soul - by William Deresiewicz - 0 views

  • In today’s installment, William Deresiewicz—inspired by a student’s legacy—analyzes an important new trend: students and teachers abandoning traditional universities altogether and seeking a liberal arts education in self-fashioned programs.
  • Higher ed is at an impasse. So much about it sucks, and nothing about it is likely to change. Colleges and universities do not seem inclined to reform themselves, and if they were, they wouldn’t know how, and if they did, they couldn’t. Between bureaucratic inertia, faculty resistance, and the conflicting agendas of a heterogeneous array of stakeholders, concerted change appears to be impossible.
  • Which is not to say that interesting things aren’t happening in post-secondary (and post-tertiary) education.
  • ...40 more annotations...
  • These come, as far as I can tell, in two broad types, corresponding to the two fundamental complaints that people voice about their undergraduate experience
  • The first complaint is that college did not prepare them for the real world: that the whole exercise—papers, busywork, pointless requirements; siloed disciplines and abstract theory—seemed remote from anything that they actually might want to do with their lives. 
  • Above all, they are student-centered. Participants are enabled (and expected) to direct their education by constructing bespoke curricula out of the resources the program gives them access to. In a word, these endeavors emphasize “engagement.”
  • A student will identify a problem (a human need, an injustice, an instance of underrepresentation), then devise and implement a response (a physical system, a community-facing program, an art project). 
  • Professors were often preoccupied, with little patience for mentorship, the open-ended office-hours exploration. Classes, even in fields like philosophy, felt lifeless, impersonal, like engineering but with words instead of numbers. Worst of all were their fellow undergraduates, those climbers and careerists. “It’s hard to build your soul,” as one of my students once put it to me, “when everyone around you is trying to sell theirs.”
  • Not everything in the world is a problem, and to see the world as a series of problems is to limit the potential of both world and self. What problem does a song address? What problem will reading Voltaire help you solve, in any predictable way? The “problem” approach—the “engagement” approach, the save-the-world approach—leaves out, finally, what I’d call learning.
  • that is the second complaint that graduates tend to express: that they finished college without the feeling that they had learned anything, in this essential sense.
  • That there is a treasure out there—call it the Great Books or just great books, the wisdom of the ages or the best that has been thought and said—that its purpose is to activate the treasure inside them, that they had come to one of these splendid institutions (whose architecture speaks of culture, whose age gives earnest of depth) to be initiated into it, but that they had been denied, deprived. For unclear reasons, cheated.
  • I had students like this at Columbia and Yale. There were never a lot of them, and to judge from what’s been happening to humanities enrollments, there are fewer and fewer. (From 2013 to 2022, the number of people graduating with bachelor’s degrees in English fell by 36%. As a share of all degrees, it fell by 42%, to less than 1 in 60.)
  • They would tell me—these pilgrims, these intellectuals in embryo, these kindled souls—how hard they were finding it to get the kind of education they had come to college for.
  • what bothers me about this educational approach—the “problem” approach, the “STEAM” (STEM + arts) approach—is what it leaves out. It leaves out the humanities. It leaves out books. It leaves out literature and philosophy, history and art history and the history of religion. It leaves out any mode of inquiry—reflection, speculation, conversation with the past—that cannot be turned to immediate practical ends
  • The Catherine Project sees itself as being in the business of creating “communities of learning”; its principles include “conversation and hospitality,” “simplicity [and] transparency.” Classes (called tutorials, in keeping with the practice at St. John’s) are free (BISR’s cost $335), are capped at four to six students (at BISR, the limit is 23), run for two hours a week for twelve weeks, and skew towards the canon: the Greeks and Romans, Pascal and Kierkegaard, Dante and Cervantes (the project also hosts a large number of reading groups, which address a wider range of texts). If BISR aspires to create a fairer market for academic labor—instructors keep the lion’s share of fees—the Catherine Project functions as a gift economy (though plans are to begin to offer tutors modest honoraria).
  • As Russell Jacoby has noted, the migration of intellectuals into universities in the decades after World War II, which he documented in The Last Intellectuals, has more recently reversed itself. The rise, or re-rise, of little magazines (Dissent, Commentary, Partisan Review then; n+1, The New Inquiry, The Point, The Drift, et al. now) is part of the same story. 
  • a fourth factor. If there are students who despair at the condition of the humanities on campus, there are professors who do so as well. Many of her teachers, Hitz told me, have regular ladder appointments: “We draw academics—who attend our groups as well as leading them—because the life of the mind is dying or dead in conventional institutions.” Undergraduate teaching, she added, “is a particularly hard pull,” and the Catherine Project offers faculty the chance to teach people “who actually want to learn.”
  • I’d add, who can. Nine years ago, Stephen Greenblatt wrote: “Even the highly gifted students in my Shakespeare classes at Harvard are less likely to be touched by the subtle magic of his words than I was so many years ago or than my students were in the 1980s in Berkeley. … The problem is that their engagement with language … often seems surprisingly shallow or tepid.” By now, of course, the picture is far worse.
  • The response to the announcement of our pilot programs confirmed for me the existence of a large, unmet desire for text-based exploration, touching on the deepest questions, outside the confines of higher education
  • Applicants ranged from graduating college seniors to people in their 70s. They included teachers, artists, scientists, and doctoral students from across the disciplines; a submarine officer, a rabbinical student, an accountant, and a venture capitalist; retirees, parents of small children, and twentysomethings at the crossroads. Forms came in from India, Jordan, Brazil, and nine other foreign countries. The applicants were, as a group, tremendously impressive. If it had been possible, we would have taken many more than fifteen.
  • When asked why they wanted to participate, a number of them spoke about the pathologies of formal education. “We have a really damaged relationship to learning,” said one. “It should be fun, not scary”—as in, you feel that you’re supposed to know the answer, which as a student, as she noted, makes no sense
  • “We need opportunities for reading and exploration that lie outside the credentialing system of the modern university,” he went on, because there’s so much in the latter that cuts against “the slow way that kind of learning unfolds.”
  • “How one might choose to live.” For many of our applicants—and this, of course, is what the program is about, what the humanities are about—learning has, or ought to have, an existential weight.
  • I detected a desire to be free of forces and agendas: the university’s agenda of “relevance,” the professoriate’s agenda of political mobilization, the market’s agenda of productivity, the internet’s agenda of surveillance and addiction. In short, the whole capitalistic algorithmic ideological hairball of coerced homogeneity
  • The desire is to not be recruited, to not be instrumentalized, to remain (or become) an individual, to resist regression toward the mean, or meme.
  • That is why it’s crucial that the Matthew Strother Center has no goal—and this is true of the Catherine Project and other off-campus humanities programs, as well—beyond the pursuit of learning for its own sake.
  • This is freedom. When education isn’t pointed in particular directions, its possibilities are endless
  • The term “deep state” comes from countries like Egypt and Turkey where the security services acted for many years as a shadow government. The United States has never had a deep state in this sense, except in the fevered imaginations of the MAGA right. It does have a permanent civil service that operates at federal, state, and local levels, and it is these that have become a regular conservative punching bag.
  • The second initiative was the Supreme Court’s Loper Bright v. Raimondo decision issued in late June that abolished the 1984 Chevron deference precedent. Chevron deference provided a rule under which the courts would defer to the expert opinions of executive branch agencies in situations where a Congressional mandate was ambiguous or unclear, and the agency position seemed reasonable.
  • The Loper Bright decision invalidated a rule issued by the National Marine Fisheries Service requiring Atlantic fishing boats to carry, at their own expense, inspectors judging compliance with rules against overfishing. In ruling in favor of the fishing companies, SCOTUS invalidated the Chevron precedent entirely. This decision built on the same narrative feeding the Project 2025 plan: the administrative state had grown into a monster that made decisions harming the well-being of citizens without any fundamental democratic accountability.
  • At the heart of the conservative critique of the administrative state lies a vision of democratic government “of the people, by the people, and for the people,” in which citizens would deliberate together on policies, and would themselves be responsible for carrying them out much as one imagines occurred in the proverbial New England town hall.
  • The problem, however, is the extreme complexity of the tasks that modern government is expected to accomplish.
  • None of these functions can be performed by ordinary citizens; they must be delegated to experts whose life work centers around the complex tasks they perform.
  • While some local issues could be settled on a local level, modern government does things like manage the money supply, regulate giant international banks, certify the safety and efficacy of drugs, forecast weather, control air traffic, intercept and decrypt the communications of adversaries, perform employment surveys, and monitor fraud in the payment of hundreds of billions of dollars in the Social Security and Medicare programs
  • Substantial delegation is therefore necessary. Some conservatives believe in a Constitutional “non-delegation doctrine,” but Congress has been delegating responsibility for complex tasks ever since Treasury Secretary Alexander Hamilton was given the job of cleaning up Revolutionary War debt by the first Congress of the United States.
  • Nor is it the case that the people’s elected representatives have no means of monitoring and holding accountable the bureaucracy they have created. There are both ex ante and ex post methods for doing this
  • There are, in other words, a huge number of mechanisms by which the political layer can control the administrative layer
  • The problem in these cases was not, however, an out-of-control bureaucracy exerting unaccountable power over citizens. The problem was a failure by plaintiffs to make use of the specific powers—the checks and balances—that the system made available to them. The failures of the early Trump administration to get its way cited in Project 2025 were largely due to the inexperience of that administration’s political appointees.
  • Removal of the property qualification for voting by most U.S. states in the 1820s vastly expanded the franchise to all white men. Politicians soon discovered, as they subsequently did in other new democracies, that the easiest way to get people to the polls was to bribe them—perhaps with a bottle of bourbon, a Christmas turkey, or a job in the post office. Thus began what was known as the patronage or spoils system, under which virtually every job in the civil service was given out by a politician in return for political support
  • The American patronage system was hugely corrupt, and provided opportunities for state capture by big business interests like the railroads that were spreading across the country. Congress did not want to give up its patronage powers, but eventually passed the Pendleton Act in 1883 that created a U.S. Civil Service Commission and established the principle of merit as a condition for hiring and promoting bureaucrats.
  • it was not until the time of the First World War that a majority of federal bureaucrats were appointed under the merit system.
  • The fundamental problem with a new Schedule F, as noted in my previous post, is that it will return the country to the period before the Pendleton Act, when political loyalty rather than merit, skill, or knowledge was the primary criterion for government service
  • It took President Trump nearly four years (and 44 cabinet secretaries) to rid his administration of seasoned professionals and replace them with loyalists like Kash Patel at Defense or Jeffrey Clark at the Justice Department. This gives us a taste of the quality of officials who are likely to come in under a revived Schedule F. The doors to patronage, incompetence, and corruption will be thrown wide open.
Javier E

How To Look Smart, Ctd - The Daily Dish | By Andrew Sullivan - 0 views

  • these questions tend to overlook the way IQ tests are designed. As a neuropsychologist who has administered hundreds of these measures, I can tell you that their structures reflect a deeply embedded bias toward intelligence as a function of reading skills
katherineharron

Trump's absurd projection reveals his anxiety (opinion) - CNN - 0 views

  • Let's talk about projection, the psychological impulse to project on other people what you're actually feeling. Webster's dictionary defines it, in part, as: "the externalization of blame, guilt, or responsibility as a defense against anxiety." Here's one recent, relevant example: "The one who's got the problem is Biden, because if you look at what Biden did, Biden did what they would like to have me do except there's one problem: I didn't do it."
  • Cruz was pretty quick to diagnose the problem: "This man is a pathological liar. He doesn't know the difference between truth and lies. He lies practically every word that comes out of his mouth. And he had a pattern that I think is straight out of a psychology textbook. His response is to accuse everybody else of lying."
  • When Clinton raised questions about Trump's erratic and impulsive behavior, he called her unstable, unhinged, lacking the "judgment, temperament and moral character to lead this country." And who can forget his, "No puppet ... You're the puppet," response when she accused him in a debate of being a puppet for Vladimir Putin.
  • ...3 more annotations...
  • Because tone comes from the top, you see the President's surrogates and even Cabinet officials echo it. But sometimes they go too far and give away the game. Case in point, Secretary of State Mike Pompeo on "Face The Nation":
  • "If there was election interference that took place by the Vice President, I think the American people deserve to know."
  • An analysis by The Washington Post's Philip Bump found that his top five insults were "fake," "failed," "dishonest," "weak" and "liar."
Emilio Ergueta

No Consolation For Kalashnikov | Issue 59 | Philosophy Now - 0 views

  • The legendary AK 47 assault rifle was invented in 1946 by Mikhail Kalashnikov. It was issued to the armies of the old Warsaw Pact countries and has been used in many conflicts, e.g. by the North Vietnamese Army (NVA), Soviet soldiers in Afghanistan, and even this year by Al Qaeda operatives in Iraq.
  • Whatever interpretation one puts on those two conflicts, almost no-one sane would condone the use of the AK 47 in killing civilians, for instance Shiites in Iraq.
  • Mikhail Kalashnikov has come to have some doubts about his invention. He told The Times in June 2006, “I don’t worry when my guns are used for national liberation or defence. But when I see how peaceful people are killed and wounded by these weapons, I get very distressed and upset. I calm down by telling myself that I invented this gun 60 years ago to protect the interests of my country.”
  • ...10 more annotations...
  • Weapons research produces in the first place not guns, bombs, bullets and planes and the various command, control and communications hardware and software needed to use these things, but plans, blueprints and designs – knowledge and know-how. Unless these useful plans are lost or destroyed, they can be implemented or instantiated many times over, and thus project unforeseen into the future.
  • If any one person invented the atomic bomb, it was Leo Szilard. It seems he had the idea, and he made great efforts from 1935 until 1942, when the Manhattan Project was set up, to get the research done that would show whether an atomic bomb was possible; how to make one; and if need be, to provide the basis for actually making one.
  • This perception was greatly strengthened when Hahn and Strassmann discovered nuclear fission in Berlin in 1938. So Szilard, worried about the Nazis getting an atomic bomb, thought that the Allies should do the research to see if and how one could be made, in order to deter or otherwise prevent the Nazis from using one.
  • As far as Szilard and a good number of other atomic scientists were concerned there was no longer a rationale for the bomb project. Szilard, James Franck and others wrote The Franck Report in June 1945, which among other things advocated a demonstration of the power of the atomic bomb by dropping one on an uninhabited island. The Franck Report was ignored. The project was not abandoned, of course, and two of its products were used on Japanese cities, to kill mostly Japanese civilians.
  • The point of this example is to show how scientists lose control of their work when they take part in weapons research – they lose control of it in other settings besides, but this case is the most problematic.
  • One way out of the dilemma is to refuse to do war research under any circumstances. I’d like to endorse this option, especially as it does not imply that we should judge Kalashnikov, Szilard, Watson-Watt and other well-intentioned researchers harshly, since we can argue that the dilemma has only become evident recently.
  • Another possibility is to deny that weapons research must take place within history, as a good Marxist might put it. That is, as I would put it: Perhaps weapons research is not an activity that must take account of historical contingencies.
  • We must acknowledge that there is no such thing as an inherently defensive weapon, something that can only be used for the morally acceptable purpose of responding against an aggressor. Doing weapons research for defensive systems is therefore not morally acceptable, as any weapons might feasibly be used as part of an unjust war of aggression.
  • Kalashnikov’s preferred description of what he did when he designed the AK 47 is something like “providing the means for liberation,” or “defending my country,” not “providing the means to kill innocents.” However, he acknowledges that the latter description applies to his situation equally well. Nevertheless, J might try to portray her actions as something like “provide the means for deterrence,” the idea being that what she is helping to create is intended to deter, and hence prevent harm rather than cause it.
  • You might say that this is utopian, and it would never work, but then it might console Kalashnikov, who, after all, was a Marxist, and perhaps also a utopian.
kushnerha

New Critique Sees Flaws in Landmark Analysis of Psychology Studies - The New York Times - 0 views

  • A landmark 2015 report that cast doubt on the results of dozens of published psychology studies has exposed deep divisions in the field, serving as a reality check for many working researchers but as an affront to others who continue to insist the original research was sound.
  • On Thursday, a group of four researchers publicly challenged the report, arguing that it was statistically flawed and, as a result, wrong. The 2015 report, called the Reproducibility Project, found that fewer than 40 studies in a sample of 100 psychology papers in leading journals held up when retested by an independent team. The new critique by the four researchers countered that when that team’s statistical methodology was adjusted, the rate was closer to 100 percent. Neither the original analysis nor the critique found evidence of fraud or manipulation of data.
  • “That study got so much press, and the wrong conclusions were drawn from it,” said Timothy D. Wilson, a professor of psychology at the University of Virginia and an author of the new critique. “It’s a mistake to make generalizations from something that was done poorly, and this we think was done poorly.”
  • ...6 more annotations...
  • countered that the critique was highly biased: “They are making assumptions based on selectively interpreting data and ignoring data that’s antagonistic to their point of view.”
  • The challenge comes as the field of psychology is facing a generational change, with young researchers beginning to share their data and study designs before publication, to improve transparency. Still, the new critique is likely to feed an already lively debate about how best to conduct and evaluate so-called replication projects of studies. Such projects are underway in several fields, scientists on both sides of the debate said.
  • “On some level, I suppose it is appealing to think everything is fine and there is no reason to change the status quo,” said Sanjay Srivastava, a psychologist at the University of Oregon, who was not a member of either team. “But we know too much, from many other sources, to put too much credence in an analysis that supports that remarkable conclusion.”
  • One issue the critique raised was how faithfully the replication team had adhered to the original design of the 100 studies it retested. Small alterations in design can make the difference between whether a study replicates or not, scientists say.
  • Another issue that the critique raised had to do with statistical methods. When Dr. Nosek began his study, there was no agreed-upon protocol for crunching the numbers. He and his team settled on five measures
  • He said that the original replication paper and the critique use statistical approaches that are “predictably imperfect” for this kind of analysis. One way to think about the dispute, Dr. Simonsohn said, is that the original paper found that the glass was about 40 percent full, and the critique argues that it could be 100 percent full. In fact, he said in an email, “State-of-the-art techniques designed to evaluate replications say it is 40 percent full, 30 percent empty, and the remaining 30 percent could be full or empty, we can’t tell till we get more data.”
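
To make the “glass 40 percent full” framing concrete: even under the simplest binomial reading, a headline count of roughly 36 successful replications out of 97 completed attempts carries wide statistical uncertainty. Below is a minimal sketch in plain Python; the 36/97 counts are approximate figures assumed for illustration, and the choice of what counts as a successful replication (the project settled on five measures) is itself the disputed part.

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# Approximate headline figures: ~36 of 97 completed replications met the
# significance criterion. (Assumed for illustration; the exact counts
# depend on which replication measure is used.)
lo, hi = wilson_interval(36, 97)
print(f"replication rate ≈ {36/97:.0%}, 95% CI ({lo:.0%}, {hi:.0%})")
```

On these numbers the interval runs from roughly 28 to 47 percent, which is part of why modest changes in criteria, or in which studies are counted, can move the headline rate substantially.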
kushnerha

BBC - Capital - Busy: A badge of honour or a big lie? - 0 views

  • From the idea of the “busy trap” to the overwhelming feeling many professionals have at the end of each day and week, overload is a real issue.
  • But what if we’re looking at the issue in the wrong way? What if you could reframe your thinking, feel less busy and perhaps get more done?
  • somewhere along the way we were convinced that being ‘busy’ was good for us
  • ...13 more annotations...
  • “We’re not busy … we’re productive,” he wrote. “And yes, there’s a difference.”
  • “Busy paints a picture of people who are either keeping themselves occupied or who don’t have the time to do other things,” Spurlock explained. “Productive describes an environment rich with goals, personal and professional achievements and wrapped in success, a place where you're actually creating something vs just doing something.”
  • “Personal productivity… is the most important one, as it centres around the time that I make to spend with my family, my friends and doing things that fulfil me as a living person,”
  • Financial productivity is an important one, as these are the projects that create consistent revenue
  • “Being busy has somehow become a badge of honor. The prevailing notion is that if you aren’t super busy, you aren’t important or hard working,”
  • busyness makes you less productive
  • “When we think of a super busy person, we think of a ringing phone, a flood of emails and a schedule that’s bursting at the seams with major projects and side-projects hitting simultaneously,” he wrote. “Such a situation inevitably leads to multi-tasking and interruptions, which are both deadly to productivity.”
  • As Socrates said: Beware the barrenness of a busy life.
  • switching what you’re doing mid-task increases the time it takes you to finish both tasks by 25%
  • it took people an average of 15 minutes to return to their important projects… every time they were interrupted by e-mails, phone calls, or other messages (a rough cost model is sketched after these notes)
  • “We’re so enamored with multitasking that we think we’re getting more done, even though our brains aren’t physically capable of this,”
  • most productive when we manage our schedules enough to ensure that we can focus effectively on the task at hand
  • people use busyness to “hide from… laziness and fear of failure”
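
The two figures above, a 25 percent penalty for switching between tasks and 15 minutes to refocus after each interruption, translate directly into a back-of-the-envelope cost model. A minimal sketch follows; the eight-hour day and the interruption count are assumptions for illustration, not from the article.

```python
REFOCUS_MIN = 15        # minutes to regain focus after an interruption (cited above)
SWITCH_PENALTY = 0.25   # extra time when switching mid-task (cited above)

def focused_minutes(workday_min: int, interruptions: int) -> int:
    """Minutes left for real work after paying the refocus cost each time."""
    return max(workday_min - interruptions * REFOCUS_MIN, 0)

def interleaved_duration(task_a_min: float, task_b_min: float) -> float:
    """Total time when two tasks are interleaved rather than done one after the other."""
    return (task_a_min + task_b_min) * (1 + SWITCH_PENALTY)

print(focused_minutes(480, 10))      # 330: ten interruptions eat 2.5 hours of an 8-hour day
print(interleaved_duration(60, 60))  # 150.0: two one-hour tasks take 2.5 hours interleaved
```

On those assumptions, interruption overhead alone consumes nearly a third of the day before any multitasking penalty is even counted.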
Javier E

They're Watching You at Work - Don Peck - The Atlantic - 2 views

  • Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
  • By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007.
  • The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught
  • ...52 more annotations...
  • By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential.
  • But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.
  • this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,”
  • Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased.
  • about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles.
  • Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
  • Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.
  • When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.
  • What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out.
  • He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers.
  • scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.”
  • consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
  • We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
  • a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests.
  • Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
  • the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team
  • In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
  • In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention.)
  • the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much-wider-ranging than what we had before
  • Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
  • Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organization psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building)
  • There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people
  • When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore”
  • what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company.
  • Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
  • What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management.
  • I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
  • he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk.
  • His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
  • Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior.
  • people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency.
  • “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
  • People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call
  • He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
  • the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
  • The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.
  • The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
  • having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site. (A toy sketch of this kind of composite scoring appears after these notes.)
  • Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire.
  • professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
  • Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
  • Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job
  • Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
  • When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers
  • For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers.
  • the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
  • because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
  • I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history. (A model-comparison sketch follows these notes.)
  • Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
  • This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming.
  • all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic.
  • Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.
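
A reader’s note on the claim above that analytics can surface signals that outperform college history: at bottom, that is a model-comparison exercise on post-hire data. Below is a minimal sketch of how such a comparison might be run, in Python with entirely synthetic data and hypothetical feature names — not Gild’s or Knack’s actual models.

```python
# Sketch: comparing the predictive value of a degree vs. behavioral signals.
# All data and feature names are synthetic and hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

elite_degree = rng.binomial(1, 0.3, n)    # attended a selective school
code_quality = rng.normal(0.0, 1.0, n)    # score inferred from public code samples
community_rep = rng.normal(0.0, 1.0, n)   # standing in professional Q&A communities

# Simulated "success on the job", driven mostly by the behavioral signals and
# only weakly by the degree -- this weighting is the assumption under test.
latent = 0.2 * elite_degree + 1.0 * code_quality + 0.6 * community_rep
success = (latent + rng.normal(0.0, 1.0, n) > 0.5).astype(int)

def auc_for(feature_arrays):
    """Fit a logistic model on the given features and report held-out AUC."""
    X = np.column_stack(feature_arrays)
    X_tr, X_te, y_tr, y_te = train_test_split(X, success, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

print("degree only:       ", round(auc_for([elite_degree]), 3))
print("behavioral signals:", round(auc_for([code_quality, community_rep]), 3))
```

In this toy setup the behavioral signals win by construction; the companies’ claim, in effect, is that real post-hire data shows the same ordering.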
Javier E

MacIntyre | Internet Encyclopedia of Philosophy - 0 views

  • For MacIntyre, “rationality” comprises all the intellectual resources, both formal and substantive, that we use to judge truth and falsity in propositions, and to determine choice-worthiness in courses of action
  • Rationality in this sense is not universal; it differs from community to community and from person to person, and may both develop and regress over the course of a person’s life or a community’s history.
  • So rationality itself, whether theoretical or practical, is a concept with a history: indeed, since there are also a diversity of traditions of enquiry, with histories, there are, so it will turn out, rationalities rather than rationality, just as it will also turn out that there are justices rather than justice
  • ...164 more annotations...
  • Rationality is the collection of theories, beliefs, principles, and facts that the human subject uses to judge the world, and a person’s rationality is, to a large extent, the product of that person’s education and moral formation.
  • To the extent that a person accepts what is handed down from the moral and intellectual traditions of her or his community in learning to judge truth and falsity, good and evil, that person’s rationality is “tradition-constituted.” Tradition-constituted rationality provides the schemata by which we interpret, understand, and judge the world we live in
  • The apparent problem of relativism in MacIntyre’s theory of rationality is much like the problem of relativism in the philosophy of science. Scientific claims develop within larger theoretical frameworks, so that the apparent truth of a scientific claim depends on one’s judgment of the larger framework. The resolution of the problem of relativism therefore appears to hang on the possibility of judging frameworks or rationalities, or judging between frameworks or rationalities from a position that does not presuppose the truth of the framework or rationality, but no such theoretical standpoint is humanly possible.
  • MacIntyre finds that the world itself provides the criterion for the testing of rationalities, and he finds that there is no criterion except the world itself that can stand as the measure of the truth of any philosophical theory.
  • MacIntyre’s philosophy is indebted to the philosophy of science, which recognizes the historicism of scientific enquiry even as it seeks a truthful understanding of the world. MacIntyre’s philosophy does not offer a priori certainty about any theory or principle; it examines the ways in which reflection upon experience supports, challenges, or falsifies theories that have appeared to be the best theories so far to the people who have accepted them so far. MacIntyre’s ideal enquirers remain Hamlets, not Emmas.
  • history shows us that individuals, communities, and even whole nations may commit themselves militantly over long periods of their histories to doctrines that their ideological adversaries find irrational. This qualified relativism of appearances has troublesome implications for anyone who believes that philosophical enquiry can easily provide certain knowledge of the world
  • According to MacIntyre, theories govern the ways that we interpret the world and no theory is ever more than “the best standards so far” (3RV, p. 65). Our theories always remain open to improvement, and when our theories change, the appearances of our world—the apparent truths of claims judged within those theoretical frameworks—change with them.
  • From the subjective standpoint of the human enquirer, MacIntyre finds that theories, concepts, and facts all have histories, and they are all liable to change—for better or for worse.
  • MacIntyre holds that the rationality of individuals is not only tradition-constituted, it is also tradition constitutive, as individuals make their own contributions to their own rationality, and to the rationalities of their communities. Rationality is not fixed, within either the history of a community or the life of a person
  • The modern account of first principles justifies an approach to philosophy that rejects tradition. The modern liberal individualist approach is anti-traditional. It denies that our understanding is tradition-constituted and it denies that different cultures may differ in their standards of rationality and justice:
  • Modernity does not see tradition as the key that unlocks moral and political understanding, but as a superfluous accumulation of opinions that tend to prejudice moral and political reasoning.
  • Although modernity rejects tradition as a method of moral and political enquiry, MacIntyre finds that it nevertheless bears all the characteristics of a moral and political tradition.
  • If historical narratives are only projections of the interests of historians, then it is difficult to see how this historical narrative can claim to be truthful
  • For these post-modern theorists, “if the Enlightenment conceptions of truth and rationality cannot be sustained,” either relativism or perspectivism “is the only possible alternative” (p. 353). MacIntyre rejects both challenges by developing his theory of tradition-constituted and tradition-constitutive rationality on pp. 354-369
  • How, then, is one to settle challenges between two traditions? It depends on whether the adherents of either take the challenges of the other tradition seriously. It depends on whether the adherents of either tradition, on seeing a failure in their own tradition are willing to consider an answer offered by their rival (p. 355)
  • how a person with no traditional affiliation is to deal with the conflicting claims of rival traditions: “The initial answer is: that will depend upon who you are and how you understand yourself. This is not the kind of answer which we have been educated to expect in philosophy”
  • MacIntyre focuses the critique of modernity on the question of rational justification. Modern epistemology stands or falls on the possibility of Cartesian epistemological first principles. MacIntyre’s history exposes that notion of first principle as a fiction, and at the same time demonstrates that rational enquiry advances (or declines) only through tradition
  • MacIntyre cites Foucault’s 1966 book, Les Mots et les choses (The Order of Things, 1970) as an example of the self-subverting character of Genealogical enquiry
  • Foucault’s book reduces history to a procession of “incommensurable ordered schemes of classification and representation” none of which has any greater claim to truth than any other, yet this book “is itself organized as a scheme of classification and representation.”
  • From MacIntyre’s perspective, there is no question of deciding whether or not to work within a tradition; everyone who struggles with practical, moral, and political questions simply does. “There is no standing ground, no place for enquiry . . . apart from that which is provided by some particular tradition or other”
  • Three Rival Versions of Moral Enquiry (1990). The central idea of the Gifford Lectures is that philosophers make progress by addressing the shortcomings of traditional narratives about the world, shortcomings that become visible either through the failure of traditional narratives to make sense of experience, or through the introduction of contradictory narratives that prove impossible to dismiss
  • MacIntyre compares three traditions exemplified by three literary works published near the end of Adam Gifford’s life (1820–1887)
  • The Ninth Edition of the Encyclopaedia Britannica (1875–1889) represents the modern tradition of trying to understand the world objectively without the influence of tradition.
  • The Genealogy of Morals (1887), by Friedrich Nietzsche embodies the post-modern tradition of interpreting all traditions as arbitrary impositions of power.
  • The encyclical letter Aeterni Patris (1879) of Pope Leo XIII exemplifies the approach of acknowledging one’s predecessors within one’s own tradition of enquiry and working to advance or improve that tradition in the pursuit of objective truth.
  • Of the three versions of moral enquiry treated in 3RV, only tradition, exemplified in 3RV by the Aristotelian, Thomistic tradition, understands itself as a tradition that looks backward to predecessors in order to understand present questions and move forward
  • Encyclopaedia obscures the role of tradition by presenting the most current conclusions and convictions of a tradition as if they had no history, and as if they represented the final discovery of unalterable truth
  • Encyclopaedists focus on the present and ignore the past.
  • Genealogists, on the other hand, focus on the past in order to undermine the claims of the present.
  • In short, Genealogy denies the teleology of human enquiry by denying (1) that historical enquiry has been fruitful, (2) that the enquiring person has a real identity, and (3) that enquiry has a real goal. MacIntyre finds this mode of enquiry incoherent.
  • Genealogy is self-deceiving insofar as it ignores the traditional and teleological character of its enquiry.
  • Genealogical moral enquiry must make similar exceptions to its treatments of the unity of the enquiring subject and the teleology of moral enquiry; thus “it seems to be the case that the intelligibility of genealogy requires beliefs and allegiances of a kind precluded by the genealogical stance” (3RV, p. 54-55)
  • MacIntyre uses Thomism because it applies the traditional mode of enquiry in a self-conscious manner. Thomistic students learn the work of philosophical enquiry as apprentices in a craft (3RV, p. 61), and maintain the principles of the tradition in their work to extend the understanding of the tradition, even as they remain open to the criticism of those principles.
  • 3RV uses Thomism as its example of tradition, but this use should not suggest that MacIntyre identifies “tradition” with Thomism or Thomism-as-a-name-for-the-Western-tradition. As noted above, WJWR distinguished four traditions of enquiry within the Western European world alone
  • MacIntyre’s emphasis on the temporality of rationality in traditional enquiry makes tradition incompatible with the epistemological projects of modern philosophy
  • Tradition is not merely conservative; it remains open to improvement,
  • Tradition differs from both encyclopaedia and genealogy in the way it understands the place of its theories in the history of human enquiry. The adherent of a tradition must understand that “the rationality of a craft is justified by its history so far,” thus it “is inseparable from the tradition through which it was achieved”
  • MacIntyre uses Thomas Aquinas to illustrate the revolutionary potential of traditional enquiry. Thomas was educated in Augustinian theology and Aristotelian philosophy, and through this education he began to see not only the contradictions between the two traditions, but also the strengths and weaknesses that each tradition revealed in the other. His education also helped him to discover a host of questions and problems that had to be answered and solved. Many of Thomas Aquinas’ responses to these concerns took the form of disputed questions. “Yet to each question the answer produced by Aquinas as a conclusion is no more than and, given Aquinas’s method, cannot but be no more than, the best answer reached so far. And hence derives the essential incompleteness”
  • argue that the virtues are essential to the practice of independent practical reason. The book is relentlessly practical; its arguments appeal only to experience and to purposes, and to the logic of practical reasoning.
  • Like other intelligent animals, human beings enter life vulnerable, weak, untrained, and unknowing, and face the likelihood of infirmity in sickness and in old age. Like other social animals, humans flourish in groups. We learn to regulate our passions, and to act effectively alone and in concert with others through an education provided within a community. MacIntyre’s position allows him to look to the animal world to find analogies to the role of social relationships in the moral formation of human beings
  • The task for the human child is to make “the transition from the infantile exercise of animal intelligence to the exercise of independent practical reasoning” (DRA, p. 87). For a child to make this transition is “to redirect and transform her or his desires, and subsequently to direct them consistently towards the goods of different stages of her or his life” (DRA, p. 87). The development of independent practical reason in the human agent requires the moral virtues in at least three ways.
  • DRA presents moral knowledge as a “knowing how,” rather than as a “knowing that.” Knowledge of moral rules is not sufficient for a moral life; prudence is required to enable the agent to apply the rules well.
  • “Knowing how to act virtuously always involves more than rule-following” (DRA, p. 93). The prudent person can judge what must be done in the absence of a rule and can also judge when general norms cannot be applied to particular cases.
  • Flourishing as an independent practical reasoner requires the virtues in a second way, simply because sometimes we need our friends to tell us who we really are. Independent practical reasoning also requires self-knowledge, but self-knowledge is impossible without the input of others whose judgment provides a reliable touchstone to test our beliefs about ourselves. Self-knowledge therefore requires the virtues that enable an agent to sustain formative relationships and to accept the criticism of trusted friends
  • Human flourishing requires the virtues in a third way, by making it possible to participate in social and political action. They enable us to “protect ourselves and others against neglect, defective sympathies, stupidity, acquisitiveness, and malice” (DRA, p. 98) by enabling us to form and sustain social relationships through which we may care for one another in our infirmities, and pursue common goods with and for the other members of our societies.
  • MacIntyre argues that it is impossible to find an external standpoint, because rational enquiry is an essentially social work (DRA, p. 156-7). Because it is social, shared rational enquiry requires moral commitment to, and practice of, the virtues to prevent the more complacent members of communities from closing off critical reflection upon “shared politically effective beliefs and concepts”
  • MacIntyre finds himself compelled to answer what may be called the question of moral provincialism: If one is to seek the truth about morality and justice, it seems necessary to “find a standpoint that is sufficiently external to the evaluative attitudes and practices that are to be put to the question.” If it is impossible for the agent to take such an external standpoint, if the agent’s commitments preclude radical criticism of the virtues of the community, does that leave the agent “a prisoner of shared prejudices” (DRA, p. 154)?
  • The book moves from MacIntyre’s assessment of human needs for the virtues to the political implications of that assessment. Social and political institutions that form and enable independent practical reasoning must “satisfy three conditions.” (1) They must enable their members to participate in shared deliberations about the communities’ actions. (2) They must establish norms of justice “consistent with exercise of” the virtue of justice. (3) They must enable the strong “to stand proxy” as advocates for the needs of the weak and the disabled.
  • The social and political institutions that MacIntyre recommends cannot be identified with the modern nation state or the modern nuclear family
  • The political structures necessary for human flourishing are essentially local
  • Yet local communities support human flourishing only when they actively support “the virtues of just generosity and shared deliberation”
  • MacIntyre rejects individualism and insists that we view human beings as members of communities who bear specific debts and responsibilities because of our social identities. The responsibilities one may inherit as a member of a community include debts to one’s forebears that one can only repay to people in the present and future
  • The constructive argument of the second half of the book begins with traditional accounts of the excellences or virtues of practical reasoning and practical rationality rather than virtues of moral reasoning or morality. These traditional accounts define virtue as arête, as excellence
  • Practices are supported by institutions like chess clubs, hospitals, universities, industrial corporations, sports leagues, and political organizations.
  • Practices exist in tension with these institutions, since the institutions tend to be oriented to goods external to practices. Universities, hospitals, and scholarly societies may value prestige, profitability, or relations with political interest groups above excellence in the practices they are said to support.
  • Personal desires and institutional pressures to pursue external goods may threaten to derail practitioners’ pursuits of the goods internal to practices. MacIntyre defines virtue initially as the quality of character that enables an agent to overcome these temptations:
  • “A virtue is an acquired human quality the possession and exercise of which tends to enable us to achieve those goods which are internal to practices
  • Excellence as a human agent cannot be reduced to excellence in a particular practice (See AV, pp. 204–
  • The virtues therefore are to be understood as those dispositions which will not only sustain practices and enable us to achieve the goods internal to practices, but which will also sustain us in the relevant kind of quest for the good, by enabling us to overcome the harms, dangers, temptations, and distractions which we encounter, and which will furnish us with increasing self-knowledge and increasing knowledge of the good (AV, p. 219).
  • The excellent human agent has the moral qualities to seek what is good and best both in practices and in life as a whole.
  • The virtues find their point and purpose not only in sustaining those relationships necessary if the variety of goods internal to practices are to be achieved and not only in sustaining the form of an individual life in which that individual may seek out his or her good as the good of his or her whole life, but also in sustaining those traditions which provide both practices and individual lives with their necessary historical context (AV, p. 223)
  • Since “goods, and with them the only grounds for the authority of laws and virtues, can only be discovered by entering into those relationships which constitute communities whose central bond is a shared vision of and understanding of goods” (AV, p. 258), any hope for the transformation and renewal of society depends on the development and maintenance of such communities.
  • MacIntyre’s Aristotelian approach to ethics as a study of human action distinguishes him from post-Kantian moral philosophers who approach ethics as a means of determining the demands of objective, impersonal, universal morality
  • This modern approach may be described as moral epistemology. Modern moral philosophy pretends to free the individual to determine for her- or himself what she or he must do in a given situation, irrespective of her or his own desires; it pretends to give knowledge of universal moral laws
  • Aristotelian metaphysicians, particularly Thomists who define virtue in terms of the perfection of nature, rejected MacIntyre’s contention that an adequate Aristotelian account of virtue as excellence in practical reasoning and human action need not appeal to Aristotelian metaphysics.
  • one group of critics rejects MacIntyre’s Aristotelianism because they hold that any Aristotelian account of the virtues must first account for the truth about virtue in terms of Aristotle’s philosophy of nature, which MacIntyre had dismissed in AV as “metaphysical biology”
  • Many of those who rejected MacIntyre’s turn to Aristotle define “virtue” primarily along moral lines, as obedience to law or adherence to some kind of natural norm. For these critics, “virtuous” appears synonymous with “morally correct;” their resistance to MacIntyre’s appeal to virtue stems from their difficulties either with what they take to be the shortcomings of MacIntyre’s account of moral correctness or with the notion of moral correctness altogether
  • MacIntyre continues to argue from the experience of practical reasoning to the demands of moral education.
  • Descartes and his successors, by contrast, along with certain “notable Thomists of the last hundred years” (p. 175), have proposed that philosophy begins from knowledge of some “set of necessarily true first principles which any truly rational person is able to evaluate as true” (p. 175). Thus for the moderns, philosophy is a technical rather than moral endeavor
  • MacIntyre distinguishes two related challenges to his position, the “relativist challenge” and the “perspectivist challenge.” These two challenges both acknowledge that the goals of the Enlightenment cannot be met and that, “the only available standards of rationality are those made available by and within traditions” (p. 252); they conclude that nothing can be known to be true or false
  • MacIntyre follows the progress of the Western tradition through “three distinct traditions:” from Homer and Aristotle to Thomas Aquinas, from Augustine to Thomas Aquinas and from Augustine through Calvin to Hume
  • Chapter 17 examines the modern liberal denial of tradition, and the ironic transformation of liberalism into the fourth tradition to be treated in the book.
  • MacIntyre credits John Stuart Mill and Thomas Aquinas as “two philosophers of the kind who by their writing send us beyond philosophy into immediate encounter with the ends of life
  • First, both were engaged by questions about the ends of life as questioning human beings and not just as philosophers. . . .
  • Secondly, both Mill and Aquinas understood their speaking and writing as contributing to an ongoing philosophical conversation. . . .
  • Thirdly, it matters that both the end of the conversation and the good of those who participate in it is truth and that the nature of truth, of good, of rational justification, and of meaning therefore have to be central topics of that conversation (Tasks, pp. 130-1).
  • Without these three characteristics, philosophy is first reduced to “the exercise of a set of analytic and argumentative skills. . . . Secondly, philosophy may thereby become a diversion from asking questions about the ends of life with any seriousness”
  • Neither Rosenzweig nor Lukács made philosophical progress because both failed to relate “their questions about the ends of life to the ends of their philosophical writing”
  • First, any adequate philosophical history or biography must determine whether the authors studied remain engaged with the questions that philosophy studies, or set the questions aside in favor of the answers. Second, any adequate philosophical history or biography must determine whether the authors studied insulated themselves from contact with conflicting worldviews or remained open to learning from every available philosophical approach. Third, any adequate philosophical history or biography must place the authors studied into a broader context that shows what traditions they come from and “whose projects” they are “carrying forward
  • MacIntyre’s recognition of the connection between an author’s pursuit of the ends of life and the same author’s work as a philosophical writer prompts him to finish the essay by demanding three things of philosophical historians and biographers
  • Philosophy is not just a study; it is a practice. Excellence in this practice demands that an author bring her or his struggles with the questions of the ends of philosophy into dialogue with historic and contemporary texts and authors in the hope of making progress in answering those questions
  • MacIntyre defends Thomistic realism as rational enquiry directed to the discovery of truth.
  • The three Thomistic essays in this book challenge those caricatures by presenting Thomism in a way that people outside of contemporary Thomistic scholarship may find surprisingly flexible and open
  • To be a moral agent, (1) one must understand one’s individual identity as transcending all the roles that one fills; (2) one must see oneself as a practically rational individual who can judge and reject unjust social standards; and (3) one must understand oneself as “as accountable to others in respect of the human virtues and not just in respect of [one’s] role-performances
  • J is guilty because he complacently accepted social structures that he should have questioned, structures that undermined his moral agency. This essay shows that MacIntyre’s ethics of human agency is not just a descriptive narrative about the manner of moral education; it is a standard laden account of the demands of moral agency.
  • MacIntyre considers “the case of J” (J, for jemand, the German word for “someone”), a train controller who learned, as a standard for his social role, to take no interest in what his trains carried, even during war time when they carried “munitions and . . . Jews on their way to extermination camps”
  • J had learned to do his work for the railroad according to one set of standards and to live other parts of his life according to other standards, so that this compliant participant in “the final solution” could contend, “You cannot charge me with moral failure” (E&P, p. 187).
  • The epistemological theories of Modern moral philosophy were supposed to provide rational justification for rules, policies, and practical determinations according to abstract universal standards, but MacIntyre has dismissed those theories
  • Modern metaethics is supposed to enable its practitioners to step away from the conflicting demands of contending moral traditions and to judge those conflicts from a neutral position, but MacIntyre has rejected this project as well
  • In his ethical writings, MacIntyre seeks only to understand how to liberate the human agent from blindness and stupidity, to prepare the human agent to recognize what is good and best to do in the concrete circumstances of that agent’s own life, and to strengthen the agent to follow through on that judgment.
  • In his political writings, MacIntyre investigates the role of communities in the formation of effective rational agents, and the impact of political institutions on the lives of communities. This kind of ethics and politics is appropriately named the ethics of human agency.
  • The purpose of the modern moral philosophy of authors like Kant and Mill was to determine, rationally and universally, what kinds of behavior ought to be performed—not in terms of the agent’s desires or goals, but in terms of universal, rational duties. Those theories purported to let agents know what they ought to do by providing knowledge of duties and obligations, thus they could be described as theories of moral epistemology.
  • Contemporary virtue ethics purports to let agents know what qualities human beings ought to have, and the reasons that we ought to have them, not in terms of our fitness for human agency, but in the same universal, disinterested, non-teleological terms that it inherits from Kant and Mill.
  • For MacIntyre, moral knowledge remains a “knowing how” rather than a “knowing that;” MacIntyre seeks to identify those moral and intellectual excellences that make human beings more effective in our pursuit of the human good.
  • MacIntyre’s purpose in his ethics of human agency is to consider what it means to seek one’s good, what it takes to pursue one’s good, and what kind of a person one must become if one wants to pursue that good effectively as a human agent.
  • As a philosophy of human agency, MacIntyre’s work belongs to the traditions of Aristotle and Thomas Aquinas.
  • in keeping with the insight of Marx’s third thesis on Feuerbach, it maintained the common condition of theorists and people as peers in the pursuit of the good life.
  • He holds that the human good plays a role in our practical reasoning whether we recognize it or not, so that some people may do well without understanding why (E&P, p. 25). He also reads Aristotle as teaching that knowledge of the good can make us better agents
  • AV defines virtue in terms of the practical requirements for excellence in human agency, in an agent’s participation in practices (AV, ch. 14), in an agent’s whole life, and in an agent’s involvement in the life of her or his community
  • MacIntyre’s Aristotelian concept of “human action” opposes the notion of “human behavior” that prevailed among mid-twentieth-century determinist social scientists. Human actions, as MacIntyre understands them, are acts freely chosen by human agents in order to accomplish goals that those agents pursue
  • Human behavior, according to mid-twentieth-century determinist social scientists, is the outward activity of a subject, which is said to be caused entirely by environmental influences beyond the control of the subject.
  • Rejecting crude determinism in social science, and approaches to government and public policy rooted in determinism, MacIntyre sees the renewal of human agency and the liberation of the human agent as central goals for ethics and politics.
  • MacIntyre’s Aristotelian account of “human action” examines the habits that an agent must develop in order to judge and act most effectively in the pursuit of truly choice-worthy ends
  • MacIntyre seeks to understand what it takes for the human person to become the kind of agent who has the practical wisdom to recognize what is good and best to do and the moral freedom to act on her or his best judgment.
  • MacIntyre rejected the determinism of modern social science early in his career (“Determinism,” 1957), yet he recognizes that the ability to judge well and act freely is not simply given; excellence in judgment and action must be developed, and it is the task of moral philosophy to discover how these excellences or virtues of the human agent are established, maintained, and strengthened
  • MacIntyre’s Aristotelian philosophy investigates the conditions that support free and deliberate human action in order to propose a path to the liberation of the human agent through participation in the life of a political community that seeks its common goods through the shared deliberation and action of its members
  • As a classics major at Queen Mary College in the University of London (1945-1949), MacIntyre read the Greek texts of Plato and Aristotle, but his studies were not limited to the grammars of ancient languages. He also examined the ethical theories of Immanuel Kant and John Stuart Mill. He attended the lectures of analytic philosopher A. J. Ayer and of philosopher of science Karl Popper. He read Ludwig Wittgenstein’s Tractatus Logico-Philosophicus, Jean-Paul Sartre’s L'existentialisme est un humanisme, and Marx’s Eighteenth Brumaire of Napoleon Bonaparte (What happened, pp. 17-18). MacIntyre met the sociologist Franz Steiner, who helped direct him toward approaching moralities substantively
  • Alasdair MacIntyre’s philosophy builds on an unusual foundation. His early life was shaped by two conflicting systems of values. One was “a Gaelic oral culture of farmers and fishermen, poets and storytellers.” The other was modernity, “The modern world was a culture of theories rather than stories” (MacIntyre Reader, p. 255). MacIntyre embraced both value systems
  • From Marxism, MacIntyre learned to see liberalism as a destructive ideology that undermines communities in the name of individual liberty and consequently undermines the moral formation of human agents
  • For MacIntyre, Marx’s way of seeing through the empty justifications of arbitrary choices to consider the real goals and consequences of political actions in economic and social terms would remain the principal insight of Marxism
  • Since his retirement from teaching, MacIntyre has continued his work of promoting a renewal of human agency through an examination of the virtues demanded by practices, integrated human lives, and responsible engagement with community life. He is currently affiliated with the Centre for Contemporary Aristotelian Studies in Ethics and Politics (CASEP) at London Metropolitan University.
  • The second half of AV proposes a conception of practice and practical reasoning and the notion of excellence as a human agent as an alternative to modern moral philosophy
  • AV rejects the view of “modern liberal individualism” in which autonomous individuals use abstract moral principles to determine what they ought to do. The critique of modern normative ethics in the first half of AV rejects modern moral reasoning for its failure to justify its premises, and criticizes the frequent use of the rhetoric of objective morality and scientific necessity to manipulate people to accept arbitrary decisions
  • MacIntyre uses “modern liberal individualism” to name a much broader category that includes both liberals and conservatives in contemporary American political parlance, as well as some Marxists and anarchists (See ASIA, pp. 280-284). Conservatism, liberalism, Marxism, and anarchism all present the autonomous individual as the unit of civil society
  • The sources of modern liberal individualism—Hobbes, Locke, and Rousseau—assert that human life is solitary by nature and social by habituation and convention. MacIntyre’s Aristotelian tradition holds, on the contrary, that human life is social by nature.
  • MacIntyre identifies moral excellence with effective human agency, and seeks a political environment that will help to liberate human agents to recognize and seek their own goods, as components of the common goods of their communities, more effectively. For MacIntyre therefore, ethics and politics are bound together.
  • For MacIntyre ethics is not an application of principles to facts, but a study of moral action. Moral action, free human action, involves decisions to do things in pursuit of goals, and it involves the understanding of the implications of one’s actions for the whole variety of goals that human agents seek
  • In this sense, “To act morally is to know how to act” (SMJ, p. 56). “Morality is not a ‘knowing that’ but a ‘knowing how’”
  • If human action is a ‘knowing how,’ then ethics must also consider how one learns ‘how.’ Like other forms of ‘knowing how,’ MacIntyre finds that one learns how to act morally within a community whose language and shared standards shape our judgment
  • MacIntyre had concluded that ethics is not an abstract exercise in the assessment of facts; it is a study of free human action and of the conditions that enable rational human agency.
  • MacIntyre gives Marx credit for concluding in the third of the Theses on Feuerbach, that the only way to change society is to change ourselves, and that “The coincidence of the changing of human activity or self-changing can only be comprehended and rationally understood as revolutionary practice”
  • MacIntyre distinguishes “religion which is an opiate for the people from religion which is not” (MI, p. 83). He condemns forms of religion that justify social inequities and encourage passivity. He argues that authentic Christian teaching criticizes social structures and encourages action
  • Where “moral philosophy textbooks” discuss the kinds of maxims that should guide “promise-keeping, truth-telling, and the like,” moral maxims do not guide real agents in real life at all. “They do not guide us because we do not need to be guided. We know what to do” (ASIA, p. 106). Sometimes we do this without any maxims at all, or even against all the maxims we know. MacIntyre illustrates his point with Huckleberry Finn’s decision to help Jim, Miss Watson’s escaped slave, to make his way to freedom
  • MacIntyre develops the ideas that morality emerges from history, and that morality organizes the common life of a community
  • The book concludes that the concepts of morality are neither timeless nor ahistorical, and that understanding the historical development of ethical concepts can liberate us “from any false absolutist claims” (SHE, p. 269). Yet this conclusion need not imply that morality is essentially arbitrary or that one could achieve freedom by liberating oneself from the morality of one’s society.
  • From this “Aristotelian point of view,” “modern morality” begins to go awry when moral norms are separated from the pursuit of human goods and moral behavior is treated as an end in itself. This separation characterizes Christian divine command ethics since the fourteenth century and has remained essential to secularized modern morality since the eighteenth century
  • From MacIntyre’s “Aristotelian point of view,” the autonomy granted to the human agent by modern moral philosophy breaks down natural human communities and isolates the individual from the kinds of formative relationships that are necessary to shape the agent into an independent practical reasoner.
  • the 1977 essay “Epistemological Crises, Dramatic Narrative, and the Philosophy of Science” (Hereafter EC). This essay, MacIntyre reports, “marks a major turning-point in my thought in the 1970s” (The Tasks of Philosophy, p. vii) EC may be described fairly as MacIntyre’s discourse on method
  • First, Philosophy makes progress through the resolution of problems. These problems arise when the theories, histories, doctrines and other narratives that help us to organize our experience of the world fail us, leaving us in “epistemological crises.” Epistemological crises are the aftermath of events that undermine the ways that we interpret our world
  • it presents three general points on the method for philosophy.
  • To live in an epistemological crisis is to be aware that one does not know what one thought one knew about some particular subject and to be anxious to recover certainty about that subject.
  • To resolve an epistemological crisis it is not enough to impose some new way of interpreting our experience, we also need to understand why we were wrong before: “When an epistemological crisis is resolved, it is by the construction of a new narrative which enables the agent to understand both how he or she could intelligibly have held his or her original beliefs and how he or she could have been so drastically misled by them
  • MacIntyre notes, “Philosophers have customarily been Emmas and not Hamlets” (p. 6); that is, philosophers have treated their conclusions as accomplished truths, rather than as “more adequate narratives” (p. 7) that remain open to further improvement.
  • To illustrate his position on the open-endedness of enquiry, MacIntyre compares the title characters of Shakespeare’s Hamlet and Jane Austen’s Emma. When Emma finds that she is deeply misled in her beliefs about the other characters in her story, Mr. Knightley helps her to learn the truth and the story comes to a happy ending (p. 6). Hamlet, by contrast, finds no pat answers to his questions; rival interpretations remain throughout the play, so that directors who would stage the play have to impose their own interpretations on the script
  • Another approach to education is the method of Descartes, who begins by rejecting everything that is not clearly and distinctly true as unreliable and false in order to rebuild his understanding of the world on a foundation of undeniable truth.
  • Descartes presents himself as willfully rejecting everything he had believed, and ignores his obvious debts to the Scholastic tradition, even as he argues his case in French and Latin. For MacIntyre, seeking epistemological certainty through universal doubt as a precondition for enquiry is a mistake: “it is an invitation not to philosophy but to mental breakdown, or rather to philosophy as a means of mental breakdown.”
  • MacIntyre contrasts Descartes’ descent into mythical isolation with Galileo, who was able to make progress in astronomy and physics by struggling with the apparently insoluble questions of late medieval astronomy and physics, and radically reinterpreting the issues that constituted those questions
  • To make progress in philosophy one must sort through the narratives that inform one’s understanding, struggle with the questions that those narratives raise, and on occasion, reject, replace, or reinterpret portions of those narratives and propose those changes to the rest of one’s community for assessment. Human enquiry is always situated within the history and life of a community.
  • The third point of EC is that we can learn about progress in philosophy from the philosophy of science
  • Kuhn’s “paradigm shifts,” however, are unlike MacIntyre’s resolutions of epistemological crises in two ways.
  • First they are not rational responses to specific problems. Kuhn compares paradigm shifts to religious conversions (pp. 150, 151, 158), stressing that they are not guided by rational norms and he claims that the “mopping up” phase of a paradigm shift is a matter of convention in the training of new scientists and attrition among the holdouts of the previous paradigm
  • Second, the new paradigm is treated as a closed system of belief that regulates a new period of “normal science”; Kuhn’s revolutionary scientists are Emmas, not Hamlets
  • MacIntyre proposes elements of Imre Lakatos’ philosophy of science as correctives to Kuhn’s. While Lakatos has his own shortcomings, his general account of the methodologies of scientific research programs recognizes the role of reason in the transitions between theories and between research programs (Lakatos’ analog to Kuhn’s paradigms or disciplinary matrices). Lakatos presents science as an open-ended enquiry, in which every theory may eventually be replaced by more adequate theories. For Lakatos, unlike Kuhn, rational scientific progress occurs when a new theory can account both for the apparent promise and for the actual failure of the theory it replaces.
  • The third conclusion of MacIntyre’s essay is that decisions to support some theories over others may be justified rationally to the extent that those theories allow us to understand our experience and our history, including the history of the failures of inadequate theories
  • For Aristotle, moral philosophy is a study of practical reasoning, and the excellences or virtues that Aristotle recommends in the Nicomachean Ethics are the intellectual and moral excellences that make a moral agent effective as an independent practical reasoner.
  • MacIntyre also finds that the contending parties have little interest in the rational justification of the principles they use. The language of moral philosophy has become a kind of moral rhetoric to be used to manipulate others in defense of the arbitrary choices of its users
  • examining the current condition of secular moral and political discourse. MacIntyre finds contending parties defending their decisions by appealing to abstract moral principles, but he finds their appeals eclectic, inconsistent, and incoherent.
  • The secular moral philosophers of the eighteenth and nineteenth centuries shared strong and extensive agreements about the content of morality (AV, p. 51) and believed that their moral philosophy could justify the demands of their morality rationally, free from religious authority.
  • MacIntyre traces the lineage of the culture of emotivism to the secularized Protestant cultures of northern Europe
  • Modern moral philosophy had thus set for itself an incoherent goal. It was to vindicate both the moral autonomy of the individual and the objectivity, necessity, and categorical character of the rules of morality
  • MacIntyre turns to an apparent alternative, the pragmatic expertise of professional managers. Managers are expected to appeal to the facts to make their decisions on the objective basis of effectiveness, and their authority to do this is based on their knowledge of the social sciences
  • An examination of the social sciences reveals, however, that many of the facts to which managers appeal depend on sociological theories that lack scientific status. Thus, the predictions and demands of bureaucratic managers are no less liable to ideological manipulation than the determinations of modern moral philosophers.
  • Modern moral philosophy separates moral reasoning about duties and obligations from practical reasoning about ends and practical deliberation about the means to one’s ends, and in doing so it separates morality from practice.
  • Many Europeans also lost the practical justifications for their moral norms as they approached modernity; for these Europeans, claiming that certain practices are “immoral,” and invoking Kant’s categorical imperative or Mill’s principle of utility to explain why those practices are immoral, seems no more adequate than the Polynesian appeal to taboo.
  • MacIntyre sifts these definitions and then gives his own definition of virtue, as excellence in human agency, in terms of practices, whole human lives, and traditions in chapters 14 and 15 of AV.
  • In the most often quoted sentence of AV, MacIntyre defines a practice as (1) a complex social activity that (2) enables participants to gain goods internal to the practice. (3) Participants achieve excellence in practices by gaining the internal goods. When participants achieve excellence, (4) the social understandings of excellence in the practice, of the goods of the practice, and of the possibility of achieving excellence in the practice “are systematically extended”
  • Practices, like chess, medicine, architecture, mechanical engineering, football, or politics, offer their practitioners a variety of goods both internal and external to these practices. The goods internal to practices include forms of understanding or physical abilities that can be acquired only by pursuing excellence in the associated practice
  • Goods external to practices include wealth, fame, prestige, and power; there are many ways to gain these external goods. They can be earned or purchased, either honestly or through deception; thus the pursuit of these external goods may conflict with the pursuit of the goods internal to practices.
  • An intelligent child is given the opportunity to win candy by learning to play chess. As long as the child plays chess only to win candy, he has every reason to cheat if by doing so he can win more candy. If the child begins to desire and pursue the goods internal to chess, however, cheating becomes irrational, because it is impossible to gain the goods internal to chess or any other practice except through an honest pursuit of excellence. Goods external to practices may nevertheless remain tempting to the practitioner.
  • Since MacIntyre finds social identity necessary for the individual, MacIntyre’s definition of the excellence or virtue of the human agent needs a social dimension:
  • These responsibilities also include debts incurred by the unjust actions of ones’ predecessors.
  • The enslavement and oppression of black Americans, the subjugation of Ireland, and the genocide of the Jews in Europe remained quite relevant to the responsibilities of citizens of the United States, England, and Germany in 1981, as they still do today.
  • Thus an American who said “I never owned any slaves,” “the Englishman who says ‘I never did any wrong to Ireland,’” or “the young German who believes that being born after 1945 means that what Nazis did to Jews has no moral relevance to his relationship to his Jewish contemporaries” all exhibit a kind of intellectual and moral failure.
  • “I am born with a past, and to cut myself off from that past in the individualist mode, is to deform my present relationships” (p. 221). For MacIntyre, there is no moral identity for the abstract individual; “The self has to find its moral identity in and through its membership in communities” (p. 221).
knudsenlu

Ambitious neuroscience project to probe how the brain makes decisions | Science | The G... - 0 views

  • World-leading neuroscientists have launched an ambitious project to answer one of the greatest mysteries of all time: how the brain decides what to do.
  • If the researchers can unravel what happens in detail, it would mark a dramatic leap forward in scientists’ understanding of a process that lies at the heart of life, and which ultimately has implications for intelligence and free will.
  • Half of the IBL researchers will perform experiments and the other half will focus on theoretical models of how the brain makes up its mind.
  • ...2 more annotations...
  • The IBL hopes to overcome these flaws. Scientists on the project will work on exactly the same problems in precisely the same way. Animal experiments, for example, will use one strain of mouse, and all will be trained, tested and scored in the same way. It is an obvious strategy, but not a common one in science: in any lab, there is a constant urge to tweak experiments to make them better. “Ultimately, the reason it’s worth addressing is in the proverb: ‘alone we go fast, together we go far’,” said Churchland.
  • Decision-making is a field in itself, so IBL researchers will focus on simple, so-called perceptual decisions: those that involve responding to sights or sounds, for example. In one standard test, scientists will record how neurons fire in mice as they watch faint dots appear on a screen and spin a Lego wheel to indicate if the dots are on the left or the right. The mice make mistakes when the dots are faint, and it is these marginal calls that are most interesting to scientists.
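
The “marginal calls” in the last note are usually summarized with a psychometric curve: the probability of a rightward choice as a function of signed stimulus strength. A minimal sketch of fitting one, using simulated choices — the contrast levels and parameter values are invented for illustration, not IBL data or the IBL’s analysis code.

```python
# Sketch: fitting a psychometric curve to simulated left/right choices.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(contrast, bias, sensitivity):
    """Probability of a 'rightward' choice as a logistic function of signed contrast."""
    return 1.0 / (1.0 + np.exp(-sensitivity * (contrast - bias)))

rng = np.random.default_rng(1)
contrasts = np.array([-1.0, -0.5, -0.25, -0.06, 0.06, 0.25, 0.5, 1.0])
trials_per_level = 200

# Simulate a mouse with a slight rightward bias and moderate sensitivity.
p_right_true = psychometric(contrasts, bias=-0.05, sensitivity=6.0)
right_choices = rng.binomial(trials_per_level, p_right_true)
p_right_observed = right_choices / trials_per_level

params, _ = curve_fit(psychometric, contrasts, p_right_observed, p0=[0.0, 5.0])
print(f"fitted bias={params[0]:.3f}, sensitivity={params[1]:.3f}")
# Near zero contrast the curve hovers around 0.5: these are the "marginal
# calls" where the animal errs often and the neural data are most informative.
```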
Javier E

Living the life authentic: Bernard Williams on Paul Gauguin | Aeon Essays - 0 views

  • Williams invites us to see Gauguin’s meaning in life as deeply intertwined with his artistic ambition. His art is, to use Williams’s term for such meaning-giving enterprises, his ground project
  • This is what a ground project does, according to Williams: it gives a reason, not just given that you are alive, but a reason to be alive in the first place.
  • The desires and goals at the heart of what Williams calls a ground project form a fundamental part of one’s identity, and in that sense being true to one’s deepest desires is being true to who one is most deeply.
  • ...4 more annotations...
  • We see here the enormously influential cultural ideal mentioned at the outset: the purpose of life is to be authentic, where that means finding out who you are and living accordingly. Gauguin, in other words, was a cultural prototype for a conception of life’s meaning that today has widespread appeal around the world.
  • Williams, however, thinks that Gauguin’s eventual success as a painter constitutes a form of moral luck, in that his artistic achievement justifies what he did. It provides a justification that not everyone will accept, but one that can make sense to Gauguin himself, and perhaps to others
  • In his essay ‘Moral Luck’ (1976), Williams discusses Paul Gauguin’s decision to leave Paris in order to move to Tahiti where he hoped he could become a great painter. Gauguin left behind – basically abandoned – his wife and children
  • ‘If there’s one theme in all my work it’s about authenticity and self-expression,’ said the philosopher Bernard Williams in an interview with The Guardian in 2002
Javier E

How the leading coronavirus vaccines made it to the finish line - The Washington Post - 0 views

  • If, as expected in the next few weeks, regulators give those vaccines the green light, the technology and the precision approach to vaccine design could turn out to be the pandemic’s silver linings: scientific breakthroughs that could begin to change the trajectory of the virus this winter and also pave the way for highly effective vaccines and treatments for other diseases.
  • Vaccine development typically takes years, even decades. The progress of the last 11 months shifts the paradigm for what’s possible, creating a new model for vaccine development and a toolset for a world that will have to fight more never-before-seen viruses in years to come.
  • Long before the pandemic, Graham worked with colleagues there and in academia to create a particularly accurate 3-D version of the spiky proteins that protrude from the surface of coronaviruses — an innovation that was rejected for publication by scientific journals five times because reviewers questioned its relevance.
  • ...26 more annotations...
  • Messenger RNA is a powerful, if fickle, component of life’s building blocks — a workhorse of the cell that is also truly just a messenger, unstable and prone to degrade.
  • In 1990, a team at the University of Wisconsin startled the scientific world with a paper that showed it was possible to inject a snippet of messenger RNA into mice and turn their muscle cells into factories, creating proteins on demand.
  • If custom-designed RNA snippets could be used to turn cells into bespoke protein factories, messenger RNA could become a powerful medical tool. It could encode fragments of virus to teach the immune system to defend against pathogens. It could also create whole proteins that are missing or damaged in people with devastating genetic diseases, such as cystic fibrosis.
  • In 2005, the pair [Katalin Kariko and Drew Weissman] discovered a way to modify RNA, chemically tweaking one of the letters of its code, so it didn’t trigger an inflammatory response. Deborah Fuller, a scientist who works on RNA and DNA vaccines at the University of Washington, said that work deserves a Nobel Prize.
  • messenger RNA posed a bigger challenge than other targets. “It’s tougher — it’s a much bigger molecule, it’s much more unstable,”
  • Unlike fields that were sparked by a single powerful insight, Sahin said that the recent success of messenger RNA vaccines is a story of countless improvements that turned an alluring biological idea into a beneficial technology.
  • “This is a field which benefited from hundreds of inventions,” said Sahin, who noted that when he started BioNTech in 2008, he cautioned investors that the technology would not yield a product for at least a decade. He kept his word: Until the coronavirus sped things along, BioNTech projected the launch of its first commercial project in 2023.
  • “It’s new to you,” Fuller said. “But for basic researchers, it’s been long enough. . . . Even before covid, everyone was talking: RNA, RNA, RNA.”
  • All vaccines are based on the same underlying idea: training the immune system to block a virus. Old-fashioned vaccines do this work by injecting dead or weakened viruses
  • Newer vaccines use distinctive bits of the virus, such as proteins on their surface, to teach the lesson. The latest genetic techniques, like messenger RNA, don’t take as long to develop because those virus bits don’t have to be generated in a lab. Instead, the vaccine delivers a genetic code that instructs cells to build those characteristic proteins themselves.
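The last sentence describes a lookup: ribosomes read messenger RNA three letters at a time and map each codon to an amino acid. A minimal Python sketch of that translation step follows, using only a few entries of the standard codon table; the input sequence is invented for illustration and bears no relation to a real vaccine transcript.

```python
# Toy illustration of translation: mRNA codons -> amino acids.
# Only a handful of entries from the standard codon table are included;
# the input sequence is a made-up example, not a real spike transcript.
CODON_TABLE = {
    "AUG": "Met",  # start codon
    "UUU": "Phe",
    "GGU": "Gly",
    "UAA": None,   # stop codon
}

def translate(mrna: str) -> list[str]:
    """Read the mRNA three letters at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid is None:  # stop codon: the chain is released
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGUUAA"))  # ['Met', 'Phe', 'Gly']
```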
  • Severe acute respiratory syndrome had emerged in 2003. Middle East respiratory syndrome (MERS) broke out in 2012. It seemed clear to Graham and Jason McLellan, a structural biologist now at the University of Texas at Austin, that new coronaviruses were jumping into people on a 10-year clock and it might be time to brace for the next one.
  • That infection opened Graham’s eyes to an opportunity. HKU1 was merely a nuisance, as opposed to a deadly pneumonia; that meant it would be easier to work with in the lab, since researchers wouldn’t have to don layers of protective gear and work in a pressurized laboratory.
  • They wanted the immune system to learn to recognize the thumb tack spike, so McLellan tasked a scientist in his laboratory with identifying genetic mutations that could anchor the protein into the right configuration. It was a painstaking process for Nianshuang Wang, who now works at a biotechnology company, Regeneron Pharmaceuticals. After trying hundreds of genetic mutations, he found two that worked. Five journals rejected the finding, questioning its significance, before it was published in 2017.
  • Last winter, when Graham heard rumblings of a new coronavirus in China, he brought the team back together. Once its genome was shared online by Chinese scientists, the laboratories in Texas and Maryland designed a vaccine, utilizing the stabilizing mutations and the knowledge they had gained from years of basic research — a weekend project thanks to the dividends of all that past work.
  • Graham needed a technology that could deliver it into the body — and had already been working with Moderna, using its messenger RNA technology to create a vaccine against a different bat virus, Nipah, as a dress rehearsal for a real pandemic. Moderna and NIH set the Nipah project aside and decided to go forward with a coronavirus vaccine.
  • On Jan. 13, Moderna’s Moore came into work and found her team already busy translating the stabilized spike protein into their platform. The company could start making the vaccine almost right away because of its experience manufacturing experimental cancer vaccines, which involves taking tumor samples and developing personalized vaccines in 45 days.
  • At BioNTech, Sahin said that even in the early design phases of its vaccine candidates, he incorporated the slight genetic changes designed in Graham’s lab that would make the spike look more like the real thing. At least two other companies would incorporate that same spike.
  • If all goes well with regulators, the coronavirus vaccines have the makings of a pharmaceutical industry fairy tale. The world faced an unparalleled threat, and companies leaped into the fight. Pfizer plowed $2 billion into the effort. Massive infusions of government cash helped remove the financial risks for Moderna.
  • But the world will also owe their existence to many scientists outside those companies, in government and academia, who pursued ideas they thought were important even when the world doubted them
  • “They’re using the technology that [Kariko] and I developed,” he said. “We feel like it’s our vaccine, and we are incredibly excited — at how well it’s going, and how it’s going to be used to get rid of this pandemic.”
  • As executives become billionaires, many scientists think it is fair to earn money from their inventions that can help them do more important work. But McLellan’s laboratory at the University of Texas is proud to have licensed an even more potent version of their spike protein, royalty-free, to be incorporated into a vaccine for low and middle income countries.
  • Some of those scientists will receive remuneration, since their inventions are licensed and integrated into the products that could save the world.
  • “People hear about [vaccine progress] and think someone just thought about it that night. The amount of work — it’s really a beautiful story of fundamental basic research,” Fauci said. “It was chancy, in the sense that [the vaccine technology] was new. We were aware there would be pushback. The proof in the pudding is a spectacular success.”
  • The Vaccine Research Center, where Graham is deputy director, was the brainchild of Anthony S. Fauci, director of the National Institute of Allergy and Infectious Diseases. It was created in 1997 to bring together scientists and physicians from different disciplines to defeat diseases, with a heavy focus on HIV.
  • the pandemic wasn’t a sudden eureka moment — it was a catalyst that helped ignite lines of research that had been moving forward for years, far outside the spotlight of a global crisis.
Javier E

Cognitive Biases and the Human Brain - The Atlantic - 1 views

  • Present bias shows up not just in experiments, of course, but in the real world. Especially in the United States, people egregiously undersave for retirement—even when they make enough money to not spend their whole paycheck on expenses, and even when they work for a company that will kick in additional funds to retirement plans when they contribute.
  • When people hear the word bias, many if not most will think of either racial prejudice or news organizations that slant their coverage to favor one political position over another. Present bias, by contrast, is an example of cognitive bias—the collection of faulty ways of thinking that is apparently hardwired into the human brain. The collection is large. Wikipedia’s “List of cognitive biases” contains 185 entries, from actor-observer bias (“the tendency for explanations of other individuals’ behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation … and for explanations of one’s own behaviors to do the opposite”) to the Zeigarnik effect (“uncompleted or interrupted tasks are remembered better than completed ones”)
  • If I had to single out a particular bias as the most pervasive and damaging, it would probably be confirmation bias. That’s the effect that leads us to look for evidence confirming what we already think or suspect, to view facts and ideas we encounter as further confirmation, and to discount or ignore any piece of evidence that seems to support an alternate view
  • ...48 more annotations...
  • Confirmation bias shows up most blatantly in our current political divide, where each side seems unable to allow that the other side is right about anything.
  • The whole idea of cognitive biases and faulty heuristics—the shortcuts and rules of thumb by which we make judgments and predictions—was more or less invented in the 1970s by Amos Tversky and Daniel Kahneman
  • Tversky died in 1996. Kahneman won the 2002 Nobel Prize in Economics for the work the two men did together, which he summarized in his 2011 best seller, Thinking, Fast and Slow. Another best seller, last year’s The Undoing Project, by Michael Lewis, tells the story of the sometimes contentious collaboration between Tversky and Kahneman
  • Another key figure in the field is the University of Chicago economist Richard Thaler. One of the biases he’s most linked with is the endowment effect, which leads us to place an irrationally high value on our possessions.
  • In an experiment conducted by Thaler, Kahneman, and Jack L. Knetsch, half the participants were given a mug and then asked how much they would sell it for. The average answer was $5.78. The rest of the group said they would spend, on average, $2.21 for the same mug. This flew in the face of classic economic theory, which says that at a given time and among a certain population, an item has a market value that does not depend on whether one owns it or not. Thaler won the 2017 Nobel Prize in Economics.
  • “The question that is most often asked about cognitive illusions is whether they can be overcome. The message … is not encouraging.”
  • that’s not so easy in the real world, when we’re dealing with people and situations rather than lines. “Unfortunately, this sensible procedure is least likely to be applied when it is needed most,” Kahneman writes. “We would all like to have a warning bell that rings loudly whenever we are about to make a serious error, but no such bell is available.”
  • At least with the optical illusion, our slow-thinking, analytic mind—what Kahneman calls System 2—will recognize a Müller-Lyer situation and convince itself not to trust the fast-twitch System 1’s perception
  • Kahneman and others draw an analogy based on an understanding of the Müller-Lyer illusion, two parallel lines with arrows at each end. One line’s arrows point in; the other line’s arrows point out. Because of the direction of the arrows, the latter line appears shorter than the former, but in fact the two lines are the same length.
  • Because biases appear to be so hardwired and inalterable, most of the attention paid to countering them hasn’t dealt with the problematic thoughts, judgments, or predictions themselves
  • Is it really impossible, however, to shed or significantly mitigate one’s biases? Some studies have tentatively answered that question in the affirmative.
  • what if the person undergoing the de-biasing strategies was highly motivated and self-selected? In other words, what if it was me?
  • Over an apple pastry and tea with milk, he told me, “Temperament has a lot to do with my position. You won’t find anyone more pessimistic than I am.”
  • I met with Kahneman
  • “I see the picture as unequal lines,” he said. “The goal is not to trust what I think I see. To understand that I shouldn’t believe my lying eyes.” That’s doable with the optical illusion, he said, but extremely difficult with real-world cognitive biases.
  • In this context, his pessimism relates, first, to the impossibility of effecting any changes to System 1—the quick-thinking part of our brain and the one that makes mistaken judgments tantamount to the Müller-Lyer line illusion
  • The most effective check against them, as Kahneman says, is from the outside: Others can perceive our errors more readily than we can.
  • “slow-thinking organizations,” as he puts it, can institute policies that include the monitoring of individual decisions and predictions. They can also require procedures such as checklists and “premortems,”
  • A premortem attempts to counter optimism bias by requiring team members to imagine that a project has gone very, very badly and write a sentence or two describing how that happened. Conducting this exercise, it turns out, helps people think ahead.
  • “My position is that none of these things have any effect on System 1,” Kahneman said. “You can’t improve intuition.
  • Perhaps, with very long-term training, lots of talk, and exposure to behavioral economics, what you can do is cue reasoning, so you can engage System 2 to follow rules. Unfortunately, the world doesn’t provide cues. And for most people, in the heat of argument the rules go out the window.
  • Kahneman describes an even earlier Nisbett article that showed subjects’ disinclination to believe statistical and other general evidence, basing their judgments instead on individual examples and vivid anecdotes. (This bias is known as base-rate neglect.)
  • over the years, Nisbett had come to emphasize in his research and thinking the possibility of training people to overcome or avoid a number of pitfalls, including base-rate neglect, fundamental attribution error, and the sunk-cost fallacy.
  • Nisbett’s second-favorite example is that economists, who have absorbed the lessons of the sunk-cost fallacy, routinely walk out of bad movies and leave bad restaurant meals uneaten.
  • When Nisbett asks the same question of students who have completed the statistics course, about 70 percent give the right answer. He believes this result shows, pace Kahneman, that the law of large numbers can be absorbed into System 2—and maybe into System 1 as well, even when there are minimal cues.
  • about half give the right answer: the law of large numbers, which holds that outlier results are much more frequent when the sample size (at bats, in this case) is small. Over the course of the season, as the number of at bats increases, regression to the mean is inevitable
  • When Nisbett has to give an example of his approach, he usually brings up the baseball-phenom survey. This involved telephoning University of Michigan students on the pretense of conducting a poll about sports, and asking them why there are always several Major League batters with .450 batting averages early in a season, yet no player has ever finished a season with an average that high.
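The baseball question turns on the law of large numbers, which is easy to check by simulation. In the sketch below, every batter is assumed to have the same true hit probability and at-bats are independent; the parameters (a .270 true average, 200 batters) are invented for illustration.

```python
import random

def batting_average(true_skill: float, at_bats: int) -> float:
    """Simulate a stretch of independent at-bats and return the average."""
    hits = sum(random.random() < true_skill for _ in range(at_bats))
    return hits / at_bats

random.seed(0)                 # reproducible toy run
TRUE_SKILL = 0.270             # assume every batter truly hits .270
N_BATTERS = 200                # hypothetical league

for at_bats in (20, 500):      # early season vs. full season
    over_450 = sum(
        batting_average(TRUE_SKILL, at_bats) >= 0.450
        for _ in range(N_BATTERS)
    )
    print(f"{at_bats:>3} at-bats: {over_450} of {N_BATTERS} batters at .450 or better")
```

With 20 at-bats, a few batters clear .450 on luck alone; with 500, essentially none do. As samples grow, averages regress toward the true skill.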
  • we’ve tested Michigan students over four years, and they show a huge increase in ability to solve problems. Graduate students in psychology also show a huge gain.”
  • “I know from my own research on teaching people how to reason statistically that just a few examples in two or three domains are sufficient to improve people’s reasoning for an indefinitely large number of events.”
  • Nisbett suggested another factor: “You and Amos specialized in hard problems for which you were drawn to the wrong answer. I began to study easy problems, which you guys would never get wrong but untutored people routinely do … Then you can look at the effects of instruction on such easy problems, which turn out to be huge.”
  • Nisbett suggested that I take “Mindware: Critical Thinking for the Information Age,” an online Coursera course in which he goes over what he considers the most effective de-biasing skills and concepts. Then, to see how much I had learned, I would take a survey he gives to Michigan undergraduates. So I did.
  • The course consists of eight lessons by Nisbett—who comes across on-screen as the authoritative but approachable psych professor we all would like to have had—interspersed with some graphics and quizzes. I recommend it. He explains the availability heuristic this way: “People are surprised that suicides outnumber homicides, and drownings outnumber deaths by fire. People always think crime is increasing” even if it’s not.
  • When I finished the course, Nisbett sent me the survey he and colleagues administer to Michigan undergrads
  • It contains a few dozen problems meant to measure the subjects’ resistance to cognitive biases
  • I got it right. Indeed, when I emailed my completed test, Nisbett replied, “My guess is that very few if any UM seniors did as well as you. I’m sure at least some psych students, at least after 2 years in school, did as well. But note that you came fairly close to a perfect score.”
  • Nevertheless, I did not feel that reading Mindware and taking the Coursera course had necessarily rid me of my biases
  • For his part, Nisbett insisted that the results were meaningful. “If you’re doing better in a testing context,” he told me, “you’ll jolly well be doing better in the real world.”
  • The New York–based NeuroLeadership Institute offers organizations and individuals a variety of training sessions, webinars, and conferences that promise, among other things, to use brain science to teach participants to counter bias. This year’s two-day summit will be held in New York next month; for $2,845, you could learn, for example, “why are our brains so bad at thinking about the future, and how do we do it better?”
  • Philip E. Tetlock, a professor at the University of Pennsylvania’s Wharton School, and his wife and research partner, Barbara Mellers, have for years been studying what they call “superforecasters”: people who manage to sidestep cognitive biases and predict future events with far more accuracy than the pundits
  • One of the most important ingredients is what Tetlock calls “the outside view.” The inside view is a product of fundamental attribution error, base-rate neglect, and other biases that are constantly cajoling us into resting our judgments and predictions on good or vivid stories instead of on data and statistics
  • In 2006, seeking to prevent another mistake of that magnitude, the U.S. government created the Intelligence Advanced Research Projects Activity (IARPA), an agency designed to use cutting-edge research and technology to improve intelligence-gathering and analysis. In 2011, IARPA initiated a program, Sirius, to fund the development of “serious” video games that could combat or mitigate what were deemed to be the six most damaging biases: confirmation bias, fundamental attribution error, the bias blind spot (the feeling that one is less biased than the average person), the anchoring effect, the representativeness heuristic, and projection bias (the assumption that everybody else’s thinking is the same as one’s own).
  • most promising are a handful of video games. Their genesis was in the Iraq War
  • Together with collaborators who included staff from Creative Technologies, a company specializing in games and other simulations, and Leidos, a defense, intelligence, and health research company that does a lot of government work, Morewedge devised Missing. Some subjects played the game, which takes about three hours to complete, while others watched a video about cognitive bias. All were tested on bias-mitigation skills before the training, immediately afterward, and then finally after eight to 12 weeks had passed.
  • “The literature on training suggests books and classes are fine entertainment but largely ineffectual. But the game has very large effects. It surprised everyone.”
  • He said he saw the results as supporting the research and insights of Richard Nisbett. “Nisbett’s work was largely written off by the field, the assumption being that training can’t reduce bias,
  • even the positive results reminded me of something Daniel Kahneman had told me. “Pencil-and-paper doesn’t convince me,” he said. “A test can be given even a couple of years later. But the test cues the test-taker. It reminds him what it’s all about.”
  • Morewedge told me that some tentative real-world scenarios along the lines of Missing have shown “promising results,” but that it’s too soon to talk about them.
  • In the future, I will monitor my thoughts and reactions as best I can
demetriar

Instagram account of University of Pennsylvania runner showed only part of story - 0 views

  • THE LIFE MADISON projected on her own Instagram feed was filled with shots that seemed to confirm everyone's expectations: Of course she was loving her first year of college. Of course she enjoyed running. Her mom remembers looking at a photo on her feed and saying, "Madison, you look like you're so happy at this party." "Mom," Madison said. "It's just a picture."
  • She seemed acutely aware that the life she was curating online was distinctly different from the one she was actually living. Yet she could not apply that same logic when she looked at the projected lives of others.
  • Everyone presents an edited version of life on social media. People share moments that reflect an ideal life, an ideal self. Hundreds of years ago, we sent letters by horseback, containing only what we wanted the recipient to read. Fifty years ago, we spoke via the telephone, sharing only the details that constructed the self we wanted reflected.
  • ...1 more annotation...
  • Madison's high school friends had told her they were also struggling. Emma Sullivan was running track at Boston College and having a hard time. Another friend, Jackie Reyneke, was playing basketball at Princeton and feeling overwhelmed. They had all shared some form of their struggles with Madison, yet in her mind, the lives her friends were projecting on social media trumped the reality they were privately sharing.
Javier E

Why Science Majors Change Their Minds (It's Just So Darn Hard) - NYTimes.com - 1 views

  • roughly 40 percent of students planning engineering and science majors end up switching to other subjects or failing to get any degree. That increases to as much as 60 percent when pre-medical students, who typically have the strongest SAT scores and high school science preparation, are included
  • the attrition rate can be higher at the most selective schools, where he believes the competition overwhelms even well-qualified students.
  • the main majors are difficult and growing more complex. Some students still lack math preparation or aren’t willing to work hard enough.
  • ...4 more annotations...
  • there could be more subtle problems at work, like the proliferation of grade inflation in the humanities and social sciences, which provides another incentive for students to leave STEM majors. It is no surprise that grades are lower in math and science, where the answers are clear-cut and there are no bonus points for flair. Professors also say they are strict because science and engineering courses build on one another, and a student who fails to absorb the key lessons in one class will flounder in the next.
  • The National Science Board, a public advisory body, warned in the mid-1980s that students were losing sight of why they wanted to be scientists and engineers in the first place. Research confirmed in the 1990s that students learn more by grappling with open-ended problems, like creating a computer game or designing an alternative energy system, than by listening to lectures. The National Science Foundation went on to finance pilot courses that employed interactive projects, but when the money dried up, so did most of the courses. Lecture classes are far cheaper to produce, and top professors are focused on bringing in research grants, not teaching undergraduates.
  • Since becoming Notre Dame’s dean in 2008, Dr. Kilpatrick has revamped and expanded a freshman design course that had gotten “a little bit stale.” The students now do four projects. They build Lego robots and design bridges capable of carrying heavy loads at minimal cost. They also create electronic circuit boards and dream up a project of their own.
  • Some new students do not have a good feel for how deeply technical engineering is. Other bright students may have breezed through high school without developing disciplined habits. By contrast, students in China and India focus relentlessly on math and science from an early age.
Javier E

Jordan Peterson Comes to Aspen - The Atlantic - 0 views

  • Peterson is traveling the English-speaking world in order to spread the message of this core conviction: that the way to fix what ails Western societies is a psychological project, targeted at helping individuals to get their lives in order, not a sociological project that seeks to improve society through politics, or popular culture, or by focusing on class, racial, or gender identity.
  • the Aspen Ideas Festival, which is co-sponsored by the Aspen Institute and The Atlantic, was an anomaly in this series of public appearances: a gathering largely populated by people—Democrats and centrist Republicans, corporate leaders, academics, millionaire philanthropists, journalists—invested in the contrary proposition, that the way to fix what ails society is a sociological project, one that effects change by focusing on politics, or changing popular culture, or spurring technological advances, or investing more in diversity and inclusiveness.
  • Many of its attendees, like many journalists, are most interested in Peterson as a political figure at the center of controversies
  • ...21 more annotations...
  • Peterson deserves a full, appropriately complex accounting of his best and worst arguments; I intend to give him one soon. For now, I can only tell you how the Peterson phenomenon manifested one night in Aspen
  • “For the first time in human history the spoken word has the same reach as the written word, and there are no barriers to entry. That’s a Gutenberg revolution,” he said. “That’s a big deal. This is a game changer. The podcast world is also a Gutenberg moment but it’s even more extensive. The problem with books is that you can’t do anything else while you’re reading. But if you’re listening to a podcast you can be driving a tractor or a long haul truck or doing the dishes. So podcasts free up two hours a day for people to engage in educational activity they otherwise wouldn’t be able to engage in. That’s one-eighth of people’s lives. You’re handing people a lot of time back to engage in high-level intellectual education.
  • that technological revolution has revealed something good that we didn’t know before: “The narrow bandwidth of TV has made us think that we are stupider than we are. And people have a real hunger for deep intellectual dialogue.”
  • I’ve known for years that the university underserved the community, because we assumed that university education is for 18- to 22-year-olds, which is a proposition that’s so absurd it is absolutely mind-boggling that anyone ever conceptualized it. Why wouldn’t you take university courses throughout your entire life? What, you stop searching for wisdom when you’re 22? I don’t think so. You don’t even start until you’re like in your mid 20s. So I knew universities were underserving the broader community a long time ago. But there wasn’t a mechanism whereby that could be rectified.
  • Universities are beyond forgiveness, he argued, because due to the growing ranks of administrators, there’s been a radical increase in tuition. “Unsuspecting students are given free access to student loans that will cripple them through their 30s and their 40s, and the universities are enticing them to extend their carefree adolescence for a four year period at the cost of mortgaging their future in a deal that does not allow for escape through bankruptcy,” he complained. “So it’s essentially a form of indentured servitude. There’s no excuse for that … That cripples the economy because the students become overlaid with debt that they’ll never pay off at the time when they should be at the peak of their ability to take entrepreneurial risks. That’s absolutely appalling.”
  • A critique I frequently hear from Peterson’s critics is that everything he says is either obvious or wrong. I think that critique fails insofar as I sometimes see some critics calling one of his statements obvious even as others insist it is obviously wrong.
  • a reliable difference among men and women cross-culturally is that men are more aggressive than women. Now what's the evidence for that? Here's one piece of evidence: There are 10 times as many men in prison. Now is that a sociocultural construct? It's like, no, it's not a sociocultural construct. Okay?
  • Here's another piece of data. Women try to commit suicide more than men by a lot, and that's because women are more prone to depression and anxiety than men are. And there are reasons for that, and that's cross-cultural as well. Now men are way more likely to actually commit suicide. Why? Because they're more aggressive so they use lethal means. So now the question is how much more aggressive are men than women? The answer is not very much. So the claim that men and women are more the same than different is actually true. This is where you have to know something about statistics to understand the way the world works, instead of just applying your a priori ideological presuppositions to things that are too complex to fit in that rubric.
  • So if you draw two people out of a crowd, one man and one woman, and you had to lay a bet on who was more aggressive, and you bet on the woman, you'd win 40 percent of the time. That's quite a lot. It isn't 50 percent of the time which would be no differences. But it’s a lot. There are lots of women who are more aggressive than lots of men. So the curves overlap a lot. There's way more similarity than difference. And this is along the dimension where there's the most difference. But here's the problem. You can take small differences at the average of a distribution. Then the distributions move off to the side. And then all the action is at the tail. So here's the situation. You don't care about how aggressive the average person is. It's not that relevant. What people care about is who is the most aggressive person out of 100, because that's the person you'd better watch out for.
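The statistics in the quote above are easy to make concrete. In a toy model with two equal-variance normal curves, a 40 percent chance that a randomly drawn woman outscores a randomly drawn man corresponds to a shift of roughly 0.36 standard deviations between the means. The sketch below is a minimal illustration under those assumptions, not Peterson's data; it computes that overlap and shows how the ratio between groups grows toward the tail. Note that under these equal-variance assumptions, a 10-to-1 ratio would only appear at thresholds well beyond two standard deviations.

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Shift between the two curves (in SD units), chosen so that a randomly
# drawn woman is the more aggressive one 40% of the time:
# P(W > M) = Phi(-d / sqrt(2)) = 0.40  =>  d is about 0.358.
d = 0.358

print(f"P(random woman > random man): {norm_cdf(-d / math.sqrt(2.0)):.2f}")

# With men ~ N(d, 1) and women ~ N(0, 1), the between-group ratio above
# a cutoff grows as the cutoff moves into the tail.
for t in (1.0, 2.0, 3.0, 4.0):
    men_tail = 1.0 - norm_cdf(t - d)
    women_tail = 1.0 - norm_cdf(t)
    print(f"threshold {t:.0f} SD: men/women ratio = {men_tail / women_tail:.1f}")
```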
  • Whenever I'm interviewed by journalists who have the scent of blood in their nose, let's say, they're very willing and able to characterize the situation I find myself in as political. But that's because they can't see the world in any other manner. The political is a tiny fraction of the world. And what I'm doing isn't political. It's psychological or theological. The political element is peripheral. And if people come to the live lectures, let's say, that's absolutely self-evident
  • In a New York Times article titled, “Jordan Peterson, Custodian of the Patriarchy,” the writer Nellie Bowles quoted her subject as follows:
  • Violent attacks are what happens when men do not have partners, Mr. Peterson says, and society needs to work to make sure those men are married. “He was angry at God because women were rejecting him,” Mr. Peterson says of the Toronto killer. “The cure for that is enforced monogamy. That’s actually why monogamy emerges.” Mr. Peterson does not pause when he says this. Enforced monogamy is, to him, simply a rational solution. Otherwise women will all only go for the most high-status men, he explains, and that couldn’t make either gender happy in the end.
  • Ever since, some Peterson critics have claimed that Peterson wants to force women to have sex with male incels, or something similarly dystopian.
  • ...it's an anthropological truism generated primarily through scholars on the left, just so everybody is clear about it, that societies that use monogamy as a social norm, which by the way is virtually every human society that ever existed, do that in an attempt to control the aggression that goes along with polygamy. It's like ‘Oh my God, how contentious can you get.’ Well, how many of you are in monogamous relationships? A majority. How is that enforced?...
  • If everyone you talk to is boring it’s not them! And so if you're rejected by the opposite sex, if you’re heterosexual, then you're wrong, they're not wrong, and you've got some work to do, man. You've got some difficult work to do. And there isn't anything I've been telling young men that's clearer than that … What I've been telling people is take the responsibility for failure onto yourself. That's a hint that you've got work to do. It could also be a hint that you're young and useless and why the hell would anybody have anything to do with you because you don't have anything to offer. And that's rectifiable. Maturity helps to rectify that.
  • And what's the gender? Men. Because if you go two standard deviations out from the mean on two curves that overlap but are disjointed, then you derive an overwhelming preponderance of the overrepresented group. That's why men are about 10 times more likely to be in prison.
  • Weiss: You are often characterized, at least in the mainstream press, as being transphobic. If you had a student come to you and say, I was born female, I now identify as male, I want you to call me by male pronouns. Would you say yes to that?
  • Peterson: Well, it would depend on the student and the context and why I thought they were asking me and what I believe their demand actually characterized, and all of that. Because that can be done in a way that is genuine and acceptable, and a way that is manipulative and unacceptable. And if it was genuine and acceptable then I would have no problem with it. And if it was manipulative and unacceptable then not a chance. And you might think, ‘Well, who am I to judge?’ Well, first of all, I am a clinical psychologist, I've talked to people for about 25,000 hours. And I'm responsible for judging how I am going to use my words. I'd judge the same way I judge all my interactions with people, which is to the best of my ability, and characterized by all the errors that I'm prone to. I'm not saying that my judgment would be unerring. I live with the consequences and I'm willing to accept the responsibility.
  • But also to be clear about this, it never happened––I never refused to call anyone by anything they had asked me to call them by, although that's been reported multiple times. It's a complete falsehood. And it had nothing to do with the transgender issue as far as I'm concerned.
  • type one and type two error problem
  • note what his avowed position is: that he has never refused to call a transgender person by their preferred pronoun, that he has done so many times, that he would always try to err on the side of believing a request to be earnest, and that he reserves the right to decline a request he believes to be in bad faith. Whether one finds that to be reasonable or needlessly difficult, it seems irresponsible to tell trans people that a prominent intellectual hates them or is deeply antagonistic to them when the only seeming conflict is utterly hypothetical and ostensibly not even directed against people that Peterson believes to be trans, but only against people whom he does not believe to be trans
grayton downing

Retracing Steps | The Scientist Magazine® - 1 views

  • growing body of research has highlighted scientists’ inability to reproduce one another’s results, including a 2012 study that found only 11 percent of “landmark” cancer studies investigated could be independently confirmed.
  • “Some communities have standards requiring raw data to be deposited at or before publication, but the computer code is generally not made available, typically due to the time it takes to prepare it for release,”
  • Sage’s solution? An open-source computational platform, called Synapse, which enables seamless collaboration among geographically dispersed scientific teams—providing them with the tools to share data, source code, and analysis methods on specific research projects or on any of the 10,000 datasets in the organization’s massive data corpus. Key to these collaborations are tools embedded in Synapse that allow for everything from data “freezing” and versioning controls to graphical provenance records—delineating who did what to which dataset, for example.
  • ...2 more annotations...
  • “It was indeed the connecting data framework that held the entire project together,” said Josh Stuart, professor of biomolecular engineering at the University of California, Santa Cruz, who is part of the TCGA-led project.
  • “It provides a framework for the science to be extended upon, instead of publication as a finite endpoint for research,”
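To make the "who did what to which dataset" idea concrete, here is a minimal sketch of what one provenance record might capture, linking frozen dataset versions to the step that produced them. This is an illustrative data structure only, not Synapse's actual API; every identifier in it is hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical provenance record in the spirit of Synapse's tracking;
# not the real Synapse client API.
@dataclass
class ProvenanceRecord:
    actor: str            # who performed the step
    action: str           # what was done
    inputs: list[str]     # dataset/code versions consumed
    outputs: list[str]    # dataset versions produced
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# A chain of such records lets collaborators audit or re-run an
# analysis end to end instead of trusting a published endpoint.
steps = [
    ProvenanceRecord(
        actor="jstuart",
        action="normalize expression matrix",
        inputs=["tcga_raw@v3", "normalize.py@9f2c1a"],
        outputs=["tcga_normalized@v1"],
    ),
]
for s in steps:
    print(f"{s.actor}: {s.action} ({', '.join(s.inputs)} -> {', '.join(s.outputs)})")
```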
sissij

These Wearables Are All About Neuroscience | Big Think - 0 views

  • Artist, writer, and experimental philosopher Jonathon Keats, fresh from his recent Reciprocal Biomimicry project, is back, and this time it’s wearable.
  • It’s clothing designed to alter one’s self-perception.
  • Wearing clothes that make you feel good isn’t new, of course, but Keats’ press release claims to be “applying cutting-edge neuroscience to millennia of costume history.”
  • ...3 more annotations...
  • Superego shades have irises that open and close in sync with the wearer’s breathing, raising his or her consciousness of his or her respiration.
  • The bracelets can encourage the wearer to assume a “power pose,” boosting self-assurance through the release of testosterone.
  • Superego shoes offer heels whose height can be adjusted to ensure the wearer is always taller than anyone with whom he or she is speaking.
  •  
    I think it is very interesting that even these wearable designs can be related to neuroscience, since the two subjects seem so far apart to me. The designs are fascinating because they combine ideas from science with artistic design. As we learned in English during our speech project, a power pose is a standing position that can strengthen our confidence and persuasiveness; clothing specially designed for it can guide us into that posture. I think this is a fantastic idea. I really like the height-adjustable high heel. As a short person, I know how it feels to have to raise your head to talk to people. --Sissi (3/12/2017)
Javier E

Why these friendly robots can't be good friends to our kids - The Washington Post - 0 views

  • before adding a sociable robot to the holiday gift list, parents may want to pause to consider what they would be inviting into their homes. These machines are seductive and offer the wrong payoff: the illusion of companionship without the demands of friendship, the illusion of connection without the reciprocity of a mutual relationship. And interacting with these empathy machines may get in the way of children’s ability to develop a capacity for empathy themselves.
  • In our study, the children were so invested in their relationships with Kismet and Cog that they insisted on understanding the robots as living beings, even when the roboticists explained how the machines worked or when the robots were temporarily broken.
  • The children took the robots’ behavior to signify feelings. When the robots interacted with them, the children interpreted this as evidence that the robots liked them. And when the robots didn’t work on cue, the children likewise took it personally. Their relationships with the robots affected their state of mind and self-esteem.
  • ...14 more annotations...
  • We were led to wonder whether a broken robot can break a child.
  • Kids are central to the sociable-robot project, because its agenda is to make people more comfortable with robots in roles normally reserved for humans, and robotics companies know that children are vulnerable consumers who can bring the whole family along.
  • In October, Mattel scrapped plans for Aristotle — a kind of Alexa for the nursery, designed to accompany children as they progress from lullabies and bedtime stories through high school homework — after lawmakers and child advocacy groups argued that the data the device collected about children could be misused by Mattel, marketers, hackers and other third parties. I was part of that campaign: There is something deeply unsettling about encouraging children to confide in machines that are in turn sharing their conversations with countless others.
  • Recently, I opened my MIT mail and found a “call for subjects” for a study involving sociable robots that will engage children in conversation to “elicit empathy.” What will these children be empathizing with, exactly? Empathy is a capacity that allows us to put ourselves in the place of others, to know what they are feeling. Robots, however, have no emotions to share
  • What they can do is push our buttons. When they make eye contact and gesture toward us, they predispose us to view them as thinking and caring. They are designed to be cute, to provoke a nurturing response. And when it comes to sociable AI, nurturance is the killer app: We nurture what we love, and we love what we nurture. If a computational object or robot asks for our help, asks us to teach it or tend to it, we attach. That is our human vulnerability.
  • digital companions don’t understand our emotional lives. They present themselves as empathy machines, but they are missing the essential equipment: They have not known the arc of a life. They have not been born; they don’t know pain, or mortality, or fear. Simulated thinking may be thinking, but simulated feeling is never feeling, and simulated love is never love.
  • Breazeal’s position is this: People have relationships with many classes of things. They have relationships with children and with adults, with animals and with machines. People, even very little people, are good at this. Now, we are going to add robots to the list of things with which we can have relationships. More powerful than with pets. Less powerful than with people. We’ll figure it out.
  • The nature of the attachments to dolls and sociable machines is different. When children play with dolls, they project thoughts and emotions onto them. A girl who has broken her mother’s crystal will put her Barbies into detention and use them to work on her feelings of guilt. The dolls take the role she needs them to take.
  • Sociable machines, by contrast, have their own agenda. Playing with robots is not about the psychology of projection but the psychology of engagement. Children try to meet the robot’s needs, to understand the robot’s unique nature and wants. There is an attempt to build a mutual relationship.
  • Some people might consider that a good thing: encouraging children to think beyond their own needs and goals. Except the whole commercial program is an exercise in emotional deception.
  • when we offer these robots as pretend friends to our children, it’s not so clear they can wink with us. We embark on an experiment in which our children are the human subjects.
  • it is hard to imagine what those “right types” of ties might be. These robots can’t be in a two-way relationship with a child. They are machines whose art is to put children in a position of pretend empathy. And if we put our children in that position, we shouldn’t expect them to understand what empathy is. If we give them pretend relationships, we shouldn’t expect them to learn how real relationships — messy relationships — work. On the contrary. They will learn something superficial and inauthentic, but mistake it for real connection.
  • In the process, we can forget what is most central to our humanity: truly understanding each other.
  • For so long, we dreamed of artificial intelligence offering us not only instrumental help but the simple salvations of conversation and care. But now that our fantasy is becoming reality, it is time to confront the emotional downside of living with the robots of our dreams.
Javier E

The best time of day - and year - to work most effectively - The Washington Post - 0 views

  • Some of us are larks -- some of us are owls. But if you look at distribution, most of us are a little bit of both — what I call “third birds.”
  • There's a period of day when we’re at our peak, and that's best for doing analytic tasks, things like writing a report or auditing a financial statement. There's the trough, which is the dip -- that’s not good for anything. And then there's recovery, which is less optimal, but we do better at insight and creativity tasks.
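The peak/trough/recovery pattern amounts to a mapping from time of day to task type. Here is a minimal sketch for a morning-peaking "lark"; the hour boundaries are invented for illustration, since the interview names no specific times.

```python
# Toy encoding of the peak/trough/recovery pattern for a morning "lark".
# Hour boundaries are illustrative assumptions, not from the interview.
def suggested_task(hour: int) -> str:
    if 8 <= hour < 12:       # peak: analytic work
        return "analytic (write the report, audit the statement)"
    if 12 <= hour < 15:      # trough: not good for much
        return "routine admin, or take a restorative break"
    if 15 <= hour < 19:      # recovery: insight and creativity
        return "creative (brainstorming, open-ended design)"
    return "rest"

for h in (9, 13, 17):
    print(f"{h}:00 -> {suggested_task(h)}")
```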
  • the bigger issue here is that we have thought of "when" as a second-order question. We take questions of how we do things, what we do, and who we do it with very seriously, but we stick the "when" questions over at the kids’ table.
  • ...14 more annotations...
  • What is it about a new year? How does our psychology influence how we think about that and making fresh starts? We do what social psychologists call temporal accounting -- that is, we have a ledger in our head of how we are spending our time. What we’re trying to do, in some cases, is relegate our previous selves to the past: This year we’re going to do a lot better.
  • breaks are much more important than we realize.
  • Many hard-core workplaces think of breaks as a deviation from performance, when in fact the science of breaks tells us they’re a part of performance.
  • Research shows us that social breaks are better than solo breaks -- taking a break with somebody else is more restorative than doing it on your own. A break that involves movement is better than a stationary one. And then there's the restorative power in nature. Simply going outside rather than being inside, simply being able to look out a window during a break is better. And there's the importance of being fully detached,
  • Every day I write down two breaks that I’m going to take. I make a 'break list,' and I try to treat them with the same reverence with which I’d treat scheduled meetings. We would never skip a meeting.
  • When you're giving feedback to employees, should you give good news or bad news first?
  • Here’s where you should go first: If you’re not the default choice
  • If you are the default choice, you’re better off not going first. What happens is that early in a process, people are more likely to be open-minded, to challenge assumptions. But over time, they wear out, and they’re more likely to go with the default choice.
  • Also, if you’re operating in an uncertain environment -- and this is actually really important -- where the criteria for selections are not fully sharp, you’re better off going at the end. In the beginning, the judges are still trying to figure out what they want.
  • In fact, what researchers have found is that at the beginning, project teams pretty much do nothing. They bicker, they dicker. Yet astonishingly, many project teams she followed ended up really getting started in earnest at the exact midpoint. If you give a team 34 days, they’ll get started in earnest on day 17. This is actually a big shift in the way organizational scholars thought about how teams work.
  • There are two key things a leader can do at a midpoint. One is to identify it to make it salient: Say "ok guys, it’s day 17 of this 35 day project. We better get going."
  • The second comes from research on basketball. It shows that when teams are ahead at the midpoint, they get complacent. When they’re way behind at the midpoint, they get demoralized. But when they’re a little behind, it can be galvanizing. So what leaders can do is suggest: hey, we’re a little bit behind.
  • One of the issues you explore is when it pays to go first — whether you’re up for a competitive pitch or trying to get a job. When is it good to go first?
  • If you ask people what they prefer, four out of five prefer getting the bad news first. The reason has to do with endings. Given the choice, human beings prefer endings that elevate. We prefer endings that go up, that have a rising sequence rather than a declining sequence.