
TOK Friends: Group items tagged "standardized testing"


Javier E

The Story Behind the SAT Overhaul - NYTimes.com

  • “When you cover too many topics,” Coleman said, “the assessments designed to measure those standards are inevitably superficial.” He pointed to research showing that more students entering college weren’t prepared and were forced into “remediation programs from which they never escape.” In math, for example, if you examined data from top-performing countries, you found an approach that emphasized “far fewer topics, far deeper,” the opposite of the curriculums he found in the United States, which he described as “a mile wide and an inch deep.”
  • The lessons he brought with him from thinking about the Common Core were evident — that American education needed to be more focused and less superficial, and that it should be possible to test the success of the newly defined standards through an exam that reflected the material being taught in the classroom.
  • she and her team had extensive conversations with students, teachers, parents, counselors, admissions officers and college instructors, asking each group to tell them in detail what they wanted from the test. What they arrived at above all was that a test should reflect the most important skills that were imparted by the best teachers
  • for example, a good instructor would teach Martin Luther King Jr.’s “I Have a Dream” speech by encouraging a conversation that involved analyzing the text and identifying the evidence, both factual and rhetorical, that makes it persuasive. “The opposite of what we’d want is a classroom where a teacher might ask only: ‘What was the year the speech was given? Where was it given?’ ”
  • in the past, assembling the SAT focused on making sure the questions performed on technical grounds, meaning: Were they appropriately easy or difficult among a wide range of students, and were they free of bias when tested across ethnic, racial and religious subgroups? The goal was “maximizing differentiation” among kids, which meant finding items that were answered correctly by those students who were expected to get them right and incorrectly by the weaker students. A simple way of achieving this, Coleman said, was to test the kind of obscure vocabulary words for which the SAT was famous
  • In redesigning the test, the College Board shifted its emphasis. It prioritized content, measuring each question against a set of specifications that reflect the kind of reading and math that students would encounter in college and their work lives. Schmeiser and others then spent much of early last year watching students as they answered a set of 20 or so problems, discussing the questions with the students afterward. “The predictive validity is going to come out the same,” she said of the redesigned test. “But in the new test, we have much more control over the content and skills that are being measured.”
  • Evidence-based reading and writing, he said, will replace the current sections on reading and writing. It will use as its source materials pieces of writing — from science articles to historical documents to literature excerpts — which research suggests are important for educated Americans to know and understand deeply. “The Declaration of Independence, the Constitution, the Bill of Rights and the Federalist Papers,” Coleman said, “have managed to inspire an enduring great conversation about freedom, justice, human dignity in this country and the world” — therefore every SAT will contain a passage from either a founding document or from a text (like Lincoln’s Gettysburg Address) that is part of the “great global conversation” the founding documents inspired.
  • The Barbara Jordan vocabulary question would have a follow-up — “How do you know your answer is correct?” — to which students would respond by identifying lines in the passage that supported their answer.
  • The idea is that the test will emphasize words students should be encountering, like “synthesis,” which can have several meanings depending on context. Instead of encouraging students to memorize flashcards, the test should promote the idea that they must read widely throughout their high-school years.
  • No longer will it be good enough to focus on tricks and trying to eliminate answer choices. We are not interested in students just picking an answer, but justifying their answers.”
  • the essay portion of the test will also be reformulated so that it will always be the same, some version of: “As you read the passage in front of you, consider how the author uses evidence such as facts or examples; reasoning to develop ideas and to connect claims and evidence; and stylistic or persuasive elements to add power to the ideas expressed. Write an essay in which you explain how the author builds an argument to persuade an audience.”
  • The math section, too, will be predicated on research that shows that there are “a few areas of math that are a prerequisite for a wide range of college courses” and careers. Coleman conceded that some might treat the news that they were shifting away from more obscure math problems to these fewer fundamental skills as a dumbing down of the test, but he was adamant that this was not the case. He explained that there will be three areas of focus: problem solving and data analysis, which will include ratios and percentages and other mathematical reasoning used to solve problems in the real world; the “heart of algebra,” which will test how well students can work with linear equations (“a powerful set of tools that echo throughout many fields of study”); and what will be called the “passport to advanced math,” which will focus on the student’s familiarity with complex equations and their applications in science and social science.
  • “Sometimes in the past, there’s been a feeling that tests were measuring some sort of ineffable entity such as intelligence, whatever that might mean. Or ability, whatever that might mean. What this is is a clear message that good hard work is going to pay off and achievement is going to pay off. This is one of the most significant developments that I have seen in the 40-plus years that I’ve been working in admissions in higher education.”
  • The idea of creating a transparent test and then providing a free website that any student could use — not to learn gimmicks but to get a better grounding and additional practice in the core knowledge that would be tested — was appealing to Coleman.
  • (The College Board won’t pay Khan Academy.) They talked about a hypothetical test-prep experience in which students would log on to a personal dashboard, indicate that they wanted to prepare for the SAT and then work through a series of preliminary questions to demonstrate their initial skill level and identify the gaps in their knowledge. Khan said he could foresee a way to estimate the amount of time it would take to achieve certain benchmarks. “It might go something like, ‘O.K., we think you’ll be able to get to this level within the next month and this level within the next two months if you put in 30 minutes a day,’ ” he said. And he saw no reason the site couldn’t predict for anyone, anywhere the score he or she might hope to achieve with a commitment to a prescribed amount of work.
Javier E

Instagram's Algorithm Delivers Toxic Video Mix to Adults Who Follow Children - WSJ

  • Instagram’s Reels video service is designed to show users streams of short videos on topics the system decides will interest them, such as sports, fashion or humor. 
  • The Meta Platforms-owned social app does the same thing for users its algorithm decides might have a prurient interest in children, testing by The Wall Street Journal showed.
  • The Journal sought to determine what Instagram’s Reels algorithm would recommend to test accounts set up to follow only young gymnasts, cheerleaders and other teen and preteen influencers active on the platform.
  • “Our systems are effective at reducing harmful content, and we’ve invested billions in safety, security and brand suitability solutions,” said Samantha Stetson, a Meta vice president who handles relations with the advertising industry. She said the prevalence of inappropriate content on Instagram is low, and that the company invests heavily in reducing it.
  • The Journal set up the test accounts after observing that the thousands of followers of such young people’s accounts often include large numbers of adult men, and that many of the accounts who followed those children also had demonstrated interest in sex content related to both children and adults
  • The Journal also tested what the algorithm would recommend after its accounts followed some of those users as well, which produced more-disturbing content interspersed with ads.
  • The Canadian Centre for Child Protection, a child-protection group, separately ran similar tests on its own, with similar results.
  • Meta said the Journal’s tests produced a manufactured experience that doesn’t represent what billions of users see. The company declined to comment on why the algorithms compiled streams of separate videos showing children, sex and advertisements, but a spokesman said that in October it introduced new brand safety tools that give advertisers greater control over where their ads appear, and that Instagram either removes or reduces the prominence of four million videos suspected of violating its standards each month. 
  • The Journal reported in June that algorithms run by Meta, which owns both Facebook and Instagram, connect large communities of users interested in pedophilic content. The Meta spokesman said a task force set up after the Journal’s article has expanded its automated systems for detecting users who behave suspiciously, taking down tens of thousands of such accounts each month. The company also is participating in a new industry coalition to share signs of potential child exploitation.
  • Following what it described as Meta’s unsatisfactory response to its complaints, Match began canceling Meta advertising for some of its apps, such as Tinder, in October. It has since halted all Reels advertising and stopped promoting its major brands on any of Meta’s platforms. “We have no desire to pay Meta to market our brands to predators or place our ads anywhere near this content,” said Match spokeswoman Justine Sacco.
  • Even before the 2020 launch of Reels, Meta employees understood that the product posed safety concerns, according to former employees.
  • Robbie McKay, a spokesman for Bumble, said it “would never intentionally advertise adjacent to inappropriate content,” and that the company is suspending its ads across Meta’s platforms.
  • Meta created Reels to compete with TikTok, the video-sharing platform owned by Beijing-based ByteDance. Both products feed users a nonstop succession of videos posted by others, and make money by inserting ads among them. Both companies’ algorithms show a user the videos the platform calculates are most likely to keep that user engaged, based on his or her past viewing behavior.
  • The Journal reporters set up the Instagram test accounts as adults on newly purchased devices and followed the gymnasts, cheerleaders and other young influencers. The tests showed that following only the young girls triggered Instagram to begin serving videos from accounts promoting adult sex content alongside ads for major consumer brands, such as one for Walmart that ran after a video of a woman exposing her crotch. 
  • When the test accounts then followed some users who followed those same young people’s accounts, they yielded even more disturbing recommendations. The platform served a mix of adult pornography and child-sexualizing material, such as a video of a clothed girl caressing her torso and another of a child pantomiming a sex act.
  • Experts on algorithmic recommendation systems said the Journal’s tests showed that while gymnastics might appear to be an innocuous topic, Meta’s behavioral tracking has discerned that some Instagram users following preteen girls will want to engage with videos sexualizing children, and then directs such content toward them.
  • Current and former Meta employees said in interviews that the tendency of Instagram algorithms to aggregate child sexualization content from across its platform was known internally to be a problem. Once Instagram pigeonholes a user as interested in any particular subject matter, they said, its recommendation systems are trained to push more related content to them.
  • Preventing the system from pushing noxious content to users interested in it, they said, requires significant changes to the recommendation algorithms that also drive engagement for normal users. Company documents reviewed by the Journal show that the company’s safety staffers are broadly barred from making changes to the platform that might reduce daily active users by any measurable amount.
  • The test accounts showed that advertisements were regularly added to the problematic Reels streams. Ads encouraging users to visit Disneyland for the holidays ran next to a video of an adult acting out having sex with her father, and another of a young woman in lingerie with fake blood dripping from her mouth. An ad for Hims ran shortly after a video depicting an apparently anguished woman in a sexual situation along with a link to what was described as “the full video.”
  • Instagram’s system served jarring doses of salacious content to those test accounts, including risqué footage of children as well as overtly sexual adult videos—and ads for some of the biggest U.S. brands.
  • Part of the problem is that automated enforcement systems have a harder time parsing video content than text or still images. Another difficulty arises from how Reels works: Rather than showing content shared by users’ friends, the way other parts of Instagram and Facebook often do, Reels promotes videos from sources users don’t follow.
  • In an analysis conducted shortly before the introduction of Reels, Meta’s safety staff flagged the risk that the product would chain together videos of children and inappropriate content, according to two former staffers. Vaishnavi J, Meta’s former head of youth policy, described the safety review’s recommendation as: “Either we ramp up our content detection capabilities, or we don’t recommend any minor content,” meaning any videos of children.
  • At the time, TikTok was growing rapidly, drawing the attention of Instagram’s young users and the advertisers targeting them. Meta didn’t adopt either of the safety analysis’s recommendations at that time, according to J.
  • Stetson, Meta’s liaison with digital-ad buyers, disputed that Meta had neglected child safety concerns ahead of the product’s launch. “We tested Reels for nearly a year before releasing it widely, with a robust set of safety controls and measures,” she said. 
  • After initially struggling to maximize the revenue potential of its Reels product, Meta has improved how its algorithms recommend content and personalize video streams for users
  • Among the ads that appeared regularly in the Journal’s test accounts were those for “dating” apps and livestreaming platforms featuring adult nudity, massage parlors offering “happy endings” and artificial-intelligence chatbots built for cybersex. Meta’s rules are supposed to prohibit such ads.
  • The Journal informed Meta in August about the results of its testing. In the months since then, tests by both the Journal and the Canadian Centre for Child Protection show that the platform continued to serve up a series of videos featuring young children, adult content and apparent promotions for child sex material hosted elsewhere. 
  • As of mid-November, the center said Instagram is continuing to steadily recommend what the nonprofit described as “adults and children doing sexual posing.”
  • Meta hasn’t offered a timetable for resolving the problem or explained how in the future it would restrict the promotion of inappropriate content featuring children. 
  • The Journal’s test accounts found that the problem even affected Meta-related brands. Ads for the company’s WhatsApp encrypted chat service and Meta’s Ray-Ban Stories glasses appeared next to adult pornography. An ad for Lean In Girls, the young women’s empowerment nonprofit run by former Meta Chief Operating Officer Sheryl Sandberg, ran directly before a promotion for an adult sex-content creator who often appears in schoolgirl attire. Sandberg declined to comment. 
  • Through its own tests, the Canadian Centre for Child Protection concluded that Instagram was regularly serving videos and pictures of clothed children who also appear in the National Center for Missing and Exploited Children’s digital database of images and videos confirmed to be child abuse sexual material. The group said child abusers often use the images of the girls to advertise illegal content for sale in dark-web forums.
  • The nature of the content—sexualizing children without generally showing nudity—reflects the way that social media has changed online child sexual abuse, said Lianna McDonald, executive director for the Canadian center. The group has raised concerns about the ability of Meta’s algorithms to essentially recruit new members of online communities devoted to child sexual abuse, where links to illicit content in more private forums proliferate.
  • “Time and time again, we’ve seen recommendation algorithms drive users to discover and then spiral inside of these online child exploitation communities,” McDonald said, calling it disturbing that ads from major companies were subsidizing that process.
Javier E

Functional medicine: Is it the future of healthcare or just another wellness trend? - I...

  • Functional Medicine is the alternative medicine Bill Clinton credits with giving him his life back after his 2004 quadruple heart bypass surgery. Its ideology is embraced by Oprah and regularly features on Gwyneth Paltrow's Goop.
  • Developed in 1990 by Dr Jeffrey Bland, who in 1991 set up the Institute for Functional Medicine with his wife Susan, the field is today spearheaded by US best-selling author Dr Mark Hyman, adviser to the Clintons and co-director of the controversial Cleveland Clinic for Functional Medicine.
  • "Functional Medicine is not about a test or a supplement or a particular protocol," he adds. "It's really a new paradigm of disease and how it arises and how to restore health. Within it there are many approaches that are effective, it's not exclusive, it doesn't exclude traditional medications, it includes all modalities depending on what's right for that patient."
  • Functional Medicine isn't a protected title and a medical qualification isn't a prerequisite to practice. The result is an unregulated and disparate field, with medical doctors, nutritionists, naturopaths and homeopaths among the many practitioners.
  • Some other chronic illnesses the field claims to treat include heart disease, type 2 diabetes, irritable bowel syndrome, ulcerative colitis, depression, anxiety and arthritis
  • "[A]ll kinds of different reasons, some might have gluten issues, gut issues, others might have a deficiency causing neurological issues, MS is a symptom."
  • "There are components of Functional Medicine that absolutely lack an evidence base and there are practitioners of what they call Functional Medicine, they charge people for intravenous nutritional injections, they exaggerate claims, and that is professionally inappropriate, unethical and it lacks evidence.
  • On Dr Mark Hyman's view of MS he says, "there are a lot of terms put together there, all of which individually make a lot of sense, but put together in that way they do not.
  • "What does FM actually mean? It means nothing. It's a gift-gallop of words thrown together. It's criticised by advocates of evidence-based medicine because it's giving a veneer of scientific legitimacy to ideas that are considered pseudoscientific. For example, it'll take alternative medicine modalities like homeopathy and then call them 'bio-infusions' or something similar, rebranding it as something that works.
  • "It's a redundant name, real medicine is functional."
  • Next month the third annual Lifestyle and Functional Medicine conference will take place in Salthill, Galway on November 3. Last year's event was attended by more than 500 people and featured a keynote address by honorary consultant cardiologist Dr Aseem Malhotra, author of the bestselling The Pioppi Diet (which was named one of the top five worst celebrity diets to avoid in 2018 by the British Dietetic Association).
  • Dr David Robert Grimes is a physicist and visiting fellow at Oxford and QUB. His research into cancer focuses on modelling tumour metabolism and radiation interactions. For Dr Grimes, the lack of definition, or "double-speak" as he puts it, in FM is troubling.
  • As well as the cost of appointments, FM practitioners commonly charge extra for tests. An omega finger prick test is around €100. A vitamin D test can cost upwards of €60, full thyroid panel more than €150 and a gut function test €400. Prices vary between practitioners.
  • "If I, as a GP, engaged in some of these behaviours I would be struck off." Specifically? "If I was recommending treatments that lacked an evidence base, or if I was promoting diagnostic tests which are expensive and lack an evidence base.
  • GPs engage every year in ongoing continuous professional development, I spend my evenings and my weekends outside of working hours attending educational events, small-group learning, large-group learning, engaging in research. This is an accusation that was levelled at the profession 30 years ago and then it was correct, but the profession has caught up…
  • "Obviously promoting wellness and healthy diet is very welcome but going beyond that and stating that certain aspects of 'functional medicine' can lead to reduced inflammation or prevent cancer, we have to be very careful about those claims.
  • Often the outcome of such tests are seemingly 'benign' prescriptions of vitamins or cleanses. However, dietitian Orla Walsh stresses that even these can have potentially harmful effects, especially on "vulnerable" patients, if not prescribed judiciously.
  • FM has five basic principles. 1. We are all genetically and biochemically unique so it treats the individual, not the disease. 2. It's science-based. 3. The body is intelligent and has the capacity for self-regulation. 4. The body has the ability to heal and prevent nearly all the diseases of ageing. 5. Health is not just the absence of disease, but a state of immense vitality.
  • She began her Functional Medicine career while training as a medical doctor and now travels the world working with high-profile clients. Dr McHale charges €425 for an initial consultation and €175 for follow-up appointments. Straightforward lab tests are €250 to €750, for complex cases testing fees can be up to €2,000.
  • "The term [Functional Medicine] tends to be bandied around quite a bit. Other things people say, such as 'functional nutritionist', can be misleading as a term. Many people are Functional Medicine practitioners but don't have any real medical background at all... I think regulation is always probably the best way forward."
  • "There's an awful lot to it in terms of biochemistry and physiology," she says. "You do need to have a very solid and well ingrained bio-chemistry background. A solely clinical background doesn't equip you with the knowledge to read a test.
  • "Evidence-base is the cornerstone of medicine and that has to be maintained. It becomes problematic in this area because you are looking at personalised medicine and that can be very difficult to evidence-base."
  • GP Christine Ritter travelled from England to attend the Galway conference last year with a view to integrating Functional Medicine into her practice.
  • "It was very motivating," she says. "Where it wasn't perhaps as strong was to find the evidence. The Functional Medicine people would say, 'we've done this study and this trial and we've used this supplement that was successful', but they can't show massive research data which might make it difficult to bring it into the mainstream.
  • "I also know the rigorous standard of trials we have in medicine they're not usually that great either, it's often driven by who's behind the trial and who's paying for it.
  • "Every approach that empowers patient to work on their destiny [is beneficial], but you'd have to be mindful that you're not missing any serious conditions."
  • Dr Hyman is working to grow the evidence-base for Functional Medicine worldwide. "The future is looking very bright," he says. "At the Cleveland Centre we're establishing a research base, building educational platforms, fellowships, residency programmes, rotations. We're advancing the field that's spreading across the world. We're seeing in China the development of a programme of Functional Medicine, South Africa, the UK, in London the Cleveland Clinic will hopefully have a Functional Medicine centre."
  • For Dr Mark Murphy regulation is a moot point as it can only apply once the field meets the standards of evidence-based medicine.
  • "Despite well intentioned calls for regulation, complementary and alternative medical therapies cannot be regulated," he says. "Only therapies that possess an evidence-base can enter our standard regulatory processes, including the Irish Medical Council, the Health Products Regulatory Authority and Irish advertising standards. In situations where complementary and alternative therapies develop an evidence base, they are no longer 'complementary and alternative', but in effect they become part of mainstream 'Medicine'.
  • "There's a huge variation between therapists, some are brilliant and some are okay, and some are ludicrous snake oil salesmen."
  • He is so concerned that patients' health and wealth are being put at risk by alternative therapies that earlier this year he joined Fine Gael TD Kate O'Connell and the Irish Cancer Society in introducing draft legislation making it illegal to sell unproven treatments to cancer patients. Violators face jail and heavy fines.
  • Dr Grimes says criticism of variations in the standards of traditional medical research can be fair, however due to the weight of research it is ultimately self-correcting. He adds, "The reality is that good trials are transparent, independent and pre-registered.
  • "My involvement in shaping the Bill came from seeing first-hand the exploitation of patients and their families. Most patients undergoing treatment will take some alternative modalities in conjunction but a significant portion are talked out of their conventional medicine and seduced by false promises
Javier E

Common Core and the End of History | Alan Singer

  • On Monday October 20, 2014, the Regents, as part of their effort to promote new national Common Core standards and mystically prepare students for non-existing 21st century technological careers, voted unanimously that students did not have to pass both United States and Global History exams in order to graduate from high school and maintained that they were actually raising academic standards.
  • The Global History exam will also be modified so that students will only be tested on events after 1750, essentially eliminating topics like the early development of civilizations, ancient empires, the rise of universal religions, the Columbian Exchange, and trans-Atlantic Slave Trade from the test.
  • Mace reports his middle school students have no idea which were the original thirteen colonies, where they were located, or who were the founders and settlers. The students in his honors class report that all they studied in elementary school was English and math. Morning was math; afternoon was ELA. He added, "Teachers were worried that this would happen, and it has."
  • Students will be able to substitute a tech sequence and local test for one of the history exams; however, the Regents did not present, design, or even describe what the tech alternative will look like. Although it will be implemented immediately, the Regents left all the details completely up to local initiative.
  • Under the proposal, students can substitute career-focused courses in subjects such as carpentry, advertising or hospitality management for one of the two history Regents exams that are now required
  • In June 2010 the Regents eliminated 5th and 8th grade social studies, history, and geography assessments so teachers and schools could concentrate on preparing students for high-stakes Common Core standardized reading and math assessments.
  • As a result, social studies is no longer taught in the elementary school grades
  • Merryl Tisch, Chancellor of the State Board of Regents, described the change as an effort to "back-fill opportunities for students with different interests, with different opportunities, with different choice."
  • Mace describes his students as the "common core kids, inundated with common core, but they do not know the history of the United States." The cardinal rule of public education in the 21st Century seems to be that which gets tested is important and that which does not is dropped.
  • "By making state social studies exams optional, we have come to a point where our nation's own history has been marginalized in the classroom and, with it, the means to understand ourselves and the world around us. America's heritage is being eliminated as a requirement for graduation.
  • I am biased. I am a historian, a former social studies teacher, and I help to prepare the next generation of social studies teachers.
  • But these decisions by the Regents are politically motivated, lower graduation standards, and are outright dangerous.
  • The city is under a lot of pressure to support the revised and lower academic standards because in the next few weeks it is required to present plans to the state for turning around as many as 250 schools that are labeled as "failing."
  • Debate over the importance of teaching history and social studies is definitely not new. During World War I, many Americans worried that new immigrants did not understand and value the history and government of the United States so new high school classes and tests that developed into the current classes and tests were put in place.
  • The need to educate immigrants and to understand global issues like ISIS and Ebola remain pressing, but I guess not for New York State high school students. Right now, it looks like social studies advocates have lost the battle and we are finally witnessing the end of history.
Javier E

How to Get Your Mind to Read - The New York Times

  • Americans’ trouble with reading predates digital technologies. The problem is not bad reading habits engendered by smartphones, but bad education habits engendered by a misunderstanding of how the mind reads.
  • Just how bad is our reading problem? The last National Assessment of Adult Literacy from 2003 is a bit dated, but it offers a picture of Americans’ ability to read in everyday situations: using an almanac to find a particular fact, for example, or explaining the meaning of a metaphor used in a story. Of those who finished high school but did not continue their education, 13 percent could not perform simple tasks like these.
  • When things got more complex — in comparing two newspaper editorials with different interpretations of scientific evidence or examining a table to evaluate credit card offers — 95 percent failed.
  • ...17 more annotations...
  • poor readers can sound out words from print, so in that sense, they can read. Yet they are functionally illiterate — they comprehend very little of what they can sound out. So what does comprehension require? Broad vocabulary, obviously. Equally important, but more subtle, is the role played by factual knowledge.
  • All prose has factual gaps that must be filled by the reader.
  • Knowledge also provides context.
  • You might think, then, that authors should include all the information needed to understand what they write.
  • But those details would make prose long and tedious for readers who already know the information. “Write for your audience” means, in part, gambling on what they know.
  • students who score well on reading tests are those with broad knowledge; they usually know at least a little about the topics of the passages on the test.
  • One experiment tested 11th graders’ general knowledge with questions from science (“pneumonia affects which part of the body?”), history (“which American president resigned because of the Watergate scandal?”), as well as the arts, civics, geography, athletics and literature. Scores on this general knowledge test were highly associated with reading test scores.
  • Current education practices show that reading comprehension is misunderstood. It’s treated like a general skill that can be applied with equal success to all texts. Rather, comprehension is intimately intertwined with knowledge.
  • That suggests three significant changes in schooling.
  • First, it points to decreasing the time spent on literacy instruction in early grades.
  • Third-graders spend 56 percent of their time on literacy activities but 6 percent each on science and social studies. This disproportionate emphasis on literacy backfires in later grades, when children’s lack of subject matter knowledge impedes comprehension.
  • Another positive step would be to use high-information texts in early elementary grades. Historically, they have been light in content.
  • Second, understanding the importance of knowledge to reading ought to make us think differently about year-end standardized tests. If a child has studied New Zealand, she ought to be good at reading and thinking about passages on New Zealand. Why test her reading with a passage about spiders, or the Titanic?
  • Third, the systematic building of knowledge must be a priority in curriculum design.
  • The Common Core Standards for reading specify nearly nothing by way of content that children are supposed to know — the document valorizes reading skills. State officials should go beyond the Common Core Standards by writing content-rich grade-level standards
  • Don’t blame the internet, or smartphones, or fake news for Americans’ poor reading. Blame ignorance. Turning the tide will require profound changes in how reading is taught, in standardized testing and in school curriculums. Underlying all these changes must be a better understanding of how the mind comprehends what it reads.
  • Daniel T. Willingham (@DTWillingham) is a professor of psychology at the University of Virginia and the author, most recently, of “The Reading Mind: A Cognitive Approach to Understanding How the Mind Reads.”
Javier E

MacIntyre | Internet Encyclopedia of Philosophy - 0 views

  • For MacIntyre, “rationality” comprises all the intellectual resources, both formal and substantive, that we use to judge truth and falsity in propositions, and to determine choice-worthiness in courses of action
  • Rationality in this sense is not universal; it differs from community to community and from person to person, and may both develop and regress over the course of a person’s life or a community’s history.
  • So rationality itself, whether theoretical or practical, is a concept with a history: indeed, since there are also a diversity of traditions of enquiry, with histories, there are, so it will turn out, rationalities rather than rationality, just as it will also turn out that there are justices rather than justice
  • ...164 more annotations...
  • Rationality is the collection of theories, beliefs, principles, and facts that the human subject uses to judge the world, and a person’s rationality is, to a large extent, the product of that person’s education and moral formation.
  • To the extent that a person accepts what is handed down from the moral and intellectual traditions of her or his community in learning to judge truth and falsity, good and evil, that person’s rationality is “tradition-constituted.” Tradition-constituted rationality provides the schemata by which we interpret, understand, and judge the world we live in
  • The apparent problem of relativism in MacIntyre’s theory of rationality is much like the problem of relativism in the philosophy of science. Scientific claims develop within larger theoretical frameworks, so that the apparent truth of a scientific claim depends on one’s judgment of the larger framework. The resolution of the problem of relativism therefore appears to hang on the possibility of judging frameworks or rationalities, or judging between frameworks or rationalities from a position that does not presuppose the truth of the framework or rationality, but no such theoretical standpoint is humanly possible.
  • MacIntyre finds that the world itself provides the criterion for the testing of rationalities, and he finds that there is no criterion except the world itself that can stand as the measure of the truth of any philosophical theory.
  • MacIntyre’s philosophy is indebted to the philosophy of science, which recognizes the historicism of scientific enquiry even as it seeks a truthful understanding of the world. MacIntyre’s philosophy does not offer a priori certainty about any theory or principle; it examines the ways in which reflection upon experience supports, challenges, or falsifies theories that have appeared to be the best theories so far to the people who have accepted them so far. MacIntyre’s ideal enquirers remain Hamlets, not Emmas.
  • history shows us that individuals, communities, and even whole nations may commit themselves militantly over long periods of their histories to doctrines that their ideological adversaries find irrational. This qualified relativism of appearances has troublesome implications for anyone who believes that philosophical enquiry can easily provide certain knowledge of the world
  • According to MacIntyre, theories govern the ways that we interpret the world and no theory is ever more than “the best standards so far” (3RV, p. 65). Our theories always remain open to improvement, and when our theories change, the appearances of our world—the apparent truths of claims judged within those theoretical frameworks—change with them.
  • From the subjective standpoint of the human enquirer, MacIntyre finds that theories, concepts, and facts all have histories, and they are all liable to change—for better or for worse.
  • MacIntyre holds that the rationality of individuals is not only tradition-constituted, it is also tradition-constitutive, as individuals make their own contributions to their own rationality, and to the rationalities of their communities. Rationality is not fixed, within either the history of a community or the life of a person
  • The modern account of first principles justifies an approach to philosophy that rejects tradition. The modern liberal individualist approach is anti-traditional. It denies that our understanding is tradition-constituted and it denies that different cultures may differ in their standards of rationality and justice:
  • Modernity does not see tradition as the key that unlocks moral and political understanding, but as a superfluous accumulation of opinions that tend to prejudice moral and political reasoning.
  • Although modernity rejects tradition as a method of moral and political enquiry, MacIntyre finds that it nevertheless bears all the characteristics of a moral and political tradition.
  • If historical narratives are only projections of the interests of historians, then it is difficult to see how this historical narrative can claim to be truthful
  • For these post-modern theorists, “if the Enlightenment conceptions of truth and rationality cannot be sustained,” either relativism or perspectivism “is the only possible alternative” (p. 353). MacIntyre rejects both challenges by developing his theory of tradition-constituted and tradition-constitutive rationality on pp. 354-369
  • How, then, is one to settle challenges between two traditions? It depends on whether the adherents of either take the challenges of the other tradition seriously. It depends on whether the adherents of either tradition, on seeing a failure in their own tradition are willing to consider an answer offered by their rival (p. 355)
  • how a person with no traditional affiliation is to deal with the conflicting claims of rival traditions: “The initial answer is: that will depend upon who you are and how you understand yourself. This is not the kind of answer which we have been educated to expect in philosophy”
  • MacIntyre focuses the critique of modernity on the question of rational justification. Modern epistemology stands or falls on the possibility of Cartesian epistemological first principles. MacIntyre’s history exposes that notion of first principle as a fiction, and at the same time demonstrates that rational enquiry advances (or declines) only through tradition
  • MacIntyre cites Foucault’s 1966 book, Les Mots et les choses (The Order of Things, 1970) as an example of the self-subverting character of Genealogical enquiry
  • Foucault’s book reduces history to a procession of “incommensurable ordered schemes of classification and representation” none of which has any greater claim to truth than any other, yet this book “is itself organized as a scheme of classification and representation.”
  • From MacIntyre’s perspective, there is no question of deciding whether or not to work within a tradition; everyone who struggles with practical, moral, and political questions simply does. “There is no standing ground, no place for enquiry . . . apart from that which is provided by some particular tradition or other”
  • Three Rival Versions of Moral Enquiry (1990). The central idea of the Gifford Lectures is that philosophers make progress by addressing the shortcomings of traditional narratives about the world, shortcomings that become visible either through the failure of traditional narratives to make sense of experience, or through the introduction of contradictory narratives that prove impossible to dismiss
  • MacIntyre compares three traditions exemplified by three literary works published near the end of Adam Gifford’s life (1820–1887)
  • The Ninth Edition of the Encyclopaedia Britannica (1875–1889) represents the modern tradition of trying to understand the world objectively without the influence of tradition.
  • The Genealogy of Morals (1887), by Friedrich Nietzsche embodies the post-modern tradition of interpreting all traditions as arbitrary impositions of power.
  • The encyclical letter Aeterni Patris (1879) of Pope Leo XIII exemplifies the approach of acknowledging one’s predecessors within one’s own tradition of enquiry and working to advance or improve that tradition in the pursuit of objective truth. 
  • Of the three versions of moral enquiry treated in 3RV, only tradition, exemplified in 3RV by the Aristotelian, Thomistic tradition, understands itself as a tradition that looks backward to predecessors in order to understand present questions and move forward
  • Encyclopaedia obscures the role of tradition by presenting the most current conclusions and convictions of a tradition as if they had no history, and as if they represented the final discovery of unalterable truth
  • Encyclopaedists focus on the present and ignore the past.
  • Genealogists, on the other hand, focus on the past in order to undermine the claims of the present.
  • In short, Genealogy denies the teleology of human enquiry by denying (1) that historical enquiry has been fruitful, (2) that the enquiring person has a real identity, and (3) that enquiry has a real goal. MacIntyre finds this mode of enquiry incoherent.
  • Genealogy is self-deceiving insofar as it ignores the traditional and teleological character of its enquiry.
  • Genealogical moral enquiry must make similar exceptions to its treatments of the unity of the enquiring subject and the teleology of moral enquiry; thus “it seems to be the case that the intelligibility of genealogy requires beliefs and allegiances of a kind precluded by the genealogical stance” (3RV, p. 54-55)
  • MacIntyre uses Thomism because it applies the traditional mode of enquiry in a self-conscious manner. Thomistic students learn the work of philosophical enquiry as apprentices in a craft (3RV, p. 61), and maintain the principles of the tradition in their work to extend the understanding of the tradition, even as they remain open to the criticism of those principles.
  • 3RV uses Thomism as its example of tradition, but this use should not suggest that MacIntyre identifies “tradition” with Thomism or Thomism-as-a-name-for-the-Western-tradition. As noted above, WJWR distinguished four traditions of enquiry within the Western European world alone
  • MacIntyre’s emphasis on the temporality of rationality in traditional enquiry makes tradition incompatible with the epistemological projects of modern philosophy
  • Tradition is not merely conservative; it remains open to improvement,
  • Tradition differs from both encyclopaedia and genealogy in the way it understands the place of its theories in the history of human enquiry. The adherent of a tradition must understand that “the rationality of a craft is justified by its history so far,” thus it “is inseparable from the tradition through which it was achieved”
  • MacIntyre uses Thomas Aquinas to illustrate the revolutionary potential of traditional enquiry. Thomas was educated in Augustinian theology and Aristotelian philosophy, and through this education he began to see not only the contradictions between the two traditions, but also the strengths and weaknesses that each tradition revealed in the other. His education also helped him to discover a host of questions and problems that had to be answered and solved. Many of Thomas Aquinas’ responses to these concerns took the form of disputed questions. “Yet to each question the answer produced by Aquinas as a conclusion is no more than and, given Aquinas’s method, cannot but be no more than, the best answer reached so far. And hence derives the essential incompleteness”
  • argue that the virtues are essential to the practice of independent practical reason. The book is relentlessly practical; its arguments appeal only to experience and to purposes, and to the logic of practical reasoning.
  • Like other intelligent animals, human beings enter life vulnerable, weak, untrained, and unknowing, and face the likelihood of infirmity in sickness and in old age. Like other social animals, humans flourish in groups. We learn to regulate our passions, and to act effectively alone and in concert with others through an education provided within a community. MacIntyre’s position allows him to look to the animal world to find analogies to the role of social relationships in the moral formation of human beings
  • The task for the human child is to make “the transition from the infantile exercise of animal intelligence to the exercise of independent practical reasoning” (DRA, p. 87). For a child to make this transition is “to redirect and transform her or his desires, and subsequently to direct them consistently towards the goods of different stages of her or his life” (DRA, p. 87). The development of independent practical reason in the human agent requires the moral virtues in at least three ways.
  • DRA presents moral knowledge as a “knowing how,” rather than as a “knowing that.” Knowledge of moral rules is not sufficient for a moral life; prudence is required to enable the agent to apply the rules well.
  • “Knowing how to act virtuously always involves more than rule-following” (DRA, p. 93). The prudent person can judge what must be done in the absence of a rule and can also judge when general norms cannot be applied to particular cases.
  • Flourishing as an independent practical reasoner requires the virtues in a second way, simply because sometimes we need our friends to tell us who we really are. Independent practical reasoning also requires self-knowledge, but self-knowledge is impossible without the input of others whose judgment provides a reliable touchstone to test our beliefs about ourselves. Self-knowledge therefore requires the virtues that enable an agent to sustain formative relationships and to accept the criticism of trusted friends
  • Human flourishing requires the virtues in a third way, by making it possible to participate in social and political action. They enable us to “protect ourselves and others against neglect, defective sympathies, stupidity, acquisitiveness, and malice” (DRA, p. 98) by enabling us to form and sustain social relationships through which we may care for one another in our infirmities, and pursue common goods with and for the other members of our societies.
  • MacIntyre argues that it is impossible to find an external standpoint, because rational enquiry is an essentially social work (DRA, p. 156-7). Because it is social, shared rational enquiry requires moral commitment to, and practice of, the virtues to prevent the more complacent members of communities from closing off critical reflection upon “shared politically effective beliefs and concepts”
  • MacIntyre finds himself compelled to answer what may be called the question of moral provincialism: If one is to seek the truth about morality and justice, it seems necessary to “find a standpoint that is sufficiently external to the evaluative attitudes and practices that are to be put to the question.” If it is impossible for the agent to take such an external standpoint, if the agent’s commitments preclude radical criticism of the virtues of the community, does that leave the agent “a prisoner of shared prejudices” (DRA, p. 154)?
  • The book moves from MacIntyre’s assessment of human needs for the virtues to the political implications of that assessment. Social and political institutions that form and enable independent practical reasoning must “satisfy three conditions.” (1) They must enable their members to participate in shared deliberations about the communities’ actions. (2) They must establish norms of justice “consistent with exercise of” the virtue of justice. (3) They must enable the strong “to stand proxy” as advocates for the needs of the weak and the disabled.
  • The social and political institutions that MacIntyre recommends cannot be identified with the modern nation state or the modern nuclear family
  • The political structures necessary for human flourishing are essentially local
  • Yet local communities support human flourishing only when they actively support “the virtues of just generosity and shared deliberation”
  • MacIntyre rejects individualism and insists that we view human beings as members of communities who bear specific debts and responsibilities because of our social identities. The responsibilities one may inherit as a member of a community include debts to one’s forbearers that one can only repay to people in the present and future
  • The constructive argument of the second half of the book begins with traditional accounts of the excellences or virtues of practical reasoning and practical rationality rather than virtues of moral reasoning or morality. These traditional accounts define virtue as arête, as excellence
  • Practices are supported by institutions like chess clubs, hospitals, universities, industrial corporations, sports leagues, and political organizations.
  • Practices exist in tension with these institutions, since the institutions tend to be oriented to goods external to practices. Universities, hospitals, and scholarly societies may value prestige, profitability, or relations with political interest groups above excellence in the practices they are said to support.
  • Personal desires and institutional pressures to pursue external goods may threaten to derail practitioners’ pursuits of the goods internal to practices. MacIntyre defines virtue initially as the quality of character that enables an agent to overcome these temptations:
  • “A virtue is an acquired human quality the possession and exercise of which tends to enable us to achieve those goods which are internal to practices
  • Excellence as a human agent cannot be reduced to excellence in a particular practice (See AV, pp. 204–
  • The virtues therefore are to be understood as those dispositions which will not only sustain practices and enable us to achieve the goods internal to practices, but which will also sustain us in the relevant kind of quest for the good, by enabling us to overcome the harms, dangers, temptations, and distractions which we encounter, and which will furnish us with increasing self-knowledge and increasing knowledge of the good (AV, p. 219).
  • The excellent human agent has the moral qualities to seek what is good and best both in practices and in life as a whole.
  • The virtues find their point and purpose not only in sustaining those relationships necessary if the variety of goods internal to practices are to be achieved and not only in sustaining the form of an individual life in which that individual may seek out his or her good as the good of his or her whole life, but also in sustaining those traditions which provide both practices and individual lives with their necessary historical context (AV, p. 223)
  • Since “goods, and with them the only grounds for the authority of laws and virtues, can only be discovered by entering into those relationships which constitute communities whose central bond is a shared vision of and understanding of goods” (AV, p. 258), any hope for the transformation and renewal of society depends on the development and maintenance of such communities.
  • MacIntyre’s Aristotelian approach to ethics as a study of human action distinguishes him from post-Kantian moral philosophers who approach ethics as a means of determining the demands of objective, impersonal, universal morality
  • This modern approach may be described as moral epistemology. Modern moral philosophy pretends to free the individual to determine for her- or himself what she or he must do in a given situation, irrespective of her or his own desires; it pretends to give knowledge of universal moral laws
  • Aristotelian metaphysicians, particularly Thomists who define virtue in terms of the perfection of nature, rejected MacIntyre’s contention that an adequate Aristotelian account of virtue as excellence in practical reasoning and human action need not appeal to Aristotelian metaphysic
  • one group of critics rejects MacIntyre’s Aristotelianism because they hold that any Aristotelian account of the virtues must first account for the truth about virtue in terms of Aristotle’s philosophy of nature, which MacIntyre had dismissed in AV as “metaphysical biology”
  • Many of those who rejected MacIntyre’s turn to Aristotle define “virtue” primarily along moral lines, as obedience to law or adherence to some kind of natural norm. For these critics, “virtuous” appears synonymous with “morally correct;” their resistance to MacIntyre’s appeal to virtue stems from their difficulties either with what they take to be the shortcomings of MacIntyre’s account of moral correctness or with the notion of moral correctness altogether
  • MacIntyre continues to argue from the experience of practical reasoning to the demands of moral education.
  • Descartes and his successors, by contrast, along with certain “notable Thomists of the last hundred years” (p. 175), have proposed that philosophy begins from knowledge of some “set of necessarily true first principles which any truly rational person is able to evaluate as true” (p. 175). Thus for the moderns, philosophy is a technical rather than moral endeavor
  • MacIntyre distinguishes two related challenges to his position, the “relativist challenge” and the “perspectivist challenge.” These two challenges both acknowledge that the goals of the Enlightenment cannot be met and that, “the only available standards of rationality are those made available by and within traditions” (p. 252); they conclude that nothing can be known to be true or false
  • MacIntyre follows the progress of the Western tradition through “three distinct traditions:” from Homer and Aristotle to Thomas Aquinas, from Augustine to Thomas Aquinas and from Augustine through Calvin to Hume
  • Chapter 17 examines the modern liberal denial of tradition, and the ironic transformation of liberalism into the fourth tradition to be treated in the book.
  • MacIntyre credits John Stuart Mill and Thomas Aquinas as “two philosophers of the kind who by their writing send us beyond philosophy into immediate encounter with the ends of life
  • First, both were engaged by questions about the ends of life as questioning human beings and not just as philosophers. . . .
  • Secondly, both Mill and Aquinas understood their speaking and writing as contributing to an ongoing philosophical conversation. . . .
  • Thirdly, it matters that both the end of the conversation and the good of those who participate in it is truth and that the nature of truth, of good, of rational justification, and of meaning therefore have to be central topics of that conversation (Tasks, pp. 130-1).
  • Without these three characteristics, philosophy is first reduced to “the exercise of a set of analytic and argumentative skills. . . . Secondly, philosophy may thereby become a diversion from asking questions about the ends of life with any seriousness”
  • Neither Rosenzweig nor Lukács made philosophical progress because both failed to relate “their questions about the ends of life to the ends of their philosophical writing”
  • First, any adequate philosophical history or biography must determine whether the authors studied remain engaged with the questions that philosophy studies, or set the questions aside in favor of the answers. Second, any adequate philosophical history or biography must determine whether the authors studied insulated themselves from contact with conflicting worldviews or remained open to learning from every available philosophical approach. Third, any adequate philosophical history or biography must place the authors studied into a broader context that shows what traditions they come from and “whose projects” they are “carrying forward
  • MacIntyre’s recognition of the connection between an author’s pursuit of the ends of life and the same author’s work as a philosophical writer prompts him to finish the essay by demanding three things of philosophical historians and biographers
  • Philosophy is not just a study; it is a practice. Excellence in this practice demands that an author bring her or his struggles with the questions of the ends of philosophy into dialogue with historic and contemporary texts and authors in the hope of making progress in answering those questions
  • MacIntyre defends Thomistic realism as rational enquiry directed to the discovery of truth.
  • The three Thomistic essays in this book challenge those caricatures by presenting Thomism in a way that people outside of contemporary Thomistic scholarship may find surprisingly flexible and open
  • To be a moral agent, (1) one must understand one’s individual identity as transcending all the roles that one fills; (2) one must see oneself as a practically rational individual who can judge and reject unjust social standards; and (3) one must understand oneself as “as accountable to others in respect of the human virtues and not just in respect of [one’s] role-performances
  • J is guilty because he complacently accepted social structures that he should have questioned, structures that undermined his moral agency. This essay shows that MacIntyre’s ethics of human agency is not just a descriptive narrative about the manner of moral education; it is a standard laden account of the demands of moral agency.
  • MacIntyre considers “the case of J” (J, for jemand, the German word for “someone”), a train controller who learned, as a standard for his social role, to take no interest in what his trains carried, even during war time when they carried “munitions and . . . Jews on their way to extermination camps”
  • J had learned to do his work for the railroad according to one set of standards and to live other parts of his life according to other standards, so that this compliant participant in “the final solution” could contend, “You cannot charge me with moral failure” (E&P, p. 187).
  • The epistemological theories of Modern moral philosophy were supposed to provide rational justification for rules, policies, and practical determinations according to abstract universal standards, but MacIntyre has dismissed those theories
  • Modern metaethics is supposed to enable its practitioners to step away from the conflicting demands of contending moral traditions and to judge those conflicts from a neutral position, but MacIntyre has rejected this project as well
  • In his ethical writings, MacIntyre seeks only to understand how to liberate the human agent from blindness and stupidity, to prepare the human agent to recognize what is good and best to do in the concrete circumstances of that agent’s own life, and to strengthen the agent to follow through on that judgment.
  • In his political writings, MacIntyre investigates the role of communities in the formation of effective rational agents, and the impact of political institutions on the lives of communities. This kind of ethics and politics is appropriately named the ethics of human agency.
  • The purpose of the modern moral philosophy of authors like Kant and Mill was to determine, rationally and universally, what kinds of behavior ought to be performed—not in terms of the agent’s desires or goals, but in terms of universal, rational duties. Those theories purported to let agents know what they ought to do by providing knowledge of duties and obligations, thus they could be described as theories of moral epistemology.
  • Contemporary virtue ethics purports to let agents know what qualities human beings ought to have, and the reasons that we ought to have them, not in terms of our fitness for human agency, but in the same universal, disinterested, non-teleological terms that it inherits from Kant and Mill.
  • For MacIntyre, moral knowledge remains a “knowing how” rather than a “knowing that;” MacIntyre seeks to identify those moral and intellectual excellences that make human beings more effective in our pursuit of the human good.
  • MacIntyre’s purpose in his ethics of human agency is to consider what it means to seek one’s good, what it takes to pursue one’s good, and what kind of a person one must become if one wants to pursue that good effectively as a human agent.
  • As a philosophy of human agency, MacIntyre’s work belongs to the traditions of Aristotle and Thomas Aquinas.
  • in keeping with the insight of Marx’s third thesis on Feuerbach, it maintained the common condition of theorists and people as peers in the pursuit of the good life.
  • He holds that the human good plays a role in our practical reasoning whether we recognize it or not, so that some people may do well without understanding why (E&P, p. 25). He also reads Aristotle as teaching that knowledge of the good can make us better agents
  • AV defines virtue in terms of the practical requirements for excellence in human agency, in an agent’s participation in practices (AV, ch. 14), in an agent’s whole life, and in an agent’s involvement in the life of her or his community
  • MacIntyre’s Aristotelian concept of “human action” opposes the notion of “human behavior” that prevailed among mid-twentieth-century determinist social scientists. Human actions, as MacIntyre understands them, are acts freely chosen by human agents in order to accomplish goals that those agents pursue
  • Human behavior, according to mid-twentieth-century determinist social scientists, is the outward activity of a subject, which is said to be caused entirely by environmental influences beyond the control of the subject.
  • Rejecting crude determinism in social science, and approaches to government and public policy rooted in determinism, MacIntyre sees the renewal of human agency and the liberation of the human agent as central goals for ethics and politics.
  • MacIntyre’s Aristotelian account of “human action” examines the habits that an agent must develop in order to judge and act most effectively in the pursuit of truly choice-worthy ends
  • MacIntyre seeks to understand what it takes for the human person to become the kind of agent who has the practical wisdom to recognize what is good and best to do and the moral freedom to act on her or his best judgment.
  • MacIntyre rejected the determinism of modern social science early in his career (“Determinism,” 1957), yet he recognizes that the ability to judge well and act freely is not simply given; excellence in judgment and action must be developed, and it is the task of moral philosophy to discover how these excellences or virtues of the human agent are established, maintained, and strengthened
  • MacIntyre’s Aristotelian philosophy investigates the conditions that support free and deliberate human action in order to propose a path to the liberation of the human agent through participation in the life of a political community that seeks its common goods through the shared deliberation and action of its members
  • As a classics major at Queen Mary College in the University of London (1945-1949), MacIntyre read the Greek texts of Plato and Aristotle, but his studies were not limited to the grammars of ancient languages. He also examined the ethical theories of Immanuel Kant and John Stuart Mill. He attended the lectures of analytic philosopher A. J. Ayer and of philosopher of science Karl Popper. He read Ludwig Wittgenstein’s Tractatus Logico-Philosophicus, Jean-Paul Sartre’s L'existentialisme est un humanisme, and Marx’s Eighteenth Brumaire of Napoleon Bonaparte (What happened, pp. 17-18). MacIntyre met the sociologist Franz Steiner, who helped direct him toward approaching moralities substantively
  • Alasdair MacIntyre’s philosophy builds on an unusual foundation. His early life was shaped by two conflicting systems of values. One was “a Gaelic oral culture of farmers and fishermen, poets and storytellers.” The other was modernity: “The modern world was a culture of theories rather than stories” (MacIntyre Reader, p. 255). MacIntyre embraced both value systems
  • From Marxism, MacIntyre learned to see liberalism as a destructive ideology that undermines communities in the name of individual liberty and consequently undermines the moral formation of human agents
  • For MacIntyre, Marx’s way of seeing through the empty justifications of arbitrary choices to consider the real goals and consequences of political actions in economic and social terms would remain the principal insight of Marxism
  • After his retirement from teaching, MacIntyre has continued his work of promoting a renewal of human agency through an examination of the virtues demanded by practices, integrated human lives, and responsible engagement with community life. He is currently affiliated with the Centre for Contemporary Aristotelian Studies in Ethics and Politics (CASEP) at London Metropolitan University.
  • The second half of AV proposes a conception of practice and practical reasoning and the notion of excellence as a human agent as an alternative to modern moral philosophy
  • AV rejects the view of “modern liberal individualism” in which autonomous individuals use abstract moral principles to determine what they ought to do. The critique of modern normative ethics in the first half of AV rejects modern moral reasoning for its failure to justify its premises, and criticizes the frequent use of the rhetoric of objective morality and scientific necessity to manipulate people to accept arbitrary decisions
  • MacIntyre uses “modern liberal individualism” to name a much broader category that includes both liberals and conservatives in contemporary American political parlance, as well as some Marxists and anarchists (See ASIA, pp. 280-284). Conservatism, liberalism, Marxism, and anarchism all present the autonomous individual as the unit of civil society
  • The sources of modern liberal individualism—Hobbes, Locke, and Rousseau—assert that human life is solitary by nature and social by habituation and convention. MacIntyre’s Aristotelian tradition holds, on the contrary, that human life is social by nature.
  • MacIntyre identifies moral excellence with effective human agency, and seeks a political environment that will help to liberate human agents to recognize and seek their own goods, as components of the common goods of their communities, more effectively. For MacIntyre therefore, ethics and politics are bound together.
  • For MacIntyre ethics is not an application of principles to facts, but a study of moral action. Moral action, free human action, involves decisions to do things in pursuit of goals, and it involves the understanding of the implications of one’s actions for the whole variety of goals that human agents seek
  • In this sense, “To act morally is to know how to act” (SMJ, p. 56). “Morality is not a ‘knowing that’ but a ‘knowing how’”
  • If human action is a ‘knowing how,’ then ethics must also consider how one learns ‘how.’ Like other forms of ‘knowing how,’ MacIntyre finds that one learns how to act morally within a community whose language and shared standards shape our judgment
  • MacIntyre had concluded that ethics is not an abstract exercise in the assessment of facts; it is a study of free human action and of the conditions that enable rational human agency.
  • MacIntyre gives Marx credit for concluding, in the third of the Theses on Feuerbach, that the only way to change society is to change ourselves, and that “The coincidence of the changing of circumstances and of human activity or self-changing can be comprehended and rationally understood only as revolutionary practice”
  • MacIntyre distinguishes “religion which is an opiate for the people from religion which is not” (MI, p. 83). He condemns forms of religion that justify social inequities and encourage passivity. He argues that authentic Christian teaching criticizes social structures and encourages action
  • Where “moral philosophy textbooks” discuss the kinds of maxims that should guide “promise-keeping, truth-telling, and the like,” moral maxims do not guide real agents in real life at all. “They do not guide us because we do not need to be guided. We know what to do” (ASIA, p. 106). Sometimes we do this without any maxims at all, or even against all the maxims we know. MacIntyre illustrates his point with Huckleberry Finn’s decision to help Jim, Miss Watson’s escaped slave, to make his way to freedom
  • MacIntyre develops the ideas that morality emerges from history, and that morality organizes the common life of a community
  • The book concludes that the concepts of morality are neither timeless nor ahistorical, and that understanding the historical development of ethical concepts can liberate us “from any false absolutist claims” (SHE, p. 269). Yet this conclusion need not imply that morality is essentially arbitrary or that one could achieve freedom by liberating oneself from the morality of one’s society.
  • From this “Aristotelian point of view,” “modern morality” begins to go awry when moral norms are separated from the pursuit of human goods and moral behavior is treated as an end in itself. This separation characterizes Christian divine command ethics since the fourteenth century and has remained essential to secularized modern morality since the eighteenth century
  • From MacIntyre’s “Aristotelian point of view,” the autonomy granted to the human agent by modern moral philosophy breaks down natural human communities and isolates the individual from the kinds of formative relationships that are necessary to shape the agent into an independent practical reasoner.
  • the 1977 essay “Epistemological Crises, Dramatic Narrative, and the Philosophy of Science” (hereafter EC). This essay, MacIntyre reports, “marks a major turning-point in my thought in the 1970s” (The Tasks of Philosophy, p. vii). EC may be described fairly as MacIntyre’s discourse on method
  • First, philosophy makes progress through the resolution of problems. These problems arise when the theories, histories, doctrines and other narratives that help us to organize our experience of the world fail us, leaving us in “epistemological crises.” Epistemological crises are the aftermath of events that undermine the ways that we interpret our world
  • it presents three general points on the method for philosophy.
  • To live in an epistemological crisis is to be aware that one does not know what one thought one knew about some particular subject and to be anxious to recover certainty about that subject.
  • To resolve an epistemological crisis it is not enough to impose some new way of interpreting our experience; we also need to understand why we were wrong before: “When an epistemological crisis is resolved, it is by the construction of a new narrative which enables the agent to understand both how he or she could intelligibly have held his or her original beliefs and how he or she could have been so drastically misled by them.”
  • MacIntyre notes, “Philosophers have customarily been Emmas and not Hamlets” (p. 6); that is, philosophers have treated their conclusions as accomplished truths, rather than as “more adequate narratives” (p. 7) that remain open to further improvement.
  • To illustrate his position on the open-endedness of enquiry, MacIntyre compares the title characters of Shakespeare’s Hamlet and Jane Austen’s Emma. When Emma finds that she is deeply misled in her beliefs about the other characters in her story, Mr. Knightley helps her to learn the truth and the story comes to a happy ending (p. 6). Hamlet, by contrast, finds no pat answers to his questions; rival interpretations remain throughout the play, so that directors who would stage the play have to impose their own interpretations on the script
  • Another approach to education is the method of Descartes, who begins by rejecting everything that is not clearly and distinctly true as unreliable and false in order to rebuild his understanding of the world on a foundation of undeniable truth.
  • Descartes presents himself as willfully rejecting everything he had believed, and ignores his obvious debts to the Scholastic tradition, even as he argues his case in French and Latin. For MacIntyre, seeking epistemological certainty through universal doubt as a precondition for enquiry is a mistake: “it is an invitation not to philosophy but to mental breakdown, or rather to philosophy as a means of mental breakdown.”
  • MacIntyre contrasts Descartes’ descent into mythical isolation with Galileo, who was able to make progress in astronomy and physics by struggling with the apparently insoluble questions of late medieval astronomy and physics, and radically reinterpreting the issues that constituted those questions
  • To make progress in philosophy one must sort through the narratives that inform one’s understanding, struggle with the questions that those narratives raise, and on occasion, reject, replace, or reinterpret portions of those narratives and propose those changes to the rest of one’s community for assessment. Human enquiry is always situated within the history and life of a community.
  • The third point of EC is that we can learn about progress in philosophy from the philosophy of science
  • Kuhn’s “paradigm shifts,” however, are unlike MacIntyre’s resolutions of epistemological crises in two ways.
  • First, they are not rational responses to specific problems. Kuhn compares paradigm shifts to religious conversions (pp. 150, 151, 158), stressing that they are not guided by rational norms, and he claims that the “mopping up” phase of a paradigm shift is a matter of convention in the training of new scientists and attrition among the holdouts of the previous paradigm
  • Second, the new paradigm is treated as a closed system of belief that regulates a new period of “normal science”; Kuhn’s revolutionary scientists are Emmas, not Hamlets
  • MacIntyre proposes elements of Imre Lakatos’ philosophy of science as correctives to Kuhn’s. While Lakatos has his own shortcomings, his general account of the methodologies of scientific research programs recognizes the role of reason in the transitions between theories and between research programs (Lakatos’ analog to Kuhn’s paradigms or disciplinary matrices). Lakatos presents science as an open-ended enquiry, in which every theory may eventually be replaced by more adequate theories. For Lakatos, unlike Kuhn, rational scientific progress occurs when a new theory can account both for the apparent promise and for the actual failure of the theory it replaces.
  • The third conclusion of MacIntyre’s essay is that decisions to support some theories over others may be justified rationally to the extent that those theories allow us to understand our experience and our history, including the history of the failures of inadequate theories
  • For Aristotle, moral philosophy is a study of practical reasoning, and the excellences or virtues that Aristotle recommends in the Nicomachean Ethics are the intellectual and moral excellences that make a moral agent effective as an independent practical reasoner.
  • MacIntyre also finds that the contending parties have little interest in the rational justification of the principles they use. The language of moral philosophy has become a kind of moral rhetoric to be used to manipulate others in defense of the arbitrary choices of its users
  • examining the current condition of secular moral and political discourse. MacIntyre finds contending parties defending their decisions by appealing to abstract moral principles, but he finds their appeals eclectic, inconsistent, and incoherent.
  • The secular moral philosophers of the eighteenth and nineteenth centuries shared strong and extensive agreements about the content of morality (AV, p. 51) and believed that their moral philosophy could justify the demands of their morality rationally, free from religious authority.
  • MacIntyre traces the lineage of the culture of emotivism to the secularized Protestant cultures of northern Europe
  • Modern moral philosophy had thus set for itself an incoherent goal. It was to vindicate both the moral autonomy of the individual and the objectivity, necessity, and categorical character of the rules of morality
  • MacIntyre turns to an apparent alternative, the pragmatic expertise of professional managers. Managers are expected to appeal to the facts to make their decisions on the objective basis of effectiveness, and their authority to do this is based on their knowledge of the social sciences
  • An examination of the social sciences reveals, however, that many of the facts to which managers appeal depend on sociological theories that lack scientific status. Thus, the predictions and demands of bureaucratic managers are no less liable to ideological manipulation than the determinations of modern moral philosophers.
  • Modern moral philosophy separates moral reasoning about duties and obligations from practical reasoning about ends and practical deliberation about the means to one’s ends, and in doing so it separates morality from practice.
  • Many Europeans also lost the practical justifications for their moral norms as they approached modernity; for these Europeans, claiming that certain practices are “immoral,” and invoking Kant’s categorical imperative or Mill’s principle of utility to explain why those practices are immoral, seems no more adequate than the Polynesian appeal to taboo.
  • MacIntyre sifts these definitions and then gives his own definition of virtue, as excellence in human agency, in terms of practices, whole human lives, and traditions in chapters 14 and 15 of AV.
  • In the most often quoted sentence of AV, MacIntyre defines a practice as (1) a complex social activity that (2) enables participants to gain goods internal to the practice. (3) Participants achieve excellence in practices by gaining the internal goods. When participants achieve excellence, (4) the social understandings of excellence in the practice, of the goods of the practice, and of the possibility of achieving excellence in the practice “are systematically extended”
  • Practices, like chess, medicine, architecture, mechanical engineering, football, or politics, offer their practitioners a variety of goods both internal and external to these practices. The goods internal to practices include forms of understanding or physical abilities that can be acquired only by pursuing excellence in the associated practice
  • Goods external to practices include wealth, fame, prestige, and power; there are many ways to gain these external goods. They can be earned or purchased, either honestly or through deception; thus the pursuit of these external goods may conflict with the pursuit of the goods internal to practices.
  • An intelligent child is given the opportunity to win candy by learning to play chess. As long as the child plays chess only to win candy, he has every reason to cheat if by doing so he can win more candy. If the child begins to desire and pursue the goods internal to chess, however, cheating becomes irrational, because it is impossible to gain the goods internal to chess or any other practice except through an honest pursuit of excellence. Goods external to practices may nevertheless remain tempting to the practitioner.
  • Since MacIntyre finds social identity necessary for the individual, MacIntyre’s definition of the excellence or virtue of the human agent needs a social dimension:
  • These responsibilities also include debts incurred by the unjust actions of one’s predecessors.
  • The enslavement and oppression of black Americans, the subjugation of Ireland, and the genocide of the Jews in Europe remained quite relevant to the responsibilities of citizens of the United States, England, and Germany in 1981, as they still do today.
  • Thus an American who said “I never owned any slaves,” “the Englishman who says ‘I never did any wrong to Ireland,’” or “the young German who believes that being born after 1945 means that what Nazis did to Jews has no moral relevance to his relationship to his Jewish contemporaries” all exhibit a kind of intellectual and moral failure.
  • “I am born with a past, and to cut myself off from that past in the individualist mode, is to deform my present relationships” (p. 221).  For MacIntyre, there is no moral identity for the abstract individual; “The self has to find its moral identity in and through its membership in communities” (p. 221).
Javier E

It's Time for a Real Code of Ethics in Teaching - Noah Berlatsky - The Atlantic - 3 views

  • [Photo: A defendant in the Atlanta Public Schools case turns herself in at the Fulton County Jail on April 2. (David Goldman/AP)] Earlier this week at The Atlantic, Emily Richmond asked whether high-stakes testing caused the Atlanta schools cheating scandal. The answer, I would argue, is yes... just not in the way you might think. Tests don't cause unethical behavior. But they did cause the Atlanta cheating scandal, and they are doing damage to the teaching profession. The argument that tests do not cause unethical behavior is fairly straightforward, and has been articulated by a number of writers. Jonathan Chait quite correctly points out that unethical behavior occurs in virtually all professions -- and that it occurs particularly when there are clear incentives to succeed. Incentivizing any field increases the impetus to cheat. Suppose journalism worked the way teaching traditionally had. You get hired at a newspaper, and your advancement and pay are dictated almost entirely by your years on the job, with almost no chance of either becoming a star or of getting fired for incompetence. Then imagine journalists changed that and instituted the current system, where you can get really successful if your bosses like you or be fired if they don't. You could look around and see scandal after scandal -- phone hacking! Jayson Blair! NBC's exploding truck! Janet Cooke! Stephen Glass! -- that could plausibly be attributed to this frightening new world in which journalists had an incentive to cheat in order to get ahead. The same holds true of any field. If Major League Baseball instituted tenure, and maybe used tee-ball rules where you can't keep score and everybody gets a chance to hit, it could stamp out steroid use. Students have been cheating on tests forever -- massive, systematic cheating. Why? Because they have an incentive to do well.
Give teachers and administrators an incentive for their students to do well, and more of them will cheat. For Chait, then, teaching has just been made more like journalism or baseball; it has gone from an incentiveless occupation to one with incentives.
  • Chait refers to violations of journalistic ethics -- like the phone-hacking scandal -- and suggests they are analogous to Major League steroid use, and that both are similar to teachers (or students) cheating on tests. But is phone hacking "cheating"?
  • Phone hacking was, then, not an example of cheating. It was a violation of professional ethics. And those ethics are not arbitrarily imposed, but are intrinsic to the practice of journalism as a profession committed to public service and to truth.
  • Behaving ethically matters, but how it matters, and what it means, depends strongly on the context in which it occurs.
  • Ethics for teachers is not, apparently, first and foremost about educating their students, or broadening their minds. Rather, ethics for teachers in our current system consists in following the rules. The implicit, linguistic signal being given is that teachers are not like journalists or doctors, committed to a profession and to the moral code needed to achieve their professional goals. Instead, they are like athletes playing games, or (as Chait says) like children taking tests.
  • Using "cheating" as an ethical lens tends to both trivialize and infantilize teachers' work
  • Professions with social respect and social capital, like doctors and lawyers, collaborate in the creation of their own standards. The assumption is that those standards are intrinsic to the profession's goals, and that, therefore, professionals themselves are best equipped to establish and monitor them. Teachers' standards, though, are imposed from outside -- as if teachers are children, or as if teaching is a game.
  • High-stakes testing, then, does lead to cheating. It does not create unethical behavior -- but it does create the particular unethical behavior of "cheating."
  • We have reached a point where we can only talk about the ethics of the profession in terms of cheating or not cheating, as if teachers' main ethical duty is to make sure that scantron bubbles get filled in correctly. Teachers, like journalists, should have a commitment to truth; like doctors, they have a duty of care. Translating those commitments and duties into a bureaucratized measure of cheating-or-not-cheating diminishes ethics.
  • For teachers it is, literally, demoralizing. It severs the moral experience of teaching from the moral evaluation of teaching, which makes it almost impossible for good teachers (in all the senses of "good") to stay in the system.
  • We need better ethics for teachers -- ethics that treat them as adults and professionals, not like children playing games.
Javier E

The decline effect and the scientific method : The New Yorker - 3 views

  • The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard against the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable.
  • This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.
  • If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe?
  • Schooler demonstrated that subjects shown a face and asked to describe it were much less likely to recognize the face when shown it later than those who had simply looked at it. Schooler called the phenomenon “verbal overshadowing.”
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time.
  • yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!
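Regression to the mean, the "obvious" explanation mentioned above, is easy to see in a quick simulation (a sketch in Python, not from the article; the true effect of 0.2, the sample size of 20, and the 5% selection cutoff are all illustrative assumptions): when only the most striking early results are followed up, the follow-ups drift back toward the true value even though nothing about the phenomenon changed.

```python
import random
import statistics

random.seed(0)

def run_study(true_effect, n):
    """One study: the mean of n noisy measurements of a true effect."""
    return statistics.mean(random.gauss(true_effect, 1.0) for _ in range(n))

# Illustrative numbers: a modest true effect, many small initial studies.
true_effect = 0.2
initial = [run_study(true_effect, 20) for _ in range(1000)]

# Follow up only the most striking 5% of initial results.
striking = sorted(initial, reverse=True)[:50]
followups = [run_study(true_effect, 20) for _ in striking]

print(round(statistics.mean(striking), 2))   # inflated well above 0.2
print(round(statistics.mean(followups), 2))  # drifts back toward 0.2
```

The selected results look like a real, large effect, but replications of those same studies cancel the early statistical flukes out. Schooler's objection is that in his data the declines are too large for this mechanism alone.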
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance.
  • Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for
  • Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments.
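Sterling's filter also inflates the published effect sizes, not just the success rate. A minimal sketch (not from the article; the true effect of 0.3, sample size of 20, and one-sided z-test are illustrative assumptions) shows what happens when journals see only the studies that cross the significance threshold:

```python
import random
import statistics

random.seed(1)

def study_result(true_effect, n=20):
    """One study's estimated effect and whether it reaches 'significance'."""
    est = statistics.mean(random.gauss(true_effect, 1.0) for _ in range(n))
    se = 1.0 / n ** 0.5
    significant = est / se > 1.96      # one-sided z-test, roughly p < .05
    return est, significant

true_effect = 0.3
results = [study_result(true_effect) for _ in range(2000)]

# Publication bias: journals see only the significant results.
published = [est for est, sig in results if sig]
all_estimates = [est for est, _ in results]

print(round(statistics.mean(all_estimates), 2))  # close to the true 0.3
print(round(statistics.mean(published), 2))      # inflated by the filter
```

The published literature then starts from an exaggerated baseline, so honest later replications look like a mysterious decline.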
  • One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results. Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.”
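The funnel shape Palmer relies on falls directly out of sampling error; a short simulation (a sketch under illustrative assumptions: a true effect of 0.2, measurement noise with unit standard deviation) shows why small studies scatter widely while large ones cluster:

```python
import random
import statistics

random.seed(3)

def estimates(true_effect, n, studies=500):
    """Effect estimates from many studies, each with sample size n."""
    return [statistics.mean(random.gauss(true_effect, 1.0) for _ in range(n))
            for _ in range(studies)]

small = estimates(0.2, n=10)    # small studies: wide scatter (funnel mouth)
large = estimates(0.2, n=200)   # large studies: tight cluster (funnel tip)

print(round(statistics.stdev(small), 2))   # roughly 1 / sqrt(10)
print(round(statistics.stdev(large), 2))   # roughly 1 / sqrt(200)
```

In an honest literature the small-study estimates scatter symmetrically around the true value; the skew toward positive results that Palmer found in the small-sample end of real funnel plots is the fingerprint of selective reporting.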
  • Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process.
  • “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • For Simmons, the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals.
  • the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher.
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials.
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong.
  • “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies.”
  • That’s why Schooler argues that scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,”
  • The current “obsession” with replicability distracts from the real problem, which is faulty design.
  • “Every researcher should have to spell out, in advance, how many subjects they’re going to use, and what exactly they’re testing, and what constitutes a sufficient level of proof. We have the tools to be much more transparent about our experiments.”
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,”
  • scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand.
  • The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected
  • This suggests that the decline effect is actually a decline of illusion. While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that.
  • Many scientific theories continue to be considered true even after failing numerous experimental tests.
  • Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.)
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.)
  • The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe. ♦
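The mechanism the excerpts above describe (significance chasing at Fisher's ninety-five-per-cent boundary, journals that publish the significant outliers, and replications that then regress toward the truth) can be sketched in a short simulation. All parameters here (true effect size, sample size, number of labs) are illustrative assumptions, not figures from the article:

```python
import random
import statistics

# A sketch of the "decline effect" as pure publication bias: many labs
# study the same small true effect, only studies that clear the p < .05
# bar get published, and replications of the published studies then
# "decline" back toward the true value.

random.seed(42)

TRUE_EFFECT = 0.2   # true standardized effect size (assumed)
N = 30              # subjects per group in each study (assumed)
STUDIES = 2000      # number of independent labs (assumed)

def run_study(true_effect, n):
    """Return the observed mean difference between two groups for one study."""
    treatment = [random.gauss(true_effect, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    return statistics.mean(treatment) - statistics.mean(control)

# Crude significance filter: with n = 30 per group and unit variance, the
# standard error of the difference is sqrt(2/30) ~= 0.258, so an observed
# difference of about 1.96 * 0.258 ~= 0.51 is needed for p < .05.
THRESHOLD = 1.96 * (2 / N) ** 0.5

published = [e for e in (run_study(TRUE_EFFECT, N) for _ in range(STUDIES))
             if e > THRESHOLD]
replications = [run_study(TRUE_EFFECT, N) for _ in published]

print(f"true effect:             {TRUE_EFFECT:.2f}")
print(f"mean published effect:   {statistics.mean(published):.2f}")
print(f"mean replication effect: {statistics.mean(replications):.2f}")
```

With these particular assumptions, the published studies overstate the true effect by roughly a factor of three, while the replications land near the true value: the "decline" is produced entirely by the selection filter, with no change in the underlying effect.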
Javier E

Software Is Smart Enough for SAT, but Still Far From Intelligent - The New York Times - 0 views

  • An artificial intelligence software program capable of seeing and reading has for the first time answered geometry questions from the SAT at the level of an average 11th grader.
  • The software had to combine machine vision to understand diagrams with the ability to read and understand complete sentences; its success represents a breakthrough in artificial intelligence.
  • Despite the advance, however, the researchers acknowledge that the program’s abilities underscore how far scientists have to go to create software capable of mimicking human intelligence.
  • designer of the test-taking program, noted that even a simple task for children, like understanding the meaning of an arrow in the context of a test diagram, was not yet something the most advanced A.I. programs could do reliably.
  • scientific workshops intended to develop more accurate methods than the Turing test for measuring the capabilities of artificial intelligence programs.
  • Researchers in the field are now developing a wide range of gauges to measure intelligence — including the Allen Institute’s standardized-test approach and a task that Dr. Marcus proposed, which he called the “Ikea construction challenge.” That test would provide an A.I. program with a bag of parts and an instruction sheet and require it to assemble a piece of furniture.
  • First proposed in 2011 by Hector Levesque, a University of Toronto computer scientist, the Winograd Schema Challenge would pose questions that require real-world logic to A.I. programs. A question might be: “The trophy would not fit in the brown suitcase because it was too big. What was too big, A: the trophy or B: the suitcase?” Answering this question would require a program to reason spatially and have specific knowledge about the size of objects.
  • Within the A.I. community, discussions about software programs that can reason in a humanlike way are significant because recent progress in the field has been more focused on improving perception, not reasoning.
  • GeoSolver, or GeoS, was described at the Conference on Empirical Methods in Natural Language Processing in Lisbon this weekend. It operates by separately generating a series of logical equations, which serve as components of possible answers, from the text and the diagram in the question. It then weighs the accuracy of the equations and tries to discern whether its interpretation of the diagram and text is strong enough to select one of the multiple-choice answers.
  • Ultimately, Dr. Marcus said, he believed that progress in artificial intelligence would require multiple tests, just as multiple tests are used to assess human performance.
  • “There is no one measure of human intelligence,” he said. “Why should there be just one A.I. test?”
  • In the 1960s, Hubert Dreyfus, a philosophy professor at the University of California, Berkeley, expressed this skepticism most clearly when he wrote, “Believing that writing these types of programs will bring us closer to real artificial intelligence is like believing that someone climbing a tree is making progress toward reaching the moon.”
carolinewren

Research finds college placement tests in need of makeover | Education Dive - 1 views

  • majority of colleges in the United States use a single test to determine what courses incoming students are eligible to take
  • placement tests decide whether students are ready for college-level coursework or if they first need to re-take the fundamentals to prepare.
  • these placement tests aren't necessarily good predictors of success in college courses.
  • researchers are coalescing around the need to rethink the placement process itself.
  • Some are calling for statewide policy changes, others are pushing individual schools or systems to look at their own practices and improve upon them
  • Nearly 70% of community college students are placed into remedial courses each year. Many of these students are from low-income and minority backgrounds, both of which are still highly underrepresented in science, technology, engineering, and math, or STEM, fields
  • recommend colleges incorporate student goals and motivation into their placement decision and acknowledge the realistic math needs of various degree programs.
  • Yet, placement tests are almost universally skewed to measure how well students know algebra.
  • “The way that math placement is done right now hurts a lot of students.”
  • a key recommendation for colleges is to step up professional development with advisors and help them guide students through appropriate course sequences for their end goals
  • In many places the advisor-to-student ratio is too high to allow for proper guidance
  • Placement tests must evaluate the range of skills, however
  • Single standardized tests are easy to administer and cheap to interpret, but they don’t seem to work for placement
Javier E

They're Watching You at Work - Don Peck - The Atlantic - 2 views

  • Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
  • By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007.
  • The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught
  • By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential.
  • But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific.
  • this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,”
  • Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased.
  • about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles.
  • He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers.
  • Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.
  • When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.
  • What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out.
  • Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
  • scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.”
  • consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
  • We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
  • a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests.
  • Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
  • the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team
  • In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
  • In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention
  • When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore”
  • Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
  • Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organization psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building)
  • There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people
  • the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much-wider-ranging than what we had before
  • what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company.
  • Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
  • What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management.
  • I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
  • he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk.
  • His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
  • Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior.
  • people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency.
  • “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
  • People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call
  • He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
  • the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
  • The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.
  • The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
  • having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.
  • Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire.
  • professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
  • Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
  • Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job
  • Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
  • When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers
  • For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers.
  • the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
  • because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
  • I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history,
  • Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
  • This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming.
  • all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic.
  • Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.
Javier E

To Justify Every 'A,' Some Professors Hand Over Grading Power to Outsiders - Technology... - 0 views

  • The best way to eliminate grade inflation is to take professors out of the grading process: Replace them with professional evaluators who never meet the students, and who don't worry that students will punish harsh grades with poor reviews. That's the argument made by leaders of Western Governors University, which has hired 300 adjunct professors who do nothing but grade student work.
  • These efforts raise the question: What if professors aren't that good at grading? What if the model of giving instructors full control over grades is fundamentally flawed? As more observers call for evidence of college value in an era of ever-rising tuition costs, game-changing models like these are getting serious consideration.
  • Professors do score poorly when it comes to fair grading, according to a study published in July in the journal Teachers College Record. After crunching the numbers on decades' worth of grade reports from about 135 colleges, the researchers found that average grades have risen for 30 years, and that A is now the most common grade given at most colleges. The authors, Stuart Rojstaczer and Christopher Healy, argue that a "consumer-based approach" to higher education has created subtle incentives for professors to give higher marks than deserved. "The standard practice of allowing professors free rein in grading has resulted in grades that bear little relation to actual performance," the two professors concluded.
  • ...13 more annotations...
  • Western Governors is entirely online, for one thing. Technically it doesn't offer courses; instead it provides mentors who help students prepare for a series of high-stakes homework assignments. Those assignments are designed by a team of professional test-makers to prove competence in various subject areas. The idea is that as long as students can leap all of those hurdles, they deserve degrees, whether or not they've ever entered a classroom, watched a lecture video, or participated in any other traditional teaching experience. The model is called "competency-based education."
  • Ms. Johnson explains that Western Governors essentially splits the role of the traditional professor into two jobs. Instructional duties fall to a group the university calls "course mentors," who help students master material. The graders, or evaluators, step in once the homework is filed, with the mind-set of, "OK, the teaching's done, now our job is to find out how much you know," says Ms. Johnson. They log on to a Web site called TaskStream and pluck the first assignment they see. The institution promises that every assignment will be graded within two days of submission.
  • Western Governors requires all evaluators to hold at least a master's degree in the subject they're grading.
  • Evaluators are required to write extensive comments on each task, explaining why the student passed or failed to prove competence in the requisite skill. No letter grades are given—students either pass or fail each task.
  • Another selling point is the software's fast response rate. It can grade a batch of 1,000 essay tests in minutes. Professors can set the software to return the grade immediately and can give students the option of making revisions and resubmitting their work on the spot.
  • The graders must regularly participate in "calibration exercises," in which they grade a simulated assignment to make sure they are all scoring consistently. As the phrase suggests, the process is designed to run like a well-oiled machine.
  • Other evaluators want to push talented students to do more than the university's requirements for a task, or to allow a struggling student to pass if he or she is just under the bar. "Some people just can't acclimate to a competency-based environment," says Ms. Johnson. "I tell them, If they don't buy this, they need to not be here."
  • She and some teaching assistants scored the tests by hand and compared their performance with the computer's.
  • The graduate students became fatigued and made mistakes after grading several tests in a row, she told me, "but the machine was right-on every time."
  • He argues that students like the idea that their tests are being evaluated in a consistent way.
  • All evaluators initially receive a month of training, conducted online, about how to follow each task's grading guidelines, which lay out characteristics of a passing score.
  • He said once students get essays back instantly, they start to view essay tests differently. "It's almost like a big math problem. You don't expect to get everything right the first time, but you work through it."
  • robot grading is the hottest trend in testing circles, says Jacqueline Leighton, a professor of educational psychology at the University of Alberta who edits the journal Educational Measurement: Issues and Practice. Companies building essay-grading robots include the Educational Testing Service, which sells e-rater, and Pearson Education, which makes Intelligent Essay Assessor. "The research is promising, but they're still very much in their infancy," Ms. Leighton says.
charlottedonoho

The Big Problem With the New SAT - NYTimes.com - 1 views

  • At first glance, the College Board’s revised SAT seems a radical departure from the test’s original focus on students’ general ability or aptitude.
  • The revised SAT takes some important, if partial, steps toward becoming a test of curriculum mastery. In place of the infamously tricky, puzzle-type items, the exam will be a more straightforward test of material that students encounter in the classroom.
  • While a clear improvement, the revised SAT remains problematic. It will still emphasize speed — quick recall and time management — over subject knowledge. Despite evidence that writing is the single most important skill for success in college, the essay will be optional.
  • ...3 more annotations...
  • And the biggest problem is this: While the content will be new, the underlying design will not change. The SAT will remain a “norm-referenced” exam, designed primarily to rank students rather than measure what they actually know.
  • Norm-referenced tests like the SAT and the ACT have contributed enormously to the “educational arms race” — the ferocious competition for admission at top colleges and universities. They do so by exaggerating the importance of small differences in test scores that have only marginal relevance for later success in college. Because of the way such tests are designed, answering even a few more questions correctly can substantially raise students’ scores and thereby their rankings. This creates great pressure on students and their parents to avail themselves of expensive test-prep services in search of any edge. It is also unfair to those who cannot afford such services. Yet research on college admissions has repeatedly confirmed that test scores, as compared to high school grades, are relatively weak predictors of how students actually perform in college.
  • College admissions will never be perfectly fair and rational; the disparities are too deep for that. Yet the process can be fairer and more rational if we rethink the purposes of college-entrance exams.
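The point about norm-referenced design can be made concrete with a toy model. The sketch below assumes raw section scores are roughly normally distributed, with entirely made-up parameters (mean 26, standard deviation 7 on a 52-question section); it is an illustration of how ranking-based tests amplify small raw differences, not the SAT's actual scaling.

```python
from statistics import NormalDist

# Hypothetical norm-referenced section: raw scores assumed roughly normal.
# The mean and standard deviation here are invented for illustration.
scores = NormalDist(mu=26, sigma=7)

def percentile(raw_score):
    """Fraction of test-takers scoring below this raw score."""
    return scores.cdf(raw_score)

# The same three-question raw gain, at two points on the curve:
mid_gain = percentile(29) - percentile(26)  # student near the middle
top_gain = percentile(49) - percentile(46)  # student near the top
print(mid_gain, top_gain)
```

Under these assumed parameters, three extra correct answers move a mid-pack student far further up the rankings than the identical gain moves a top scorer, which is why even marginal test-prep edges feel so valuable to families chasing selective admissions.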
Javier E

The Wisdom Deficit in Schools - The Atlantic - 0 views

  • When I was in high school, I chose to major in English in college because I wanted to be wiser. That’s the word I used. If I ended up making lots of money or writing a book, great; but really, I liked the prospect of being exposed to great thoughts and deep advice, and the opportunity to apply them to my own life in my own clumsy way. I wanted to live more thoughtfully and purposefully
  • Now I’m a veteran English teacher, reflecting on what’s slowly changed at the typical American public high school—and the word wisdom keeps haunting me. I don’t teach it as much anymore, and I wonder who is.
  • how teachers are now being informed by the Common Core State Standards—the controversial math and English benchmarks that have been adopted in most states—and the writers and thought leaders who shape the assessments matched to those standards. It all amounts to an alphabet soup of bureaucratic expectations and what can feel like soul-less instruction. The Smarter Balanced Assessment Consortium—referred to in education circles simply as "SBAC"—is the association that writes a Common Core-aligned assessment used in 25 states
  • ...17 more annotations...
  • The Common Core promotes 10 so-called "College and Career Readiness Anchor Standards" for reading that emphasize technical skills like analyzing, integrating, and delineating a text. But these expectations deal very little with ensuring students are actually appreciating the literature at hand—and say nothing about the personal engagement and life lessons to which my principal was referring
  • Kate Kinsella, an influential author who consults school districts across the country and is considered "a guiding force on the National Advisory Board for the Consortium on Reading Excellence," recently told me to "ditch literature" since "literary fiction is not critical to college success." Kinsella continued, "What’s represented by the standards is the need to analyze texts rather than respond to literature."
  • As a teacher working within this regimented environment, my classroom objectives have had to shift. I used to feel deeply satisfied facilitating a rich classroom discussion on a Shakespearean play; now, I feel proud when my students explicitly acknowledge the aforementioned "anchor standards" and take the initiative to learn these technical skills.
  • But as a man who used to be a high school student interested in pursuing wisdom, I’m almost startled to find myself up late at night, literally studying these anchor standards instead of Hamlet itself.
  • It just feels like a very slow, gradual cultural shift that I don’t even notice except for sudden moments of nostalgia, like remembering a dream out of nowhere
  • I get it: My job is to teach communication, not values, and maybe that’s reasonable. After all, I’m not sure I would want my daughter gaining her wisdom from a randomly selected high-school teacher just because he passed a few writing and literature courses at a state university (which is what I did). My job description has evolved, and I’m fine with that
  • This arrangement, in theory, allows students to read the literature on their own, when they get their own time—and I’m fine with that. But then, where are they getting the time and space to appreciate the deeper lessons of classic literature, to evaluate its truth and appropriately apply it to their own lives?
  • But where are the students getting their wisdom?
  • I’m not talking about my child, or your child. I’m absolutely positive that my daughter will know the difference between Darcy and Wickham before she’s in eighth grade; and it's likely that people who would gravitate toward this story would appreciate this kind of thinking
  • I’m talking about American children in general—kids whose parents work all day, whose fathers left them or whose mothers died
  • even for the parents who do prioritize the humanities in their households, I’m not sure that one generation is actually sharing culturally relevant wisdom with the next one—not if the general community doesn’t even talk about what that wisdom specifically means. Each family can be responsible for teaching wisdom in their own way, and I’m fine with that. But then, does the idea of cultural wisdom get surrendered in the process?
  • Secular wisdom in the public schools seems like it should inherently spring from the literature that’s shaped American culture. And while the students focus on how Whitman’s "purpose shapes the content and style of his text," they’re obviously exposed to the words that describe his leaves of grass.
  • But there is a noticeable deprioritization of literature, and a crumbling consensus regarding the nation’s idea of classic literature. The Common Core requires only Shakespeare, which is puzzling if only for its singularity
  • The country’s disregard for the institutional transfer of cultural wisdom is evident with this single observation: None of the state assessments has a single question about the content of any classic literature. They only test on reading skills
  • research suggests that a significant majority of teens do not attend church, and youth church attendance has been decreasing over the past few decades. This is fine with me. But then again, where are they getting their wisdom?
  • Admittedly, nothing about the Common Core or any modern shifts in teaching philosophies is forbidding me from sharing deeper lessons found in Plato’s cave or Orwell’s Airstrip One. The fine print of the Common-Core guidelines even mentions a few possible titles. But this comes with constant and pervasive language that favors objective analysis over personal engagement.
  • Later, a kid who reminds me of the teenager I was in high school—a boy who is at different times depressed, excited, naive, and curious—asked me why I became an English teacher. I smiled in self-defense, but I was silent again, not knowing what to say anymore.
Javier E

Stanford Magazine - History Detected - May/June 2013 - 2 views

  • an approach developed at Stanford's Graduate School of Education that's rapidly gaining adherents across the country
  • trial studies of the Stanford program demonstrated that when high school students engage regularly with challenging primary source documents, they not only make significant gains learning and retaining historical material, they also markedly improve their reading comprehension and critical thinking.
  • Colglazier builds his thought-provoking classes using an online tool called Reading Like a Historian. Designed by the Stanford History Education Group under Professor Sam Wineburg, the website offers 87 flexible lesson plans featuring documents from the Library of Congress
  • ...15 more annotations...
  • Teachers can download the lessons and adapt them for their own purposes, free of charge. Students learn how to examine documents critically, just as historians would, in order to answer intriguing questions: Did Pocahontas really rescue John Smith? Was Abraham Lincoln a racist? Who blinked first in the Cuban Missile Crisis, the Russians or the Americans?
  • The website's lessons have been downloaded 800,000 times and spawned a lively online community of history educators grateful for the camaraderie
  • just 30 percent of the people who teach history-related courses in U.S. public high schools both majored in the field and are certified to teach it.
  • By reading these challenging documents and discovering history for themselves, he says, "not only will they remember the content, they'll develop skills for life."
  • "Textbooks are useful as background narrative. It's difficult to talk about the Gulf of Tonkin Resolution if students don't know where Vietnam is, or the Lincoln-Douglas debates if they don't know who Abe Lincoln was before he was Daniel Day-Lewis."
  • But when a ten-pound textbook becomes the script for a whole year's worth of instruction, a precious learning opportunity is lost. "Many students go through their entire middle and high school and never encounter the actual voice of a historical participant,"
  • Wineburg devoured history books as a kid and did well in Advanced Placement courses at his public high school. But when he entered Brown University, he was shocked at how ill-prepared he was in the subject. Employed after college as a high school history teacher, he saw similar weaknesses in his students. "The best ones could repeat what the text said," he recalls, "but when I asked them to critically examine whether they believed the text, I could have been speaking Martian."
  • Wineburg realized that the art of historical thinking is not something that comes naturally to most people; it has to be cultivated. Students have to be taught to look at the source of a document before reading it, figure out the context in which it was written, and cross-check it with other sources before coming to a conclusion.
  • In 2008, Reisman was ready to conduct a test of the curriculum at five schools in the San Francisco Unified School District. As expected, students in the test classes showed an increased ability to retain historical knowledge, as well as a greater appreciation for history, compared to the control group. What took everyone by surprise, though, was how much the test students advanced in basic reading.
  • Fremont 11th grader Ayanna Black agrees. "In other history courses I have taken, I wasn't able to fully understand what was going on. It seemed that it was just a bunch of words I had to memorize for a future test," she says. "Now that I contextualize the information I am given, it helps me understand not only what is being said but also the reason behind it." The approach, she says, "leads me to remembering the information out of curiosity, rather than trying to pass a test."
  • Scholars in the Stanford History Education Group hope to develop more online lesson plans in world history
  • The Common Core curriculum will bring radical changes in the standardized state tests that youngsters have been taking for decades. Instead of filling in multiple-choice bubbles, they will be expected to write out short answers that demonstrate their ability to analyze texts, and then cite those texts to support arguments—the exact skills that Reading Like a Historian fosters.
  • Wineburg and his PhD students have teamed up with the library on another project: a website called Beyond the Bubble,where teachers can learn how to evaluate their students using short written tests called History Assessments of Thinking. Each HAT asks students to consider a historical document—a letter drawn from the archives of the NAACP, for example—and justify their conclusions about it in three or four sentences. By scanning the responses, teachers can determine quickly whether their pupils are grasping basic concepts.
  • Wineburg hopes to make Reading Like a Historian lesson plans completely paperless, with exercise sheets that can be filled out on a laptop or tablet computer.
  • Though the work has been hard in history this year, she appreciates what it's taught her. "I've learned that you don't just read what is put in front of you and accept it, which is what I had been doing with my textbook all summer," she explains. "It can be frustrating to analyze documents that are contradictory, but I'm coming to appreciate that history is a collection of thousands of accounts and perspectives, and it's our job to interpret it."
Javier E

A New Kind of Tutoring Aims to Make Students Smarter - NYTimes.com - 1 views

  • the goal is to improve cognitive skills. LearningRx is one of a growing number of such commercial services — some online, others offered by psychologists. Unlike traditional tutoring services that seek to help students master a subject, brain training purports to enhance comprehension and the ability to analyze and mentally manipulate concepts, images, sounds and instructions. In a word, it seeks to make students smarter.
  • “The average gain on I.Q. is 15 points after 24 weeks of training, and 20 points in less than 32 weeks.”
  • "Our users have reported profound benefits that include: clearer and quicker thinking; faster problem-solving skills; increased alertness and awareness; better concentration at work or while driving; sharper memory for names, numbers and directions."
  • ...8 more annotations...
  • “It used to take me an hour to memorize 20 words. Now I can learn, like, 40 new words in 20 minutes.”
  • “I don’t know if it makes you smarter. But when you get to each new level on the math and reading tasks, it definitely builds up your self-confidence.”
  • "What you care about is not an intelligence test score, but whether your ability to do an important task has really improved. That's a chain of evidence that would be really great to have. I haven't seen it."
  • Still, a new and growing body of scientific evidence indicates that cognitive training can be effective, including that offered by commercial services.
  • He looked at 340 middle-school students who spent two hours a week for a semester using LearningRx exercises in their schools’ computer labs and an equal number of students who received no such training. Those who played the online games, Dr. Hill found, not only improved significantly on measures of cognitive abilities compared to their peers, but also on Virginia’s annual Standards of Learning exam.
  • "I've had some kids who not only reported that they had very big changes in the classroom, but when we bring them back in the laboratory to do neuropsychological testing, we also see great changes. They show increases that would be highly unlikely to happen just by chance."
  • where crosswords and Sudoku are intended to be a diversion, the games here give that same kind of reward, only they’re designed to improve your brain, your memory, your problem-solving skills.”
  • More than 40 games are offered by Lumosity. One, the N-back, is based on a task developed decades ago by psychologists. Created to test working memory, the N-back challenges users to keep track of a continuously updated list and remember which item appeared “n” times ago.
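The N-back mechanic described in the last item can be sketched in a few lines. This is a hypothetical scoring function for illustration only; it is not Lumosity's actual implementation, which is not public.

```python
def score_n_back(stimuli, n, responses):
    """Score one N-back run.

    stimuli   -- the sequence of items shown, in order
    n         -- how far back a "match" looks
    responses -- for each position, True if the player claimed a match

    From position n onward, the correct response is True exactly when
    the current item equals the item shown n steps earlier.
    Returns the fraction of trials answered correctly.
    """
    trials = range(n, len(stimuli))
    correct = sum(
        1 for i in trials if responses[i] == (stimuli[i] == stimuli[i - n])
    )
    return correct / len(trials) if trials else 0.0
```

For example, `score_n_back(list("ABABC"), 2, [None, None, True, True, False])` scores 1.0: positions 2 and 3 each match the item shown two steps earlier, position 4 does not, and the player answered all three trials correctly.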
qkirkpatrick

New test uses a single drop of blood to reveal entire history of viral infections | Sci... - 0 views

  • Researchers have developed a cheap and rapid test that reveals a person’s full history of viral infections from a single drop of blood.
  • The test allows doctors to read out a list of the viruses that have infected, or continue to infect, patients even when they have not caused any obvious symptoms. The technology means that GPs could screen patients for all of the viruses capable of infecting people
  • When a droplet of blood from a patient is mixed with the modified viruses, any antibodies they have latch on to human virus proteins they recognise as invaders. The scientists then pull out the antibodies and identify the human viruses from the protein fragments they have stuck to.
  • ...2 more annotations...
  • In a demonstration of the technology, the team analysed blood from 569 people in the US, South Africa, Thailand and Peru. The test found that, on average, people had been infected with 10 species of viruses, though at least two people in the trial had histories of 84 infections from different kinds of viruses.
  • The test could bring about major benefits for organ transplant patients. One problem that can follow transplant surgery is the unexpected reawakening of viruses that have lurked inactive in the patient or donor for years. These viruses can return in force when the patient’s immune system is suppressed with drugs to prevent them rejecting the organ. Standard tests often fail to pick up latent viruses before surgery, but the VirScan procedure could reveal their presence and alert doctors and patients to the danger.
  • How can new technology revolutionize medicine and cure people of diseases?
Javier E

Lies, Damned Lies, and Medical Science - Magazine - The Atlantic - 0 views

  • How should we choose among these dueling, high-profile nutritional findings? Ioannidis suggests a simple approach: ignore them all.
  • even if a study managed to highlight a genuine health connection to some nutrient, you’re unlikely to benefit much from taking more of it, because we consume thousands of nutrients that act together as a sort of network, and changing intake of just one of them is bound to cause ripples throughout the network that are far too complex for these studies to detect, and that may be as likely to harm you as help you
  • studies report average results that typically represent a vast range of individual outcomes.
  • ...17 more annotations...
  • studies usually detect only modest effects that merely tend to whittle your chances of succumbing to a particular disease from small to somewhat smaller
  • "The odds that anything useful will survive from any of these studies are poor," says Ioannidis—dismissing in a breath a good chunk of the research into which we sink about $100 billion a year in the United States alone.
  • nutritional studies aren’t the worst. Drug studies have the added corruptive force of financial conflict of interest.
  • "Even when the evidence shows that a particular research idea is wrong, if you have thousands of scientists who have invested their careers in it, they’ll continue to publish papers on it," he says. "It’s like an epidemic, in the sense that they’re infected with these wrong ideas, and they’re spreading it to other researchers through journals."
  • Nature, the grande dame of science journals, stated in a 2006 editorial, "Scientists understand that peer review per se provides only a minimal assurance of quality, and that the public conception of peer review as a stamp of authentication is far from the truth."
  • The ultimate protection against research error and bias is supposed to come from the way scientists constantly retest each other’s results—except they don’t. Only the most prominent findings are likely to be put to the test, because there’s likely to be publication payoff in firming up the proof, or contradicting it.
  • even for medicine’s most influential studies, the evidence sometimes remains surprisingly narrow. Of those 45 super-cited studies that Ioannidis focused on, 11 had never been retested
  • even when a research error is outed, it typically persists for years or even decades.
  • much, perhaps even most, of what doctors do has never been formally put to the test in credible studies, given that the need to do so became obvious to the field only in the 1990s
  • Other meta-research experts have confirmed that similar issues distort research in all fields of science, from physics to economics (where the highly regarded economists J. Bradford DeLong and Kevin Lang once showed how a remarkably consistent paucity of strong evidence in published economics studies made it unlikely that any of them were right)
  • His PLoS Medicine paper is the most downloaded in the journal’s history, and it’s not even Ioannidis’s most-cited work
  • while his fellow researchers seem to be getting the message, he hasn’t necessarily forced anyone to do a better job. He fears he won’t in the end have done much to improve anyone’s health. “There may not be fierce objections to what I’m saying,” he explains. “But it’s difficult to change the way that everyday doctors, patients, and healthy people think and behave.”
  • “Usually what happens is that the doctor will ask for a suite of biochemical tests—liver fat, pancreas function, and so on,” she tells me. “The tests could turn up something, but they’re probably irrelevant. Just having a good talk with the patient and getting a close history is much more likely to tell me what’s wrong.” Of course, the doctors have all been trained to order these tests, she notes, and doing so is a lot quicker than a long bedside chat. They’re also trained to ply the patient with whatever drugs might help whack any errant test numbers back into line.
  • What they’re not trained to do is to go back and look at the research papers that helped make these drugs the standard of care. “When you look the papers up, you often find the drugs didn’t even work better than a placebo. And no one tested how they worked in combination with the other drugs,” she says. “Just taking the patient off everything can improve their health right away.” But not only is checking out the research another time-consuming task, patients often don’t even like it when they’re taken off their drugs, she explains; they find their prescriptions reassuring.
  • Already feeling that they’re fighting to keep patients from turning to alternative medical treatments such as homeopathy, or misdiagnosing themselves on the Internet, or simply neglecting medical treatment altogether, many researchers and physicians aren’t eager to provide even more reason to be skeptical of what doctors do—not to mention how public disenchantment with medicine could affect research funding.
  • We could solve much of the wrongness problem, Ioannidis says, if the world simply stopped expecting scientists to be right. That’s because being wrong in science is fine, and even necessary—as long as scientists recognize that they blew it, report their mistake openly instead of disguising it as a success, and then move on to the next thing, until they come up with the very occasional genuine breakthrough
  • "Science is a noble endeavor, but it’s also a low-yield endeavor," he says. "I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact."
Javier E

Thieves of experience: On the rise of surveillance capitalism - 1 views

  • Harvard Business School professor emerita Shoshana Zuboff argues in her new book that the Valley’s wealth and power are predicated on an insidious, essentially pathological form of private enterprise—what she calls “surveillance capitalism.” Pioneered by Google, perfected by Facebook, and now spreading throughout the economy, surveillance capitalism uses human life as its raw material. Our everyday experiences, distilled into data, have become a privately-owned business asset used to predict and mold our behavior, whether we’re shopping or socializing, working or voting.
  • By reengineering the economy and society to their own benefit, Google and Facebook are perverting capitalism in a way that undermines personal freedom and corrodes democracy.
  • Under the Fordist model of mass production and consumption that prevailed for much of the twentieth century, industrial capitalism achieved a relatively benign balance among the contending interests of business owners, workers, and consumers. Enlightened executives understood that good pay and decent working conditions would ensure a prosperous middle class eager to buy the goods and services their companies produced. It was the product itself — made by workers, sold by companies, bought by consumers — that tied the interests of capitalism’s participants together. Economic and social equilibrium was negotiated through the product.
  • ...72 more annotations...
  • By removing the tangible product from the center of commerce, surveillance capitalism upsets the equilibrium. Whenever we use free apps and online services, it’s often said, we become the products, our attention harvested and sold to advertisers
  • this truism gets it wrong. Surveillance capitalism’s real products, vaporous but immensely valuable, are predictions about our future behavior — what we’ll look at, where we’ll go, what we’ll buy, what opinions we’ll hold — that internet companies derive from our personal data and sell to businesses, political operatives, and other bidders.
  • Unlike financial derivatives, which they in some ways resemble, these new data derivatives draw their value, parasite-like, from human experience. To the Googles and Facebooks of the world, we are neither the customer nor the product. We are the source of what Silicon Valley technologists call “data exhaust” — the informational byproducts of online activity that become the inputs to prediction algorithms.
  • Another 2015 study, appearing in the Journal of Computer-Mediated Communication, showed that when people hear their phone ring but are unable to answer it, their blood pressure spikes, their pulse quickens, and their problem-solving skills decline.
  • The smartphone has become a repository of the self, recording and dispensing the words, sounds and images that define what we think, what we experience and who we are. In a 2015 Gallup survey, more than half of iPhone owners said that they couldn’t imagine life without the device.
  • So what happens to our minds when we allow a single tool such dominion over our perception and cognition?
  • Not only do our phones shape our thoughts in deep and complicated ways, but the effects persist even when we aren’t using the devices. As the brain grows dependent on the technology, the research suggests, the intellect weakens.
  • he has seen mounting evidence that using a smartphone, or even hearing one ring or vibrate, produces a welter of distractions that makes it harder to concentrate on a difficult problem or job. The division of attention impedes reasoning and performance.
  • internet companies operate in what Zuboff terms “extreme structural independence from people.” When databases displace goods as the engine of the economy, our own interests, as consumers but also as citizens, cease to be part of the negotiation. We are no longer one of the forces guiding the market’s invisible hand. We are the objects of surveillance and control.
  • Social skills and relationships seem to suffer as well.
  • In both tests, the subjects whose phones were in view posted the worst scores, while those who left their phones in a different room did the best. The students who kept their phones in their pockets or bags came out in the middle. As the phone’s proximity increased, brainpower decreased.
  • In subsequent interviews, nearly all the participants said that their phones hadn’t been a distraction—that they hadn’t even thought about the devices during the experiment. They remained oblivious even as the phones disrupted their focus and thinking.
  • The researchers recruited 520 undergraduates at UCSD and gave them two standard tests of intellectual acuity. One test gauged “available working-memory capacity,” a measure of how fully a person’s mind can focus on a particular task. The second assessed “fluid intelligence,” a person’s ability to interpret and solve an unfamiliar problem. The only variable in the experiment was the location of the subjects’ smartphones. Some of the students were asked to place their phones in front of them on their desks; others were told to stow their phones in their pockets or handbags; still others were required to leave their phones in a different room.
  • the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”
  •  Smartphones have become so entangled with our existence that, even when we’re not peering or pawing at them, they tug at our attention, diverting precious cognitive resources. Just suppressing the desire to check our phone, which we do routinely and subconsciously throughout the day, can debilitate our thinking.
  • They found that students who didn’t bring their phones to the classroom scored a full letter-grade higher on a test of the material presented than those who brought their phones. It didn’t matter whether the students who had their phones used them or not: All of them scored equally poorly.
  • A study of nearly a hundred secondary schools in the U.K., published last year in the journal Labour Economics, found that when schools ban smartphones, students’ examination scores go up substantially, with the weakest students benefiting the most.
  • Data, the novelist and critic Cynthia Ozick once wrote, is “memory without history.” Her observation points to the problem with allowing smartphones to commandeer our brains
  • Because smartphones serve as constant reminders of all the friends we could be chatting with electronically, they pull at our minds when we’re talking with people in person, leaving our conversations shallower and less satisfying.
  • In a 2013 study conducted at the University of Essex in England, 142 participants were divided into pairs and asked to converse in private for ten minutes. Half talked with a phone in the room, half without a phone present. The subjects were then given tests of affinity, trust and empathy. “The mere presence of mobile phones,” the researchers reported in the Journal of Social and Personal Relationships, “inhibited the development of interpersonal closeness and trust” and diminished “the extent to which individuals felt empathy and understanding from their partners.”
  • The evidence that our phones can get inside our heads so forcefully is unsettling. It suggests that our thoughts and feelings, far from being sequestered in our skulls, can be skewed by external forces we’re not even aware of.
  •  Scientists have long known that the brain is a monitoring system as well as a thinking system. Its attention is drawn toward any object that is new, intriguing or otherwise striking — that has, in the psychological jargon, “salience.”
  • even in the history of captivating media, the smartphone stands out. It is an attention magnet unlike any our minds have had to grapple with before. Because the phone is packed with so many forms of information and so many useful and entertaining functions, it acts as what Dr. Ward calls a “supernormal stimulus,” one that can “hijack” attention whenever it is part of our surroundings — and it is always part of our surroundings.
  • Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.
  • The irony of the smartphone is that the qualities that make it so appealing to us — its constant connection to the net, its multiplicity of apps, its responsiveness, its portability — are the very ones that give it such sway over our minds.
  • Phone makers like Apple and Samsung and app writers like Facebook, Google and Snap design their products to consume as much of our attention as possible during every one of our waking hours
  • Social media apps were designed to exploit “a vulnerability in human psychology,” former Facebook president Sean Parker said in a recent interview. “[We] understood this consciously. And we did it anyway.”
  • A quarter-century ago, when we first started going online, we took it on faith that the web would make us smarter: More information would breed sharper thinking. We now know it’s not that simple.
  • As strange as it might seem, people’s knowledge and understanding may actually dwindle as gadgets grant them easier access to online data stores
  • In a seminal 2011 study published in Science, a team of researchers — led by the Columbia University psychologist Betsy Sparrow and including the late Harvard memory expert Daniel Wegner — had a group of volunteers read forty brief, factual statements (such as “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003”) and then type the statements into a computer. Half the people were told that the machine would save what they typed; half were told that the statements would be erased.
  • Afterward, the researchers asked the subjects to write down as many of the statements as they could remember. Those who believed that the facts had been recorded in the computer demonstrated much weaker recall than those who assumed the facts wouldn’t be stored. Anticipating that information would be readily available in digital form seemed to reduce the mental effort that people made to remember it
  • The researchers dubbed this phenomenon the “Google effect” and noted its broad implications: “Because search engines are continually available to us, we may often be in a state of not feeling we need to encode the information internally. When we need it, we will look it up.”
  • as the pioneering psychologist and philosopher William James said in an 1892 lecture, “the art of remembering is the art of thinking.”
  • Only by encoding information in our biological memory can we weave the rich intellectual associations that form the essence of personal knowledge and give rise to critical and conceptual thinking. No matter how much information swirls around us, the less well-stocked our memory, the less we have to think with.
  • As Dr. Wegner and Dr. Ward explained in a 2013 Scientific American article, when people call up information through their devices, they often end up suffering from delusions of intelligence. They feel as though “their own mental capacities” had generated the information, not their devices. “The advent of the ‘information age’ seems to have created a generation of people who feel they know more than ever before,” the scholars concluded, even though “they may know ever less about the world around them.”
  • That insight sheds light on society’s current gullibility crisis, in which people are all too quick to credit lies and half-truths spread through social media. If your phone has sapped your powers of discernment, you’ll believe anything it tells you.
  • A second experiment conducted by the researchers produced similar results, while also revealing that the more heavily students relied on their phones in their everyday lives, the greater the cognitive penalty they suffered.
  • When we constrict our capacity for reasoning and recall or transfer those skills to a gadget, we sacrifice our ability to turn information into knowledge. We get the data but lose the meaning
  • We need to give our minds more room to think. And that means putting some distance between ourselves and our phones.
  • Google’s once-patient investors grew restive, demanding that the founders figure out a way to make money, preferably lots of it.
  • Under pressure, Page and Brin authorized the launch of an auction system for selling advertisements tied to search queries. The system was designed so that the company would get paid by an advertiser only when a user clicked on an ad. This feature gave Google a huge financial incentive to make accurate predictions about how users would respond to ads and other online content. Even tiny increases in click rates would bring big gains in income. And so the company began deploying its stores of behavioral data not for the benefit of users but to aid advertisers — and to juice its own profits. Surveillance capitalism had arrived.
  • Google’s business now hinged on what Zuboff calls “the extraction imperative.” To improve its predictions, it had to mine as much information as possible from web users. It aggressively expanded its online services to widen the scope of its surveillance.
  • Through Gmail, it secured access to the contents of people’s emails and address books. Through Google Maps, it gained a bead on people’s whereabouts and movements. Through Google Calendar, it learned what people were doing at different moments during the day and whom they were doing it with. Through Google News, it got a readout of people’s interests and political leanings. Through Google Shopping, it opened a window onto people’s wish lists,
  • The company gave all these services away for free to ensure they’d be used by as many people as possible. It knew the money lay in the data.
  • the organization grew insular and secretive. Seeking to keep the true nature of its work from the public, it adopted what its CEO at the time, Eric Schmidt, called a “hiding strategy” — a kind of corporate omerta backed up by stringent nondisclosure agreements.
  • Page and Brin further shielded themselves from outside oversight by establishing a stock structure that guaranteed their power could never be challenged, neither by investors nor by directors.
  • What’s most remarkable about the birth of surveillance capitalism is the speed and audacity with which Google overturned social conventions and norms about data and privacy. Without permission, without compensation, and with little in the way of resistance, the company seized and declared ownership over everyone’s information
  • The companies that followed Google presumed that they too had an unfettered right to collect, parse, and sell personal data in pretty much any way they pleased. In the smart homes being built today, it’s understood that any and all data will be beamed up to corporate clouds.
  • Google conducted its great data heist under the cover of novelty. The web was an exciting frontier — something new in the world — and few people understood or cared about what they were revealing as they searched and surfed. In those innocent days, data was there for the taking, and Google took it
  • Google also benefited from decisions made by lawmakers, regulators, and judges — decisions that granted internet companies free use of a vast taxpayer-funded communication infrastructure, relieved them of legal and ethical responsibility for the information and messages they distributed, and gave them carte blanche to collect and exploit user data.
  • Consider the terms-of-service agreements that govern the division of rights and the delegation of ownership online. Non-negotiable, subject to emendation and extension at the company’s whim, and requiring only a casual click to bind the user, TOS agreements are parodies of contracts, yet they have been granted legal legitimacy by the courts.
  • Law professors, writes Zuboff, “call these ‘contracts of adhesion’ because they impose take-it-or-leave-it conditions on users that stick to them whether they like it or not.” Fundamentally undemocratic, the ubiquitous agreements helped Google and other firms commandeer personal data as if by fiat.
  • In the choices we make as consumers and private citizens, we have always traded some of our autonomy to gain other rewards. Many people, it seems clear, experience surveillance capitalism less as a prison, where their agency is restricted in a noxious way, than as an all-inclusive resort, where their agency is restricted in a pleasing way.
  • Zuboff makes a convincing case that this is a short-sighted and dangerous view — that the bargain we’ve struck with the internet giants is a Faustian one
  • but her case would have been stronger still had she more fully addressed the benefits side of the ledger.
  • there’s a piece missing. While Zuboff’s assessment of the costs that people incur under surveillance capitalism is exhaustive, she largely ignores the benefits people receive in return — convenience, customization, savings, entertainment, social connection, and so on
  • What the industries of the future will seek to manufacture is the self.
  • Behavior modification is the thread that ties today’s search engines, social networks, and smartphone trackers to tomorrow’s facial-recognition systems, emotion-detection sensors, and artificial-intelligence bots.
  • All of Facebook’s information wrangling and algorithmic fine-tuning, she writes, “is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later.”
  • “The goal of everything we do is to change people’s actual behavior at scale,” a top Silicon Valley data scientist told her in an interview. “We can test how actionable our cues are for them and how profitable certain behaviors are for us.”
  • This goal, she suggests, is not limited to Facebook. It is coming to guide much of the economy, as financial and social power shifts to the surveillance capitalists
  • Combining rich information on individuals’ behavioral triggers with the ability to deliver precisely tailored and timed messages turns out to be a recipe for behavior modification on an unprecedented scale.
  • it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its “social graph” to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react.
  • spying on the populace is not the end game. The real prize lies in figuring out ways to use the data to shape how people think and act. “The best way to predict the future is to invent it,” the computer scientist Alan Kay once observed. And the best way to predict behavior is to script it.
  • competition for personal data intensified. It was no longer enough to monitor people online; making better predictions required that surveillance be extended into homes, stores, schools, workplaces, and the public squares of cities and towns. Much of the recent innovation in the tech industry has entailed the creation of products and services designed to vacuum up data from every corner of our lives
  • “The typical complaint is that privacy is eroded, but that is misleading,” Zuboff writes. “In the larger societal pattern, privacy is not eroded but redistributed . . . . Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism.” The transfer of decision rights is also a transfer of autonomy and agency, from the citizen to the corporation.
  • What we lose under this regime is something more fundamental than privacy. It’s the right to make our own decisions about privacy — to draw our own lines between those aspects of our lives we are comfortable sharing and those we are not
  • Other possible ways of organizing online markets, such as through paid subscriptions for apps and services, never even got a chance to be tested.
  • Online surveillance came to be viewed as normal and even necessary by politicians, government bureaucrats, and the general public
  • Google and other Silicon Valley companies benefited directly from the government’s new stress on digital surveillance. They earned millions through contracts to share their data collection and analysis techniques with the National Security Agency.
  • As much as the dot-com crash, the horrors of 9/11 set the stage for the rise of surveillance capitalism. Zuboff notes that, in 2000, members of the Federal Trade Commission, frustrated by internet companies’ lack of progress in adopting privacy protections, began formulating legislation to secure people’s control over their online information and severely restrict the companies’ ability to collect and store it. It seemed obvious to the regulators that ownership of personal data should by default lie in the hands of private citizens, not corporations.
  • The 9/11 attacks changed the calculus. The centralized collection and analysis of online data, on a vast scale, came to be seen as essential to national security. “The privacy provisions debated just months earlier vanished from the conversation more or less overnight,”
Javier E

Geology's Timekeepers Are Feuding - The Atlantic - 0 views

  • In 2000, the Nobel Prize-winning chemist Paul Crutzen won permanent fame for stratigraphy. He proposed that humans had so thoroughly altered the fundamental processes of the planet—through agriculture, climate change, nuclear testing, and other phenomena—that a new geological epoch had commenced: the Anthropocene, the age of humans.
  • Zalasiewicz should know. He is the chair of the Anthropocene working group, which the ICS established in 2009 to investigate whether the new epoch deserved a place in stratigraphic time.
  • In 2015, the group announced that the Anthropocene was a plausible new layer and that it should likely follow the Holocene. But the team has yet to propose a “golden spike” for the epoch: a boundary in the sedimentary rock record where the Anthropocene clearly begins.
  • ...12 more annotations...
  • Officially, the Holocene is still running today. You have lived your entire life in the Holocene, and the Holocene has constituted the geological “present” for as long as there have been geologists. But if we now live in a new epoch, the Anthropocene, then the ICS will have to chop the Holocene somewhere. It will have to choose when the Holocene ended, and it will move some amount of time out of the purview of the Holocene working group and into that of the Anthropocene working group.
  • This is politically difficult. And right now, the Anthropocene working group seems intent on not carving too deep into the Holocene. In a paper published earlier this year in Earth-Science Reviews, the Anthropocene working group’s members strongly imply that they will propose starting the new epoch in the mid-20th century.
  • Some geologists argue that the Anthropocene started even earlier: perhaps 4,000 or 6,000 years ago, as farmers began to remake the land surface. “Most of the world’s forests that were going to be converted to cropland and agriculture were already cleared well before 1950,” says Bill Ruddiman, a geology professor at the University of Virginia and an advocate of this extremely early Anthropocene.
  • “Most of the world’s prairies and steppes that were going to be cleared for crops were already gone by then. How can you argue the Anthropocene started in 1950 when all of the major things that affect Earth’s surface were already over?” Van der Pluijm agreed that the Anthropocene working group was picking 1950 for “not very good reasons.” “Agriculture was the revolution that allowed society to develop,” he said. “That was really when people started to force the land to work for them. That massive land movement—it’s like a landslide, except it’s a humanslide. And it is not, of course, as dramatic as today’s motion of land, but it starts the clock.”
  • This muddle had to stop. The Holocene comes up constantly in discussions of modern global warming. Geologists and climate scientists did not make their jobs any easier by slicing it in different ways and telling contradictory stories about it.
  • This process started almost 10 years ago. For this reason, Zalasiewicz, the chair of the Anthropocene working group, said he wasn’t blindsided by the new subdivisions at all. In fact, he voted to adopt them as a member of the Quaternary working group. “Whether the Anthropocene works with a unified Holocene or one that’s in three parts makes for very little difference,” he told me. In fact, it had made the Anthropocene group’s work easier. “It has been useful to compare the scale of the two climate events that mark the new boundaries [within the Holocene] with the kind of changes that we’re assessing in the Anthropocene. It has been quite useful to have the compare and contrast,” he said. “Our view is that some of the changes in the Anthropocene are rather bigger.”
  • Zalasiewicz said that he and his colleagues were going as fast as they could. When the working group began its work in 2009, it was “really starting from scratch,” he told me. While other working groups have a large body of stratigraphic research to consider, the Anthropocene working group had nothing. “We had to spend a fair bit of time deciding whether the Anthropocene was geology at all,” he said. Then they had to decide where its signal could show up. Now, they’re looking for evidence that shows it.
  • This cycle of “glacials” and “interglacials” has played out about 50 times over the last several million years. When the Holocene began, it was only another interglacial—albeit the one we live in. Until recently, glaciers were still on schedule to descend in another 30,000 years or so. Yet geologists still call the Holocene an epoch, even though they do not bestow this term on any of the previous 49 interglacials. It gets special treatment because we live in it.
  • Much of this science is now moot. Humanity’s vast emissions of greenhouse gas have now so warmed the climate that they have offset the next glaciation. They may even knock us out of the ongoing cycle of Ice Ages, sending the Earth hurtling back toward a “greenhouse” climate after the more amenable “icehouse” climate during which humans evolved. For this reason, van der Pluijm wants the Anthropocene to supplant the Holocene entirely. Humans made their first great change to the environment at the close of the last glaciation, when they seem to have hunted the world’s largest mammals—the woolly mammoth, the saber-toothed tiger—to extinction. Why not start the Anthropocene then? He would even rename the pre-1800 period “the Holocene Age” as a consolation prize.
  • Zalasiewicz said he would not start the Anthropocene too early in time, as it would be too work-intensive for the field to rename such a vast swath of time. “The early-Anthropocene idea would crosscut against the Holocene as it’s seen by Holocene workers,” he said. If other academics didn’t like this, they could create their own timescales and start the Anthropocene Epoch where they choose. “We have no jurisdiction over the word Anthropocene,” he said.
  • Ruddiman, the University of Virginia professor who first argued for a very early Anthropocene, now makes an even broader case. He’s not sure it makes sense to formally define the Anthropocene at all. In a paper published this week, he objects to designating the Anthropocene as starting in the 1950s—and then he objects to delineating the Anthropocene, or indeed any new geological epoch, by name. “Keep the use of the term informal,” he told me. “Don’t make it rigid. Keep it informal so people can say the early-agricultural Anthropocene, or the industrial-era Anthropocene.”
  • “This is the age of geochemical dating,” he said. Geologists have stopped looking to the ICS to place each rock sample into the rock sequence. Instead, field geologists use laboratory techniques to get a precise year or century of origin for each rock sample. “The community just doesn’t care about these definitions,” he said.