New Media Ethics 2009 course: Group items tagged "Intellectual"

Weiye Loh

Higher Expectations | The American Prospect

  • Higher education in the United States isn't a system, a fact that partly explains its historic success. But in their different ways, The Great American University and Higher Education show that all is not well in the halls of ivy. The biggest need is to open the campus doors to the many who now can't afford to get in. During the past 30 years, average private tuition has gone from 20 percent to 50 percent of median family income. Average public tuition, 4 percent of median income in 1980, is now 11 percent. Meanwhile, more students are graduating from high school unprepared for the demands of college. Though increasing the number of students graduating from college is no cure-all, it's critical to the fortunes of the nation. It might even encourage the next generation to be more intellectually adventurous. Even if we can't all be rich, we can certainly be more interesting.
  • Higher Expectations: What are colleges for? Research, economic advancement, or making students more interesting?
Weiye Loh

MacIntyre on money « Prospect Magazine

  • MacIntyre has often given the impression of a robe-ripping Savonarola. He has lambasted the heirs to the principal western ethical schools: John Locke’s social contract, Immanuel Kant’s categorical imperative, Jeremy Bentham’s utilitarian “the greatest happiness for the greatest number.” Yet his is not a lone voice in the wilderness. He can claim connections with a trio of 20th-century intellectual heavyweights: the late Elizabeth Anscombe, her surviving husband, Peter Geach, and the Canadian philosopher Charles Taylor, winner in 2007 of the Templeton prize. What all four have in common is their Catholic faith, enthusiasm for Aristotle’s telos (life goals), and promotion of Thomism, the philosophy of St Thomas Aquinas who married Christianity and Aristotle. Leo XIII (pope from 1878 to 1903), who revived Thomism while condemning communism and unfettered capitalism, is also an influence.
  • MacIntyre’s key moral and political idea is that to be human is to be an Aristotelian goal-driven, social animal. Being good, according to Aristotle, consists in a creature (whether plant, animal, or human) acting according to its nature—its telos, or purpose. The telos for human beings is to generate a communal life with others; and the good society is composed of many independent, self-reliant groups.
  • MacIntyre differs from all these influences and alliances, from Leo XIII onwards, in his residual respect for Marx’s critique of capitalism.
  • MacIntyre begins his Cambridge talk by asserting that the 2008 economic crisis was not due to a failure of business ethics.
  • he has argued that moral behaviour begins with the good practice of a profession, trade, or art: playing the violin, cutting hair, brick-laying, teaching philosophy.
  • In other words, the virtues necessary for human flourishing are not a result of the top-down application of abstract ethical principles, but the development of good character in everyday life.
  • After Virtue, which is in essence an attack on the failings of the Enlightenment, has in its sights a catalogue of modern assumptions of beneficence: liberalism, humanism, individualism, capitalism. MacIntyre yearns for a single, shared view of the good life as opposed to modern pluralism’s assumption that there can be many competing views of how to live well.
  • In philosophy he attacks consequentialism, the view that what matters about an action is its consequences, which is usually coupled with utilitarianism’s “greatest happiness” principle. He also rejects Kantianism—the identification of universal ethical maxims based on reason and applied to circumstances top down. MacIntyre’s critique routinely cites the contradictory moral principles adopted by the allies in the second world war. Britain invoked a Kantian reason for declaring war on Germany: that Hitler could not be allowed to invade his neighbours. But the bombing of Dresden (which for a Kantian involved the treatment of people as a means to an end, something that should never be countenanced) was justified under consequentialist or utilitarian arguments: to bring the war to a swift end.
  • MacIntyre seeks to oppose utilitarianism on the grounds that people are called on by their very nature to be good, not merely to perform acts that can be interpreted as good. The most damaging consequence of the Enlightenment, for MacIntyre, is the decline of the idea of a tradition within which an individual’s desires are disciplined by virtue. And that means being guided by internal rather than external “goods.” So the point of being a good footballer is the internal good of playing beautifully and scoring lots of goals, not the external good of earning a lot of money. The trend away from an Aristotelian perspective has been inexorable: from the empiricism of David Hume, to Darwin’s account of nature driven forward without a purpose, to the sterile analytical philosophy of AJ Ayer and the “demolition of metaphysics” in his 1936 book Language, Truth and Logic.
  • The influential moral philosopher Alasdair MacIntyre has long stood outside the mainstream. Has the financial crisis finally vindicated his critique of global capitalism?
Weiye Loh

Genome Biology | Full text | A Faustian bargain

  • on October 1st, you announced that the departments of French, Italian, Classics, Russian and Theater Arts were being eliminated. You gave several reasons for your decision, including that 'there are comparatively fewer students enrolled in these degree programs.' Of course, your decision was also, perhaps chiefly, a cost-cutting measure - in fact, you stated that this decision might not have been necessary had the state legislature passed a bill that would have allowed your university to set its own tuition rates. Finally, you asserted that the humanities were a drain on the institution financially, as opposed to the sciences, which bring in money in the form of grants and contracts.
  • I'm sure that relatively few students take classes in these subjects nowadays, just as you say. There wouldn't have been many in my day, either, if universities hadn't required students to take a distribution of courses in many different parts of the academy: humanities, social sciences, the fine arts, the physical and natural sciences, and to attain minimal proficiency in at least one foreign language. You see, the reason that humanities classes have low enrollment is not because students these days are clamoring for more relevant courses; it's because administrators like you, and spineless faculty, have stopped setting distribution requirements and started allowing students to choose their own academic programs - something I feel is a complete abrogation of the duty of university faculty as teachers and mentors. You could fix the enrollment problem tomorrow by instituting a mandatory core curriculum that included a wide range of courses.
  • the vast majority of humanity cannot handle freedom. In giving humans the freedom to choose, Christ has doomed humanity to a life of suffering.
  • in Dostoyevsky's parable of the Grand Inquisitor, which is told in Chapter Five of his great novel, The Brothers Karamazov. In the parable, Christ comes back to earth in Seville at the time of the Spanish Inquisition. He performs several miracles but is arrested by Inquisition leaders and sentenced to be burned at the stake. The Grand Inquisitor visits Him in his cell to tell Him that the Church no longer needs Him. The main portion of the text is the Inquisitor explaining why. The Inquisitor says that Jesus rejected the three temptations of Satan in the desert in favor of freedom, but he believes that Jesus has misjudged human nature.
  • I'm sure the budgetary problems you have to deal with are serious. They certainly are at Brandeis University, where I work. And we, too, faced critical strategic decisions because our income was no longer enough to meet our expenses. But we eschewed your draconian - and authoritarian - solution, and a team of faculty, with input from all parts of the university, came up with a plan to do more with fewer resources. I'm not saying that all the specifics of our solution would fit your institution, but the process sure would have. You did call a town meeting, but it was to discuss your plan, not let the university craft its own. And you called that meeting for Friday afternoon on October 1st, when few of your students or faculty would be around to attend. In your defense, you called the timing 'unfortunate', but pleaded that there was a 'limited availability of appropriate large venue options.' I find that rather surprising. If the President of Brandeis needed a lecture hall on short notice, he would get one. I guess you don't have much clout at your university.
  • As for the argument that the humanities don't pay their own way, well, I guess that's true, but it seems to me that there's a fallacy in assuming that a university should be run like a business. I'm not saying it shouldn't be managed prudently, but the notion that every part of it needs to be self-supporting is simply at variance with what a university is all about.
  • You seem to value entrepreneurial programs and practical subjects that might generate intellectual property more than you do 'old-fashioned' courses of study. But universities aren't just about discovering and capitalizing on new knowledge; they are also about preserving knowledge from being lost over time, and that requires a financial investment.
  • what seems to be archaic today can become vital in the future. I'll give you two examples of that. The first is the science of virology, which in the 1970s was dying out because people felt that infectious diseases were no longer a serious health problem in the developed world and other subjects, such as molecular biology, were much sexier. Then, in the early 1990s, a little problem called AIDS became the world's number 1 health concern. The virus that causes AIDS was first isolated and characterized at the National Institutes of Health in the USA and the Institut Pasteur in France, because these were among the few institutions that still had thriving virology programs. My second example you will probably be more familiar with. Middle Eastern Studies, including the study of foreign languages such as Arabic and Persian, was hardly a hot subject on most campuses in the 1990s. Then came September 11, 2001. Suddenly we realized that we needed a lot more people who understood something about that part of the world, especially its Muslim culture. Those universities that had preserved their Middle Eastern Studies departments, even in the face of declining enrollment, suddenly became very important places. Those that hadn't - well, I'm sure you get the picture.
  • one of your arguments is that not every place should try to do everything. Let other institutions have great programs in classics or theater arts, you say; we will focus on preparing students for jobs in the real world. Well, I hope I've just shown you that the real world is pretty fickle about what it wants. The best way for people to be prepared for the inevitable shock of change is to be as broadly educated as possible, because today's backwater is often tomorrow's hot field. And interdisciplinary research, which is all the rage these days, is only possible if people aren't too narrowly trained. If none of that convinces you, then I'm willing to let you turn your institution into a place that focuses on the practical, but only if you stop calling it a university and yourself the President of one. You see, the word 'university' derives from the Latin 'universitas', meaning 'the whole'. You can't be a university without having a thriving humanities program. You will need to call SUNY Albany a trade school, or perhaps a vocational college, but not a university. Not anymore.
  • I started out as a classics major. I'm now Professor of Biochemistry and Chemistry. Of all the courses I took in college and graduate school, the ones that have benefited me the most in my career as a scientist are the courses in classics, art history, sociology, and English literature. These courses didn't just give me a much better appreciation for my own culture; they taught me how to think, to analyze, and to write clearly. None of my science courses did any of that.
Weiye Loh

Science Warriors' Ego Trips - The Chronicle Review - The Chronicle of Higher Education

  • By Carlin Romano. Standing up for science excites some intellectuals the way beautiful actresses arouse Warren Beatty, or career liberals boil the blood of Glenn Beck and Rush Limbaugh. It's visceral.
  • A brave champion of beleaguered science in the modern age of pseudoscience, this Ayn Rand protagonist sarcastically derides the benighted irrationalists and glows with a self-anointed superiority. Who wouldn't want to feel that sense of power and rightness?
  • You hear the voice regularly—along with far more sensible stuff—in the latest of a now common genre of science patriotism, Nonsense on Stilts: How to Tell Science From Bunk (University of Chicago Press), by Massimo Pigliucci, a philosophy professor at the City University of New York.
  • it mixes eminent common sense and frequent good reporting with a cocksure hubris utterly inappropriate to the practice it apotheosizes.
  • According to Pigliucci, both Freudian psychoanalysis and Marxist theory of history "are too broad, too flexible with regard to observations, to actually tell us anything interesting." (That's right—not one "interesting" thing.) The idea of intelligent design in biology "has made no progress since its last serious articulation by natural theologian William Paley in 1802," and the empirical evidence for evolution is like that for "an open-and-shut murder case."
  • Pigliucci offers more hero sandwiches spiced with derision and certainty. Media coverage of science is "characterized by allegedly serious journalists who behave like comedians." Commenting on the highly publicized Dover, Pa., court case in which U.S. District Judge John E. Jones III ruled that intelligent-design theory is not science, Pigliucci labels the need for that judgment a "bizarre" consequence of the local school board's "inane" resolution. Noting the complaint of intelligent-design advocate William Buckingham that an approved science textbook didn't give creationism a fair shake, Pigliucci writes, "This is like complaining that a textbook in astronomy is too focused on the Copernican theory of the structure of the solar system and unfairly neglects the possibility that the Flying Spaghetti Monster is really pulling each planet's strings, unseen by the deluded scientists."
  • Or is it possible that the alternate view unfairly neglected could be more like that of Harvard scientist Owen Gingerich, who contends in God's Universe (Harvard University Press, 2006) that it is partly statistical arguments—the extraordinary unlikelihood eons ago of the physical conditions necessary for self-conscious life—that support his belief in a universe "congenially designed for the existence of intelligent, self-reflective life"?
  • Even if we agree that capital "I" and "D" intelligent-design of the scriptural sort—what Gingerich himself calls "primitive scriptural literalism"—is not scientifically credible, does that make Gingerich's assertion, "I believe in intelligent design, lowercase i and lowercase d," equivalent to Flying-Spaghetti-Monsterism? Tone matters. And sarcasm is not science.
  • The problem with polemicists like Pigliucci is that a chasm has opened up between two groups that might loosely be distinguished as "philosophers of science" and "science warriors."
  • Philosophers of science, often operating under the aegis of Thomas Kuhn, recognize that science is a diverse, social enterprise that has changed over time, developed different methodologies in different subsciences, and often advanced by taking putative pseudoscience seriously, as in debunking cold fusion
  • The science warriors, by contrast, often write as if our science of the moment is isomorphic with knowledge of an objective world-in-itself—Kant be damned!—and any form of inquiry that doesn't fit the writer's criteria of proper science must be banished as "bunk." Pigliucci, typically, hasn't much sympathy for radical philosophies of science. He calls the work of Paul Feyerabend "lunacy," deems Bruno Latour "a fool," and observes that "the great pronouncements of feminist science have fallen as flat as the similarly empty utterances of supporters of intelligent design."
  • It doesn't have to be this way. The noble enterprise of submitting nonscientific knowledge claims to critical scrutiny—an activity continuous with both philosophy and science—took off in an admirable way in the late 20th century when Paul Kurtz, of the University at Buffalo, established the Committee for the Scientific Investigation of Claims of the Paranormal (Csicop) in May 1976. Csicop soon after launched the marvelous journal Skeptical Inquirer
  • Although Pigliucci himself publishes in Skeptical Inquirer, his contributions there exhibit his signature smugness. For an antidote to Pigliucci's overweening scientism 'tude, it's refreshing to consult Kurtz's curtain-raising essay, "Science and the Public," in Science Under Siege (Prometheus Books, 2009, edited by Frazier)
  • Kurtz's commandment might be stated, "Don't mock or ridicule—investigate and explain." He writes: "We attempted to make it clear that we were interested in fair and impartial inquiry, that we were not dogmatic or closed-minded, and that skepticism did not imply a priori rejection of any reasonable claim. Indeed, I insisted that our skepticism was not totalistic or nihilistic about paranormal claims."
  • Kurtz combines the ethos of both critical investigator and philosopher of science. Describing modern science as a practice in which "hypotheses and theories are based upon rigorous methods of empirical investigation, experimental confirmation, and replication," he notes: "One must be prepared to overthrow an entire theoretical framework—and this has happened often in the history of science ... skeptical doubt is an integral part of the method of science, and scientists should be prepared to question received scientific doctrines and reject them in the light of new evidence."
  • Pigliucci, alas, allows his animus against the nonscientific to pull him away from sensitive distinctions among various sciences to sloppy arguments one didn't see in such earlier works of science patriotism as Carl Sagan's The Demon-Haunted World: Science as a Candle in the Dark (Random House, 1995). Indeed, he probably sets a world record for misuse of the word "fallacy."
  • To his credit, Pigliucci at times acknowledges the nondogmatic spine of science. He concedes that "science is characterized by a fuzzy borderline with other types of inquiry that may or may not one day become sciences." Science, he admits, "actually refers to a rather heterogeneous family of activities, not to a single and universal method." He rightly warns that some pseudoscience—for example, denial of HIV-AIDS causation—is dangerous and terrible.
  • But at other points, Pigliucci ferociously attacks opponents like the most unreflective science fanatic
  • He dismisses Feyerabend's view that "science is a religion" as simply "preposterous," even though he elsewhere admits that "methodological naturalism"—the commitment of all scientists to reject "supernatural" explanations—is itself not an empirically verifiable principle or fact, but rather an almost Kantian precondition of scientific knowledge. An article of faith, some cold-eyed Feyerabend fans might say.
  • He writes, "ID is not a scientific theory at all because there is no empirical observation that can possibly contradict it. Anything we observe in nature could, in principle, be attributed to an unspecified intelligent designer who works in mysterious ways." But earlier in the book, he correctly argues against Karl Popper that susceptibility to falsification cannot be the sole criterion of science, because science also confirms. It is, in principle, possible that an empirical observation could confirm intelligent design—i.e., that magic moment when the ultimate UFO lands with representatives of the intergalactic society that planted early life here, and we accept their evidence that they did it.
  • "As long as we do not venture to make hypotheses about who the designer is and why and how she operates," he writes, "there are no empirical constraints on the 'theory' at all. Anything goes, and therefore nothing holds, because a theory that 'explains' everything really explains nothing."
  • Here, Pigliucci again mixes up what's likely or provable with what's logically possible or rational. The creation stories of traditional religions and scriptures do, in effect, offer hypotheses, or claims, about who the designer is—e.g., see the Bible.
  • Far from explaining nothing because it explains everything, such an explanation explains a lot by explaining everything. It just doesn't explain it convincingly to a scientist with other evidentiary standards.
  • A sensible person can side with scientists on what's true, but not with Pigliucci on what's rational and possible. Pigliucci occasionally recognizes that. Late in his book, he concedes that "nonscientific claims may be true and still not qualify as science." But if that's so, and we care about truth, why exalt science to the degree he does? If there's really a heaven, and science can't (yet?) detect it, so much the worse for science.
  • Pigliucci quotes a line from Aristotle: "It is the mark of an educated mind to be able to entertain a thought without accepting it." Science warriors such as Pigliucci, or Michael Ruse in his recent clash with other philosophers in these pages, should reflect on a related modern sense of "entertain." One does not entertain a guest by mocking, deriding, and abusing the guest. Similarly, one does not entertain a thought or approach to knowledge by ridiculing it.
  • Long live Skeptical Inquirer! But can we deep-six the egomania and unearned arrogance of the science patriots? As Descartes, that immortal hero of scientists and skeptics everywhere, pointed out, true skepticism, like true charity, begins at home.
  • Carlin Romano, critic at large for The Chronicle Review, teaches philosophy and media theory at the University of Pennsylvania.
  • April 25, 2010: Science Warriors' Ego Trips
Weiye Loh

The hidden philosophy of David Foster Wallace - Salon.com Mobile

  • Taylor's argument, which he himself found distasteful, was that certain logical and seemingly unarguable premises lead to the conclusion that even in matters of human choice, the future is as set in stone as the past. We may think we can affect it, but we can't.
  • human responsibility — that, with advances in neuroscience, is of increasing urgency in jurisprudence, social codes and personal conduct. And it also shows a brilliant young man struggling against fatalism, performing exquisite exercises to convince others, and maybe himself, that what we choose to do is what determines the future, rather than the future more or less determining what we choose to do. This intellectual struggle on Wallace's part seems now a kind of emotional foreshadowing of his suicide. He was a victim of depression from an early age — even during his undergraduate years — and the future never looks more intractable than it does to someone who is depressed.
  • "Fate, Time, and Language" reminded me of how fond philosophers are of extreme situations in creating their thought experiments. In this book alone we find a naval battle, the gallows, a shotgun, poison, an accident that leads to paraplegia, somebody stabbed and killed, and so on. Why not say "I have a pretzel in my hand today. Tomorrow I will have eaten it or not eaten it" instead of "I have a gun in my hand and I will either shoot you through the heart and feast on your flesh or I won't"? Well, OK — the answer is easy: The extreme and violent scenarios catch our attention more forcefully than pretzels do. Also, philosophers, sequestered and meditative as they must be, may long for real action — beyond beekeeping.
  • Wallace, in his essay, at the very center of trying to show that we can indeed make meaningful choices, places a terrorist in the middle of Amherst's campus with his finger on the trigger mechanism of a nuclear weapon. It is by far the most narratively arresting moment in all of this material, and it says far more about the author's approaching antiestablishment explosions of prose and his extreme emotional makeup than it does about tweedy profs fantasizing about ordering their ships into battle. For, after all, who, besides everyone around him, would the terrorist have killed?
  • In 1962, a philosopher (and world-famous beekeeper) named Richard Taylor published a soon-to-be-notorious essay called "Fatalism" in the Philosophical Review.
Weiye Loh

The Decline Effect and the Scientific Method : The New Yorker

  • On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties.
  • the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.
  • Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
  • the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
  • At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”
  • In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.
  • the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts
  • an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results.
  • Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
  • scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.)
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
  • Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging
  • John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.
  • The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
  • This suggests that the decline effect is actually a decline of illusion.
  • While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
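The annotations above describe several statistical mechanisms behind the decline effect: regression to the mean, Fisher's five-per-cent significance threshold, Sterling's publication bias, and the funnel-graph asymmetry Palmer documents. A short simulation can make their interaction concrete. The sketch below is not taken from Lehrer's article or from any study it cites; the true effect size, sample sizes, and selection rule are assumptions chosen purely for illustration, showing how selective publication of small "significant" studies can manufacture an apparent decline even when the underlying effect never changes.

# Illustrative sketch (assumed numbers, not data from the article): publication
# bias at the p < 0.05 threshold plus regression to the mean can produce an
# apparent "decline effect" while the true effect stays constant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.15   # small, constant true effect (standardized mean difference)
SIGMA = 1.0          # within-group standard deviation

def run_study(n_per_group):
    """Simulate one two-group study; return (observed effect size, p-value)."""
    treated = rng.normal(TRUE_EFFECT, SIGMA, n_per_group)
    control = rng.normal(0.0, SIGMA, n_per_group)
    _, p = stats.ttest_ind(treated, control)
    observed = (treated.mean() - control.mean()) / SIGMA
    return observed, p

# Early literature: many small studies, but only positive, "significant"
# results reach print (Sterling's publication bias).
early_published = [d for d, p in (run_study(20) for _ in range(2000))
                   if p < 0.05 and d > 0]

# Later literature: large replications, published regardless of outcome.
late_replications = [run_study(400)[0] for _ in range(200)]

print(f"true effect:                     {TRUE_EFFECT:.2f}")
print(f"mean of early published effects: {np.mean(early_published):.2f}")
print(f"mean of late replications:       {np.mean(late_replications):.2f}")
# The early published mean comes out several times larger than 0.15, while the
# unselected large replications cluster near it: the "decline" is the selection
# filter washing out, not the effect itself shrinking.

The same simulated studies also suggest why Palmer's funnel graph works as a diagnostic: among the small early studies, only those skewed toward large positive effects clear the significance filter, so plotting effect size against sample size exposes the asymmetry that selective reporting leaves behind.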
Weiye Loh

Wikileaks and the Long Haul « Clay Shirky

  • Citizens of a functioning democracy must be able to know what the state is saying and doing in our name, to engage in what Pierre Rosanvallon calls “counter-democracy”*, the democracy of citizens distrusting rather than legitimizing the actions of the state. Wikileaks plainly improves those abilities.
  • On the other hand, human systems can’t stand pure transparency. For negotiation to work, people’s stated positions have to change, but change is seen, almost universally, as weakness. People trying to come to consensus must be able to privately voice opinions they would publicly abjure, and may later abandon. Wikileaks plainly damages those abilities. (If Aaron Bady’s analysis is correct, it is the damage and not the oversight that Wikileaks is designed to create.*)
  • we have a tension between two requirements for democratic statecraft, one that can’t be resolved, but can be brought to an acceptable equilibrium. Indeed, like the virtues of equality vs. liberty, or popular will vs. fundamental rights, it has to be brought into such an equilibrium for democratic statecraft not to be wrecked either by too much secrecy or too much transparency.
  • As Tom Slee puts it, “Your answer to ‘what data should the government make public?’ depends not so much on what you think about data, but what you think about the government.”* My personal view is that there is too much secrecy in the current system, and that a corrective towards transparency is a good idea. I don’t, however, believe in total transparency, and even more importantly, I don’t think that independent actors who are subject to no checks or balances are a good idea in the long haul.
  • The practical history of politics, however, suggests that the periodic appearance of such unconstrained actors in the short haul is essential to increased democratization, not just of politics but of thought. We celebrate the printers of 16th century Amsterdam for making it impossible for the Catholic Church to constrain the output of the printing press to Church-approved books*, a challenge that helped usher in, among other things, the decentralization of scientific inquiry and the spread of politically seditious writings advocating democracy. This intellectual and political victory didn’t, however, mean that the printing press was then free of all constraints. Over time, a set of legal limitations around printing rose up, including restrictions on libel, the publication of trade secrets, and sedition. I don’t agree with all of these laws, but they were at least produced by some legal process.
  • I am conflicted about the right balance between the visibility required for counter-democracy and the need for private speech among international actors. Here’s what I’m not conflicted about: When authorities can’t get what they want by working within the law, the right answer is not to work outside the law. The right answer is that they can’t get what they want.
  • The United States is — or should be — subject to the rule of law, which makes the extra-judicial pursuit of Wikileaks especially nauseating. (Calls for Julian’s assassination are even more nauseating.) It may be that what Julian has done is a crime. (I know him casually, but not well enough to vouch for his motivations, nor am I a lawyer.) In that case, the right answer is to bring the case to a trial.
  • Over the long haul, we will need new checks and balances for newly increased transparency — Wikileaks shouldn’t be able to operate as a law unto itself any more than the US should be able to. In the short haul, though, Wikileaks is our Amsterdam. Whatever restrictions we eventually end up enacting, we need to keep Wikileaks alive today, while we work through the process democracies always go through to react to change. If it’s OK for a democracy to just decide to run someone off the internet for doing something they wouldn’t prosecute a newspaper for doing, the idea of an internet that further democratizes the public sphere will have taken a mortal blow.
Weiye Loh

What Is Academic Work? - NYTimes.com

  • After it was all over, everyone pronounced the occasion a great success; not because any substantive problems had been solved, but because a set of intellectual problems had been tossed around and teased out by men and women at the top of their game.
  • academic work is distinctive — something and not everything — and that a part of its distinctiveness is its distance from political agendas. This does not mean that political agendas can’t be the subject of academic work — one should inquire into their structure, history, etc. — but that the point of introducing them into the classroom should never be to urge them or to warn against them.
  • The conference format reflected its academic (not policy) imperatives. A presenter summarized his or her paper. A designated commentator posed sharp questions. The presenter responded and then the floor was opened to the other participants, who posed their own sharp questions to both the presenter and the commentator. The exchanges were swift and spirited. The room took on some of the aspects of an athletic competition — parry, thrust, soft balls, hard balls, palpable hits, ingenious defenses and a series of “well dones” said by everyone to everyone else at the end of each round.
  • The kind of questions asked also marked the occasion as an academic one. Not “Won’t the economy implode if we do this?” or “Wouldn’t free expression rights be eroded if we went down that path?”, but “Would you be willing to follow your argument to its logical conclusion?” or “Doesn’t that amount to just making up the law as you go along?” These questions were continuations of a philosophical conversation that stretches back at least to the beginning of the republic; and while they were illustrated by real-world topics (the pardon power, habeas corpus, the electoral college), the focus was always on the theoretical puzzles of which those topics were disposable examples; they were never the main show.
Weiye Loh

Singapore M.D.: A New Low for SMA

  • I am used to SMA's weak stance against alternative medicine, but this letter in the ST Forum today is a new low for the "professional" association: "Unwise to criticise alternative medicine, says SMA".
  • Notice how Dr Razak had not addressed Dr Ho's main focus, which was the lack of evidence behind alternative medicine.
  • Instead of accusing Dr Ho of failing to "take SMA's proposal to amend the SMC ethical code in context", Dr Razak should perhaps ask himself why SMC's ethical code specifically makes that distinction between medicine and "complementary medicine" - as I have argued in my previous posts, just because practitioners of a certain mode of alternative medicine are registered does not mean that there is any evidence backing their claims; reality does not alter itself because of cultural beliefs, political decrees, economic conveniences, or public opinion.
  • There are many forms of alternative medicine out there being sold to unsuspecting patients. Just because they are part of our "cultural beliefs", or are renting shop spaces in our hospitals, does not mean that doctors as a profession must accept them or refrain from speaking up against them. If you know that certain forms of alternative medicine are ineffective or indeed potentially harmful, but choose not to advocate against them because you do not want to be seen as "self-serving", what does that say about your strength of character? Would we rather our patients be harmed by such therapy than be falsely accused of being self-serving?
  • If we claim to be a profession that is built upon science, if we claim to be advocates for our patients, then we must speak up even when we know it will not be well-received, even when we know it will offend.
  • SMA needs to ask itself whether it will choose what is expedient over what is right, and whether it is more important to be popular or to be intellectually honest.
Weiye Loh

Roger Pielke Jr.'s Blog: Ideological Diversity in Academia

  • Jonathan Haidt's talk (above) at the annual meeting of the Society for Personality and Social Psychology was written up last week in a column by John Tierney in the NY Times. This was soon followed by a dismissal of the work by Paul Krugman. The entire sequence is interesting, but for me the best part, and the one that gets to the nub of the issue, is Haidt's response to Krugman: My research, like so much research in social psychology, demonstrates that we humans are experts at using reasoning to find evidence for whatever conclusions we want to reach. We are terrible at searching for contradictory evidence. Science works because our peers are so darn good at finding that contradictory evidence for us. Social science — at least my corner of it — is broken because there is nobody to look for contradictory evidence regarding sacralized issues, particularly those related to race, gender, and class. I urged my colleagues to increase our ideological diversity not for any moral reason, but because it will make us better scientists. You do not have that problem in economics where the majority is liberal but there is a substantial and vocal minority of libertarians and conservatives. Your field is healthy, mine is not. Do you think I was wrong to call for my professional organization to seek out a modicum of ideological diversity?
  • On a related note, the IMF review of why the institution failed to warn of the global financial crisis identified a lack of intellectual diversity as being among the factors responsible (PDF): Several cognitive biases seem to have played an important role. Groupthink refers to the tendency among homogeneous, cohesive groups to consider issues only within a certain paradigm and not challenge its basic premises (Janis, 1982). The prevailing view among IMF staff—a cohesive group of macroeconomists—was that market discipline and self-regulation would be sufficient to stave off serious problems in financial institutions. They also believed that crises were unlikely to happen in advanced economies, where "sophisticated" financial markets could thrive safely with minimal regulation of a large and growing portion of the financial system. Everyone in academia has seen similar dynamics at work.
Weiye Loh

Rationally Speaking: Studying folk morality: philosophy, psychology, or what? - 0 views

  • in the magazine article Joshua mentions several studies of “folk morality,” i.e. of how ordinary people think about moral problems. The results are fascinating. It turns out that people’s views are correlated with personality traits, with subjects who score high on “openness to experience” being reliably more relativists than objectivists about morality (I am not using the latter term in the infamous Randian meaning here, but as Knobe does, to indicate the idea that morality has objective bases).
  • Other studies show that people who are capable of considering multiple options in solving mathematical puzzles also tend to be moral relativists, and — in a study co-authored by Knobe himself — the very same situation (infanticide) was judged along a sliding scale from objectivism to relativism depending on whether the hypothetical scenario involved a fellow American (presumably sharing our same general moral values), a member of an imaginary Amazonian tribe (for which infanticide was acceptable), or an alien from the planet Pentar (belonging to a race whose only goal in life is to turn everything into equilateral pentagons, and killing individuals that might get in the way of that lofty objective is a duty). Oh, and related research also shows that young children tend to be objectivists, while young adults are usually relativists — but that later in life one’s primordial objectivism apparently experiences a comeback.
  • This is all very interesting social science, but is it philosophy? Granted, the differences between various disciplines are often not clear cut, and of course whenever people engage in truly inter-disciplinary work we should simply applaud the effort and encourage further work. But I do wonder in what sense, if any, the kinds of results that Joshua and his colleagues find have much to do with moral philosophy.
  • ...6 more annotations...
  • there seems to me the potential danger of confusing various categories of moral discourse. For instance, are the “folks” studied in these cases actually relativist, or perhaps adherents to one of several versions of moral anti-realism? The two are definitely not the same, but I doubt that the subjects in question could tell the difference (and I wouldn’t expect them to, after all they are not philosophers).
  • why do we expect philosophers to learn from “folk morality” when we do not expect, say, physicists to learn from folk physics (which tends to be Aristotelian in nature), or statisticians from people’s understanding of probability theory (which is generally remarkably poor, as casino owners know very well)? Or even, while I’m at it, why not ask literary critics to discuss Shakespeare in light of what common folks think about the bard (making sure, perhaps, that they have at least read his works, and not just watched the movies)?
  • Hence, my other examples of stat (i.e., math) and literary criticism. I conceive of philosophy in general, and moral philosophy in particular, as more akin to a (science-informed, to be sure) mix between logic and criticism. Some moral philosophy consists in engaging an “if ... then” sort of scenario, akin to logical-mathematical thinking, where one begins with certain axioms and attempts to derive the consequences of such axioms. In other respects, moral philosophers exercise reflective criticism concerning those consequences as they might be relevant to practical problems.
  • For instance, we may write philosophically about abortion, and begin our discussion from a comparison of different conceptions of “person.” We might conclude that “if” one adopts conception X of what a person is, “then” abortion is justifiable under such and such conditions; while “if” one adopts conception Y of a person, “then” abortion is justifiable under a different set of conditions, or not justifiable at all. We could, of course, back up even further and engage in a discussion of what “personhood” is, thus moving from moral philosophy to metaphysics.
  • Nowhere in the above are we going to ask “folks” what they think a person is, or how they think their implicit conception of personhood informs their views on abortion. Of course people’s actual views on abortion are crucial — especially for public policy — and they are intrinsically interesting to social scientists. But they don’t seem to me to make much more contact with philosophy than the above mentioned popular opinions on Shakespeare make contact with serious literary criticism. And please, let’s not play the cheap card of “elitism,” unless we are willing to apply the label to just about any intellectual endeavor, in any discipline.
  • There is one area in which experimental philosophy can potentially contribute to philosophy proper (as opposed to social science). Once we have a more empirically grounded understanding of what people’s moral reasoning actually is, then we can analyze the likely consequences of that reasoning for a variety of societal issues. But now we would be doing something more akin to political than moral philosophy.
  •  
    My colleague Joshua Knobe at Yale University recently published an intriguing article in The Philosopher's Magazine about the experimental philosophy of moral decision making. Joshua and I have had a nice chat during a recent Rationally Speaking podcast dedicated to experimental philosophy, but I'm still not convinced about the whole enterprise.
Weiye Loh

LRB · Jim Holt · Smarter, Happier, More Productive - 0 views

  • There are two ways that computers might add to our wellbeing. First, they could do so indirectly, by increasing our ability to produce other goods and services. In this they have proved something of a disappointment. In the early 1970s, American businesses began to invest heavily in computer hardware and software, but for decades this enormous investment seemed to pay no dividends. As the economist Robert Solow put it in 1987, ‘You can see the computer age everywhere but in the productivity statistics.’ Perhaps too much time was wasted in training employees to use computers; perhaps the sorts of activity that computers make more efficient, like word processing, don’t really add all that much to productivity; perhaps information becomes less valuable when it’s more widely available. Whatever the case, it wasn’t until the late 1990s that some of the productivity gains promised by the computer-driven ‘new economy’ began to show up – in the United States, at any rate. So far, Europe appears to have missed out on them.
  • The other way computers could benefit us is more direct. They might make us smarter, or even happier. They promise to bring us such primary goods as pleasure, friendship, sex and knowledge. If some lotus-eating visionaries are to be believed, computers may even have a spiritual dimension: as they grow ever more powerful, they have the potential to become our ‘mind children’. At some point – the ‘singularity’ – in the not-so-distant future, we humans will merge with these silicon creatures, thereby transcending our biology and achieving immortality. It is all of this that Woody Allen is missing out on.
  • But there are also sceptics who maintain that computers are having the opposite effect on us: they are making us less happy, and perhaps even stupider. Among the first to raise this possibility was the American literary critic Sven Birkerts. In his book The Gutenberg Elegies (1994), Birkerts argued that the computer and other electronic media were destroying our capacity for ‘deep reading’. His writing students, thanks to their digital devices, had become mere skimmers and scanners and scrollers. They couldn’t lose themselves in a novel the way he could. This didn’t bode well, Birkerts thought, for the future of literary culture.
  • ...6 more annotations...
  • Suppose we found that computers are diminishing our capacity for certain pleasures, or making us worse off in other ways. Why couldn’t we simply spend less time in front of the screen and more time doing the things we used to do before computers came along – like burying our noses in novels? Well, it may be that computers are affecting us in a more insidious fashion than we realise. They may be reshaping our brains – and not for the better. That was the drift of ‘Is Google Making Us Stupid?’, a 2008 cover story by Nicholas Carr in the Atlantic.
  • Carr thinks that he was himself an unwitting victim of the computer’s mind-altering powers. Now in his early fifties, he describes his life as a ‘two-act play’, ‘Analogue Youth’ followed by ‘Digital Adulthood’. In 1986, five years out of college, he dismayed his wife by spending nearly all their savings on an early version of the Apple Mac. Soon afterwards, he says, he lost the ability to edit or revise on paper. Around 1990, he acquired a modem and an AOL subscription, which entitled him to spend five hours a week online sending email, visiting ‘chat rooms’ and reading old newspaper articles. It was around this time that the programmer Tim Berners-Lee wrote the code for the World Wide Web, which, in due course, Carr would be restlessly exploring with the aid of his new Netscape browser.
  • Carr launches into a brief history of brain science, which culminates in a discussion of ‘neuroplasticity’: the idea that experience affects the structure of the brain. Scientific orthodoxy used to hold that the adult brain was fixed and immutable: experience could alter the strengths of the connections among its neurons, it was believed, but not its overall architecture. By the late 1960s, however, striking evidence of brain plasticity began to emerge. In one series of experiments, researchers cut nerves in the hands of monkeys, and then, using microelectrode probes, observed that the monkeys’ brains reorganised themselves to compensate for the peripheral damage. Later, tests on people who had lost an arm or a leg revealed something similar: the brain areas that used to receive sensory input from the lost limbs seemed to get taken over by circuits that register sensations from other parts of the body (which may account for the ‘phantom limb’ phenomenon). Signs of brain plasticity have been observed in healthy people, too. Violinists, for instance, tend to have larger cortical areas devoted to processing signals from their fingering hands than do non-violinists. And brain scans of London cab drivers taken in the 1990s revealed that they had larger than normal posterior hippocampuses – a part of the brain that stores spatial representations – and that the increase in size was proportional to the number of years they had been in the job.
  • The brain’s ability to change its own structure, as Carr sees it, is nothing less than ‘a loophole for free thought and free will’. But, he hastens to add, ‘bad habits can be ingrained in our neurons as easily as good ones.’ Indeed, neuroplasticity has been invoked to explain depression, tinnitus, pornography addiction and masochistic self-mutilation (this last is supposedly a result of pain pathways getting rewired to the brain’s pleasure centres). Once new neural circuits become established in our brains, they demand to be fed, and they can hijack brain areas devoted to valuable mental skills. Thus, Carr writes: ‘The possibility of intellectual decay is inherent in the malleability of our brains.’ And the internet ‘delivers precisely the kind of sensory and cognitive stimuli – repetitive, intensive, interactive, addictive – that have been shown to result in strong and rapid alterations in brain circuits and functions’. He quotes the brain scientist Michael Merzenich, a pioneer of neuroplasticity and the man behind the monkey experiments in the 1960s, to the effect that the brain can be ‘massively remodelled’ by exposure to the internet and online tools like Google. ‘THEIR HEAVY USE HAS NEUROLOGICAL CONSEQUENCES,’ Merzenich warns in caps – in a blog post, no less.
  • It’s not that the web is making us less intelligent; if anything, the evidence suggests it sharpens more cognitive skills than it dulls. It’s not that the web is making us less happy, although there are certainly those who, like Carr, feel enslaved by its rhythms and cheated by the quality of its pleasures. It’s that the web may be an enemy of creativity. Which is why Woody Allen might be wise in avoiding it altogether.
  • empirical support for Carr’s conclusion is both slim and equivocal. To begin with, there is evidence that web surfing can increase the capacity of working memory. And while some studies have indeed shown that ‘hypertexts’ impede retention – in a 2001 Canadian study, for instance, people who read a version of Elizabeth Bowen’s story ‘The Demon Lover’ festooned with clickable links took longer and reported more confusion about the plot than did those who read it in an old-fashioned ‘linear’ text – others have failed to substantiate this claim. No study has shown that internet use degrades the ability to learn from a book, though that doesn’t stop people feeling that this is so – one medical blogger quoted by Carr laments, ‘I can’t read War and Peace any more.’
Weiye Loh

Uwe E. Reinhardt: How Convincing Is the Economists' Case for Free Trade? - NYTimes.com - 0 views

  • “Emerging Markets as Partners, Not Rivals,” a fine commentary in The New York Times on Sunday by N. Gregory Mankiw of Harvard prompted me to take a vacation from the dreariness of health policy to visit one of the economic profession’s intellectual triumphs: the theory that every country gains by unfettered international trade.
  • That theory is less popular among noneconomists, especially politicians and unions. They wring their hands at what is called offshoring of jobs and often have no problem obstructing free trade with such barriers as tariffs or import quotas, which they deem in the national interest. (Two blogs recently offered examples of this posture.)
  • Economists assert that over the longer run, the owners of businesses that lose their markets in international competition and their employees will shift into new economic endeavors in which they can function more competitively. Skeptics, of course, often respond with the retort of John Maynard Keynes: “In the long run, we’re all dead.”
  • ...3 more annotations...
  • this truth, which economists hold self-evident: Relative to a status quo of no or limited international trade, permitting full free trade across borders will leave in its wake some immediate losers, but citizens who gain from such trade gain much more than the losers lose. On a net basis, therefore, each nation gains over all from such trade.
  • In their work, economists are typically not nationalistic. National boundaries mean little to them, other than that much data happen to be collected on a national basis. Whether a fellow American gains from a trade or someone in Shanghai does not make any difference to most economists, nor does it matter to them where the losers from global competition live, in America or elsewhere.
  • I say most economists, because here and there one can find some who do seem to worry about how fellow Americans fare in the matter of free trade. In a widely noted column in The Washington Post, “Free Trade’s Great, but Offshoring Rattles Me,” for example, my Princeton colleague Alan Blinder wrote: I’m a free trader down to my toes. Always have been. Yet lately, I’m being treated as a heretic by many of my fellow economists. Why? Because I have stuck my neck out and predicted that the offshoring of service jobs from rich countries such as the United States to poor countries such as India may pose major problems for tens of millions of American workers over the coming decades. In fact, I think offshoring may be the biggest political issue in economics for a generation. When I say this, many of my fellow free traders react with a mixture of disbelief, pity and hostility. Blinder, have you lost your mind? Professor Blinder has estimated that 30 million to 40 million jobs in the United States are potentially offshorable — including those of scientists, mathematicians, radiologists and editors on the high end of the market, and those of telephone operators, clerks and typists on the low end. He says he is rattled by the question of how our country will cope with this phenomenon, especially in view of our tattered social safety net. “That is why I am going public with my concerns now,” he concludes. “If we economists stubbornly insist on chanting ‘free trade is good for you’ to people who know that it is not, we will quickly become irrelevant to the public debate. Compared with that, a little apostasy should be welcome.
Weiye Loh

Roger Pielke Jr.'s Blog: Intolerance: Virtue or Anti-Science "Doublespeak"? - 0 views

  • John Beddington, the Chief Scientific Advisor to the UK government, has identified a need to be "grossly intolerant" of certain views that get in the way of dealing with important policy problems: We are grossly intolerant, and properly so, of racism. We are grossly intolerant, and properly so, of people who [are] anti-homosexuality... We are not—and I genuinely think we should think about how we do this—grossly intolerant of pseudo-science, the building up of what purports to be science by the cherry-picking of the facts and the failure to use scientific evidence and the failure to use scientific method. One way is to be completely intolerant of this nonsense. That we don't kind of shrug it off. We don't say: ‘oh, it's the media’ or ‘oh they would say that wouldn’t they?’ I think we really need, as a scientific community—and this is a very important scientific community—to think about how we do it.
  • Fortunately, Andrew Stirling, research director of the Science Policy Research Unit (which these days I think just goes by SPRU) at the University of Sussex, provides a much healthier perspective: What is this 'pseudoscience'? For Beddington, this seems to include any kind of criticism from non-scientists of new technologies like genetically modified organisms, much advocacy of the 'precautionary principle' in environmental protection, or suggestions that science itself might also legitimately be subjected to moral considerations. Who does Beddington hold to blame for this "politically or morally or religiously motivated nonsense"? For anyone who really values the central principles of science itself, the answer is quite shocking. He is targeting effectively anyone expressing "scepticism" over what he holds to be 'scientific' pronouncements—whether on GM, climate change or any other issue. Note, it is not irrational "denial" on which Beddington is calling for 'gross intolerance', but the eminently reasonable quality of "scepticism"! The alarming contradiction here is that organised, reasoned, scepticism—accepting rational argument from any quarter without favour for social status, cultural affiliations  or institutional prestige—is arguably the most precious and fundamental quality that science itself has (imperfectly) to offer. Without this enlightening aspiration, history shows how society is otherwise all-too-easily shackled by the doctrinal intolerance, intellectual blinkers and authoritarian suppression of criticism so familiar in religious, political, cultural and media institutions.
  • Stirling concludes: [T]he basic aspirational principles of science offer the best means to challenge the ubiquitously human distorting pressures of self-serving privilege, hubris, prejudice and power. Among these principles are exactly the scepticism and tolerance against which Beddington is railing (ironically) so emotionally! Of course, scientific practices like peer review, open publication and acknowledgement of uncertainty all help reinforce the positive impacts of these underlying qualities. But, in the real world, any rational observer has to note that these practices are themselves imperfect. Although rarely achieved, it is inspirational ideals of universal, communitarian scepticism—guided by progressive principles of reasoned argument, integrity, pluralism, openness and, of course, empirical experiment—that best embody the great civilising potential of science itself. As the motto of none other than the Royal Society loosely enjoins (also sometimes somewhat ironically) "take nothing on authority". In this colourful instance of straight talking then, John Beddington is himself coming uncomfortably close to a particularly unsettling form of unscientific—even (in a deep sense) anti-scientific—'double speak'.
  • ...1 more annotation...
  • Anyone who really values the progressive civilising potential of science should argue (in a qualified way as here) against Beddington's intemperate call for "complete intolerance" of scepticism. It is the social and human realities shared by politicians, non-government organisations, journalists and scientists themselves, that make tolerance of scepticism so important. The priorities pursued in scientific research and the directions taken by technology are all as fundamentally political as other areas of policy. No matter how uncomfortable and messy the resulting debates may sometimes become, we should never be cowed by any special interest—including that of scientific institutions—away from debating these issues in open, rational, democratic ways. To allow this to happen would be to undermine science itself in the most profound sense. It is the upholding of an often imperfect pursuit of scepticism and tolerance that offer the best way to respect and promote science. Such a position is, indeed, much more in keeping with the otherwise-exemplary work of John Beddington himself. Stirling's eloquent response provides a nice tonic to Beddington's unsettling remarks. Nonetheless, Beddington's perspective should be taken as a clear warning as to the pathological state of highly politicized science these days.
Weiye Loh

How the Internet Gets Inside Us : The New Yorker - 0 views

  • N.Y.U. professor Clay Shirky—the author of “Cognitive Surplus” and many articles and blog posts proclaiming the coming of the digital millennium—is the breeziest and seemingly most self-confident
  • Shirky believes that we are on the crest of an ever-surging wave of democratized information: the Gutenberg printing press produced the Reformation, which produced the Scientific Revolution, which produced the Enlightenment, which produced the Internet, each move more liberating than the one before.
  • The idea, for instance, that the printing press rapidly gave birth to a new order of information, democratic and bottom-up, is a cruel cartoon of the truth. If the printing press did propel the Reformation, one of the biggest ideas it propelled was Luther’s newly invented absolutist anti-Semitism. And what followed the Reformation wasn’t the Enlightenment, a new era of openness and freely disseminated knowledge. What followed the Reformation was, actually, the Counter-Reformation, which used the same means—i.e., printed books—to spread ideas about what jerks the reformers were, and unleashed a hundred years of religious warfare.
  • ...17 more annotations...
  • If ideas of democracy and freedom emerged at the end of the printing-press era, it wasn’t by some technological logic but because of parallel inventions, like the ideas of limited government and religious tolerance, very hard won from history.
  • As Andrew Pettegree shows in his fine new study, “The Book in the Renaissance,” the mainstay of the printing revolution in seventeenth-century Europe was not dissident pamphlets but royal edicts, printed by the thousand: almost all the new media of that day were working, in essence, for kinglouis.gov.
  • Even later, full-fledged totalitarian societies didn’t burn books. They burned some books, while keeping the printing presses running off such quantities that by the mid-fifties Stalin was said to have more books in print than Agatha Christie.
  • Many of the more knowing Never-Betters turn for cheer not to messy history and mixed-up politics but to psychology—to the actual expansion of our minds.
  • The argument, advanced in Andy Clark’s “Supersizing the Mind” and in Robert K. Logan’s “The Sixth Language,” begins with the claim that cognition is not a little processing program that takes place inside your head, Robby the Robot style. It is a constant flow of information, memory, plans, and physical movements, in which as much thinking goes on out there as in here. If television produced the global village, the Internet produces the global psyche: everyone keyed in like a neuron, so that to the eyes of a watching Martian we are really part of a single planetary brain. Contraptions don’t change consciousness; contraptions are part of consciousness. We may not act better than we used to, but we sure think differently than we did.
  • Cognitive entanglement, after all, is the rule of life. My memories and my wife’s intermingle. When I can’t recall a name or a date, I don’t look it up; I just ask her. Our machines, in this way, become our substitute spouses and plug-in companions.
  • But, if cognitive entanglement exists, so does cognitive exasperation. Husbands and wives deny each other’s memories as much as they depend on them. That’s fine until it really counts (say, in divorce court). In a practical, immediate way, one sees the limits of the so-called “extended mind” clearly in the mob-made Wikipedia, the perfect product of that new vast, supersized cognition: when there’s easy agreement, it’s fine, and when there’s widespread disagreement on values or facts, as with, say, the origins of capitalism, it’s fine, too; you get both sides. The trouble comes when one side is right and the other side is wrong and doesn’t know it. The Shakespeare authorship page and the Shroud of Turin page are scenes of constant conflict and are packed with unreliable information. Creationists crowd cyberspace every bit as effectively as evolutionists, and extend their minds just as fully. Our trouble is not the over-all absence of smartness but the intractable power of pure stupidity, and no machine, or mind, seems extended enough to cure that.
  • Nicholas Carr, in “The Shallows,” William Powers, in “Hamlet’s BlackBerry,” and Sherry Turkle, in “Alone Together,” all bear intimate witness to a sense that the newfound land, the ever-present BlackBerry-and-instant-message world, is one whose price, paid in frayed nerves and lost reading hours and broken attention, is hardly worth the gains it gives us. “The medium does matter,” Carr has written. “As a technology, a book focuses our attention, isolates us from the myriad distractions that fill our everyday lives. A networked computer does precisely the opposite. It is designed to scatter our attention. . . . Knowing that the depth of our thought is tied directly to the intensity of our attentiveness, it’s hard not to conclude that as we adapt to the intellectual environment of the Net our thinking becomes shallower.
  • Carr is most concerned about the way the Internet breaks down our capacity for reflective thought.
  • Powers’s reflections are more family-centered and practical. He recounts, very touchingly, stories of family life broken up by the eternal consultation of smartphones and computer monitors
  • He then surveys seven Wise Men—Plato, Thoreau, Seneca, the usual gang—who have something to tell us about solitude and the virtues of inner space, all of it sound enough, though he tends to overlook the significant point that these worthies were not entirely in favor of the kinds of liberties that we now take for granted and that made the new dispensation possible.
  • Similarly, Nicholas Carr cites Martin Heidegger for having seen, in the mid-fifties, that new technologies would break the meditational space on which Western wisdoms depend. Since Heidegger had not long before walked straight out of his own meditational space into the arms of the Nazis, it’s hard to have much nostalgia for this version of the past. One feels the same doubts when Sherry Turkle, in “Alone Together,” her touching plaint about the destruction of the old intimacy-reading culture by the new remote-connection-Internet culture, cites studies that show a dramatic decline in empathy among college students, who apparently are “far less likely to say that it is valuable to put oneself in the place of others or to try and understand their feelings.” What is to be done?
  • Among Ever-Wasers, the Harvard historian Ann Blair may be the most ambitious. In her book “Too Much to Know: Managing Scholarly Information Before the Modern Age,” she makes the case that what we’re going through is like what others went through a very long while ago. Against the cartoon history of Shirky or Tooby, Blair argues that the sense of “information overload” was not the consequence of Gutenberg but already in place before printing began. She wants us to resist “trying to reduce the complex causal nexus behind the transition from Renaissance to Enlightenment to the impact of a technology or any particular set of ideas.” Anyway, the crucial revolution was not of print but of paper: “During the later Middle Ages a staggering growth in the production of manuscripts, facilitated by the use of paper, accompanied a great expansion of readers outside the monastic and scholastic contexts.” For that matter, our minds were altered less by books than by index slips. Activities that seem quite twenty-first century, she shows, began when people cut and pasted from one manuscript to another; made aggregated news in compendiums; passed around précis. “Early modern finding devices” were forced into existence: lists of authorities, lists of headings.
  • Everyone complained about what the new information technologies were doing to our minds. Everyone said that the flood of books produced a restless, fractured attention. Everyone complained that pamphlets and poems were breaking kids’ ability to concentrate, that big good handmade books were ignored, swept aside by printed works that, as Erasmus said, “are foolish, ignorant, malignant, libelous, mad.” The reader consulting a card catalogue in a library was living a revolution as momentous, and as disorienting, as our own.
  • The book index was the search engine of its era, and needed to be explained at length to puzzled researchers
  • That uniquely evil and necessary thing the comprehensive review of many different books on a related subject, with the necessary oversimplification of their ideas that it demanded, was already around in 1500, and already being accused of missing all the points. In the period when many of the big, classic books that we no longer have time to read were being written, the general complaint was that there wasn’t enough time to read big, classic books.
  • at any given moment, our most complicated machine will be taken as a model of human intelligence, and whatever media kids favor will be identified as the cause of our stupidity. When there were automatic looms, the mind was like an automatic loom; and, since young people in the loom period liked novels, it was the cheap novel that was degrading our minds. When there were telephone exchanges, the mind was like a telephone exchange, and, in the same period, since the nickelodeon reigned, moving pictures were making us dumb. When mainframe computers arrived and television was what kids liked, the mind was like a mainframe and television was the engine of our idiocy. Some machine is always showing us Mind; some entertainment derived from the machine is always showing us Non-Mind.
Weiye Loh

Rationally Speaking: A different kind of moral relativism - 0 views

  • Prinz’s basic stance is that moral values stem from our cognitive hardware, upbringing, and social environment. These equip us with deep-seated moral emotions, but these emotions express themselves in a contingent way due to cultural circumstances. And while reason can help, it has limited influence: it can reshape our ethics only up to a point, and it cannot settle major differences between different value systems. Therefore, it is difficult, if not impossible, to construct an objective morality that transcends emotions and circumstance.
  • As Prinz writes, in part:“No amount of reasoning can engender a moral value, because all values are, at bottom, emotional attitudes. … Reason cannot tell us which facts are morally good. Reason is evaluatively neutral. At best, reason can tell us which of our values are inconsistent, and which actions will lead to fulfillment of our goals. But, given an inconsistency, reason cannot tell us which of our conflicting values to drop or which goals to follow. If my goals come into conflict with your goals, reason tells me that I must either thwart your goals, or give up caring about mine; but reason cannot tell me to favor one choice over the other. … Moral judgments are based on emotions, and reasoning normally contributes only by helping us extrapolate from our basic values to novel cases. Reasoning can also lead us to discover that our basic values are culturally inculcated, and that might impel us to search for alternative values, but reason alone cannot tell us which values to adopt, nor can it instill new values.”
  • This moral relativism is not the absolute moral relativism of, supposedly, bands of liberal intellectuals, or of postmodernist philosophers. It presents a more serious challenge to those who argue there can be objective morality. To be sure, there is much Prinz and I agree on. At the least, we agree that morality is largely constructed by our cognition, upbringing, and social environment; and that reason has the power to synthesize and clarify our worldviews, and to help us plan for and react to life’s situations.
  • ...5 more annotations...
  • Suppose I concede to Prinz that reason cannot settle differences in moral values and sentiments. Difference of opinion doesn’t mean that there isn’t a true or rational answer. In fact, there are many reasons why our cognition, emotional reactions or previous values could be wrong or irrational — and why people would not pick up on their deficiencies. In his article, Prinz uses the case of sociopaths, who simply lack certain cognitive abilities. There are many reasons other than sociopathy why human beings can get things wrong, morally speaking, often and badly. It could be that people are unable to adopt a more objective morality because of their circumstances — from brain deficiencies to lack of access to relevant information. But, again, none of this amounts to an argument against the existence of objective morality.
  • As it turns out, Prinz’s conception of objective morality does not quite reflect the thinking of most people who believe in objective morality. He writes that: “Objectivism holds that there is one true morality binding upon all of us.” This is a particular strand of moral realism, but there are many. For instance, one can judge some moral precepts as better than others, yet remain open to the fact that there are probably many different ways to establish a good society. This is a pluralistic conception of objective morality which doesn’t assume one absolute moral truth. For all that has been said, Sam Harris’ idea of a moral landscape does help illustrate this concept. Thinking in terms of better and worse morality gets us out of relativism and into an objectivist approach. The important thing to note is that one need not go all the way to absolute objectivity to work toward a rational, non-arbitrary morality.
  • even Prinz admits that “Relativism does not entail that we should tolerate murderous tyranny. When someone threatens us or our way of life, we are strongly motivated to protect ourselves.” That is, there are such things as better and worse values: the worse ones kill us, the better ones don’t. This is a very broad criterion, but it is an objective standard. Prinz is arguing for a tighter moral relativism – a sort of stripped down objective morality that is constricted by nature, experience, and our (modest) reasoning abilities.
  • I proposed at the discussion that a more objective morality could be had with the help of a robust public discourse on the issues at hand. Prinz does not necessarily disagree. He wrote that “Many people have overlapping moral values, and one can settle debates by appeal to moral common ground.” But Prinz pointed out a couple of limitations on public discourse. For example, the agreements we reach on “moral common ground” are often exclusive of some, and abstract in content. Consider the United Nations Declaration of Human Rights, a seemingly good example of global moral agreement. Yet, it was ratified by a small sample of 48 countries, and it is based on suspiciously Western sounding language. Everyone has a right to education and health care, but — Prinz pointed out during the discussion — what level of education and health care? Still, the U.N. declaration was passed 48-0 with just 8 abstentions (Belarus, Czechoslovakia, Poland, Ukraine, USSR, Yugoslavia, South Africa and Saudi Arabia). It includes 30 articles of ethical standards agreed upon by 48 countries around the world. Such a document does give us more reason to think that public discourse can lead to significant agreement upon values.
  • Reason might not be able to arrive at moral truths, but it can push us to test and question the rationality of our values — a crucial part in the process that leads to the adoption of new, or modified values. The only way to reduce disputes about morality is to try to get people on the same page about their moral goals. Given the above, this will not be easy, and perhaps we shouldn’t be too optimistic in our ability to employ reason to figure things out. But reason is still the best, and even only, tool we can wield, and while it might not provide us with a truly objective morality, it’s enough to save us from complete moral relativism.
Weiye Loh

Can We Kill Off This Myth That The Internet Is A Wild West That Needs To Be Tamed? | Te... - 0 views

  • The latest version of this is a horrible, dangerous and ridiculous editorial from Martin Kettle, at The Guardian, who insists that it's time to bring the internet "under control": Yet whatever one's qualms about Sarkozy and his plan, he is surely on to something that should not be so sweepingly dismissed. Looking at British politics this week, it is hard to make an intellectually serious case that internet regulation issues should not be raised. Not only has the balance between parliament, the courts and the media been made to look irrelevant over superinjunctions by the twitterati, but almost the first act of the new Scottish government on Thursday was to promise a clampdown on internet sectarian hate postings. The fact that Facebook's Mark Zuckerberg also popped up this week with the casual suggestion that children under 13 should be able to use social networking sites dramatically underlines the argument that there are issues of importance to discuss here.
  • on the issue of the superinjunction, it suggests the exact opposite of what Kettle is arguing. It's pointing out the ridiculousness of analog-era regulations in a digital age. That's not a case for controls. It's a case for removing controls.
  • issue of hate speech is another one where people overreact emotionally. The best way to counter hate speech (which is almost always ignorance) is with more speech. "Clamping down" only convinces those who hate that they're "onto something" and that they're being persecuted.
  • ...5 more annotations...
  • Zuckerberg's claim -- which he's already pointed out involved taking his words out of context -- was just that there could be socially useful reasons why younger people might be helped if they could have accounts, but over aggressive internet controls prevent that. Again, that seems to argue against control, not for it.
  • The internet does not exist as untouchable. Morality and the rule of law do apply to the actions people do there. The question is whether those laws are appropriate. In many cases, it appears they're not.
  • the fallacy is not that these laws are obsolete because they're difficult to enforce. It's that they're obsolete because many of them don't make any sense, such as these injunctions that seek to merely protect the rich and famous from having their own embarrassing actions discussed.
  • Some of these laws aren't "difficult" to enforce, they're impossible to enforce. And it's not because the internet is some "wild west," but because it's a very different platform of communication -- a many-to-many platform, which the world has not had before. We've had one-to-one and one-to-many forms of communication, but a many-to-many platform really does change some important fundamentals when it comes to speech. Far more important are the questions of internet access to unsuitable material, especially but not solely by children, as well as the danger to children from inadequately policed social media. Merely to write such a sentence is to invite outrage in some quarters, but these issues are all too easy for a society to ignore until they return to haunt us. And the proper response, if there is "unsuitable" (unsuitable to whom, by the way?) content, is to go after those who produced and distributed it. Not to seek to block access and sweep it under the rug. That's denial. Let's live in reality.
  • Kettle talks about spam and pornography. Yet, I almost never see spam any more. Why? Because technologists came in and built filters. I never see pornography either. And not because of any laws or filters, but because the websites I surf don't display any, and contrary to the myth makers, it's pretty difficult to "accidentally" run into porn. I do a lot of surfing and can't recall ever accidentally coming across any.
Weiye Loh

It's Even Less in Your Genes by Richard C. Lewontin | The New York Review of Books - 0 views

  • One of the complications is that the effective environment is defined by the life activities of the organism itself.
  • Thus, as organisms evolve, their environments necessarily evolve with them. Although classic Darwinism is framed by referring to organisms adapting to environments, the actual process of evolution involves the creation of new “ecological niches” as new life forms come into existence. Part of the ecological niche of an earthworm is the tunnel excavated by the worm and part of the ecological niche of a tree is the assemblage of fungi associated with the tree’s root system that provide it with nutrients.
  • The distinction between organisms and their environments remains deeply embedded in our consciousness. Partly this is due to the inertia of educational institutions and materials.
  • ...7 more annotations...
  • But the problem is deeper than simply intellectual inertia. It goes back, ultimately, to the unconsidered differentiations we make—at every moment when we distinguish among objects—between those in the foreground of our consciousness and the background places in which the objects happen to be situated. Moreover, this distinction creates a hierarchy of objects. We are conscious not only of the skin that encloses and defines the object, but of bits and pieces of that object, each of which must have its own “skin.” That is the problem of anatomization. A car has a motor and brakes and a transmission and an outer body that, at appropriate moments, become separate objects of our consciousness, objects that at least some knowledgeable person recognizes as coherent entities.
  • Evelyn Fox Keller sees “The Mirage of a Space Between Nature and Nurture” as a consequence of our false division of the world into living objects without sufficient consideration of the external milieu in which they are embedded, since organisms help create effective environments through their own life activities.
  • The central point of her analysis has been that gender itself (as opposed to sex) is socially constructed, and that construction has influenced the development of science:If there is a single point on which all feminist scholarship…has converged, it is the importance of recognizing the social construction of gender…. All of my work on gender and science proceeds from this basic recognition. My endeavor has been to call attention to the ways in which the social construction of a binary opposition between “masculine” and “feminine” has influenced the social construction of science.
  • major critical concern of Fox Keller’s present book is the widespread attempt to partition in some quantitative way the contribution made to human variation by differences in biological inheritance, that is, differences in genes, as opposed to differences in life experience. She wants to make clear a distinction between analyzing the relative strength of the causes of variation among individuals and groups, an analysis that is coherent in principle, and simply assigning the relative contributions of biological and environmental causes to the value of some character in an individual
  • It is, for example, all very well to say that genetic variation is responsible for 76 percent of the observed variation in adult height among American women while the remaining 24 percent is a consequence of differences in nutrition. The implication is that if all variation in nutrition were abolished then 24 percent of the observed height variation among individuals in the population in the next generation would disappear. To say, however, that 76 percent of Evelyn Fox Keller’s height was caused by her genes and 24 percent by her nutrition does not make sense. The nonsensical implication of trying to partition the causes of her individual height would be that if she never ate anything she would still be three quarters as tall as she is.
  • In fact, Keller is too optimistic about the assignment of causes of variation even when considering variation in a population. As she herself notes parenthetically, the assignment of relative proportions of population variation to different causes in a population depends on there being no specific interaction between the causes.
  • Keller’s rather casual treatment of the interaction between causal factors in the case of the drummers, despite her very great sophistication in analyzing the meaning of variation, is a symptom of a fault that is deeply embedded in the analytic training and thinking of both natural and social scientists. If there are several variable factors influencing some phenomenon, how are we to assign the relative importance to each in determining total variation? Let us take an extreme example. Suppose that we plant seeds of each of two different varieties of corn in two different locations with the following results measured in bushels of corn produced (see Table 1). There are differences between the varieties in their yield from location to location and there are differences between locations from variety to variety. So, both variety and location matter. But there is no average variation between locations when averaged over varieties or between varieties when averaged over locations. Just by knowing the variation in yield associated with location and variety separately does not tell us which factor is the more important source of variation; nor do the facts of location and variety exhaust the description of that variation. [A small numeric sketch of this kind of two-way layout, with made-up figures, appears at the end of this entry.]
  •  
    In trying to analyze the natural world, scientists are seldom aware of the degree to which their ideas are influenced both by their way of perceiving the everyday world and by the constraints that our cognitive development puts on our formulations. At every moment of perception of the world around us, we isolate objects as discrete entities with clear boundaries while we relegate the rest to a background in which the objects exist.
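The corn example in the annotations above turns on a point that is easy to show numerically. Below is a minimal sketch in Python, using invented figures (the review's actual Table 1 is not reproduced in this excerpt), of a two-way layout in which variety and location each matter cell by cell, yet neither shows any difference on average, so the usual additive partition of variation assigns everything to the interaction.

```python
# Hypothetical yields (bushels) for two corn varieties in two locations.
# The numbers are invented for illustration; they are not Lewontin's Table 1.
import numpy as np

yields = np.array([[10.0, 20.0],   # variety A in location 1, location 2
                   [20.0, 10.0]])  # variety B in location 1, location 2

grand_mean = yields.mean()              # 15.0
variety_means = yields.mean(axis=1)     # [15., 15.] -> no average difference between varieties
location_means = yields.mean(axis=0)    # [15., 15.] -> no average difference between locations

# Standard two-way decomposition of the total sum of squares (one observation per cell):
total_ss = ((yields - grand_mean) ** 2).sum()                 # 100.0
variety_ss = 2 * ((variety_means - grand_mean) ** 2).sum()    # 0.0
location_ss = 2 * ((location_means - grand_mean) ** 2).sum()  # 0.0
interaction_ss = total_ss - variety_ss - location_ss          # 100.0

print(f"variety SS: {variety_ss}, location SS: {location_ss}, interaction SS: {interaction_ss}")
```

Averaged over locations the varieties look identical, and averaged over varieties the locations look identical, yet every cell differs from its neighbours: all of the variation sits in the interaction, which is exactly the information that a simple "percent due to one factor / percent due to the other" split throws away.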
Weiye Loh

Royal Society launches study on openness in science | Royal Society - 0 views

  • Science as a public enterprise: opening up scientific information will look at how scientific information should best be managed to improve the quality of research and build public trust.
  • “Science has always been about open debate. But incidents such as the UEA email leaks have prompted the Royal Society to look at how open science really is.  With the advent of the Internet, the public now expect a greater degree of transparency. The impact of science on people’s lives, and the implications of scientific assessments for society and the economy are now so great that  people won’t just believe scientists when they say “trust me, I’m an expert.” It is not just scientists who want to be able to see inside scientific datasets, to see how robust they are and ask difficult questions about their implications. Science has to adapt.”
  • The study will look at questions such as: What are the benefits and risks of openly sharing scientific data? How does the rise of the blogosphere change scientific research? What responsibility should scientists, their institutions and the funders of research have for open data? How do we make information more accessible and who will pay to do it? Should privately funded scientists be held to the same standards as those who are publicly funded? How do we balance openness against intellectual property rights and, in the case of medical information, how do we protect patient confidentiality? Will the same rules apply to scientists across the world?
  • ...1 more annotation...
  • “Different scientific disciplines share their information very differently.  The human genome project was incredibly open in how data were shared. But in biomedical science you also have drug trials conducted where no results are made public.” 
Weiye Loh

Don't dumb me down | Science | The Guardian - 0 views

  • Science stories usually fall into three families: wacky stories, scare stories and "breakthrough" stories.
  • these stories are invariably written by the science correspondents, and hotly followed, to universal jubilation, with comment pieces, by humanities graduates, on how bonkers and irrelevant scientists are.
  • A close relative of the wacky story is the paradoxical health story. Every Christmas and Easter, regular as clockwork, you can read that chocolate is good for you (www.badscience.net/?p=67), just like red wine is, and with the same monotonous regularity
  • ...19 more annotations...
  • At the other end of the spectrum, scare stories are - of course - a stalwart of media science. Based on minimal evidence and expanded with poor understanding of its significance, they help perform the most crucial function for the media, which is selling you, the reader, to their advertisers. The MMR disaster was a fantasy entirely of the media's making (www.badscience.net/?p=23), which failed to go away. In fact the Daily Mail is still publishing hysterical anti-immunisation stories, including one calling the pneumococcus vaccine a "triple jab", presumably because they misunderstood that the meningitis, pneumonia, and septicaemia it protects against are all caused by the same pneumococcus bacteria (www.badscience.net/?p=118).
  • people periodically come up to me and say, isn't it funny how that Wakefield MMR paper turned out to be Bad Science after all? And I say: no. The paper always was and still remains a perfectly good small case series report, but it was systematically misrepresented as being more than that, by media that are incapable of interpreting and reporting scientific data.
  • Once journalists get their teeth into what they think is a scare story, trivial increases in risk are presented, often out of context, but always using one single way of expressing risk, the "relative risk increase", that makes the danger appear disproportionately large (www.badscience.net/?p=8). [A short worked example contrasting relative and absolute risk, with made-up numbers, appears at the end of this list.]
  • The media obsession with "new breakthroughs": a more subtly destructive category of science story. It's quite understandable that newspapers should feel it's their job to write about new stuff. But in the aggregate, these stories sell the idea that science, and indeed the whole empirical world view, is only about tenuous, new, hotly-contested data.
  • Articles about robustly-supported emerging themes and ideas would be more stimulating, of course, than most single experimental results, and these themes are, most people would agree, the real developments in science. But they emerge over months and several bits of evidence, not single rejiggable press releases. Often, a front page science story will emerge from a press release alone, and the formal academic paper may never appear, or appear much later, and then not even show what the press reports claimed it would (www.badscience.net/?p=159).
  • there was an interesting essay in the journal PLoS Medicine, about how most brand new research findings will turn out to be false (www.tinyurl.com/ceq33). It predictably generated a small flurry of ecstatic pieces from humanities graduates in the media, along the lines of science is made-up, self-aggrandising, hegemony-maintaining, transient fad nonsense; and this is the perfect example of the parody hypothesis that we'll see later. Scientists know how to read a paper. That's what they do for a living: read papers, pick them apart, pull out what's good and bad.
  • Scientists never said that tenuous small new findings were important headline news - journalists did.
  • there is no useful information in most science stories. A piece in the Independent on Sunday from January 11 2004 suggested that mail-order Viagra is a rip-off because it does not contain the "correct form" of the drug. I don't use the stuff, but there were 1,147 words in that piece. Just tell me: was it a different salt, a different preparation, a different isomer, a related molecule, a completely different drug? No idea. No room for that one bit of information.
  • Remember all those stories about the danger of mobile phones? I was on holiday at the time, and not looking things up obsessively on PubMed; but off in the sunshine I must have read 15 newspaper articles on the subject. Not one told me what the experiment flagging up the danger was. What was the exposure, the measured outcome, was it human or animal data? Figures? Anything? Nothing. I've never bothered to look it up for myself, and so I'm still as much in the dark as you.
  • Because papers think you won't understand the "science bit", all stories involving science must be dumbed down, leaving pieces without enough content to stimulate the only people who are actually going to read them - that is, the people who know a bit about science.
  • Compare this with the book review section, in any newspaper. The more obscure references to Russian novelists and French philosophers you can bang in, the better writer everyone thinks you are. Nobody dumbs down the finance pages.
  • Statistics are what causes the most fear for reporters, and so they are usually just edited out, with interesting consequences. Because science isn't about something being true or not true: that's a humanities graduate parody. It's about the error bar, statistical significance, it's about how reliable and valid the experiment was, it's about coming to a verdict, about a hypothesis, on the back of lots of bits of evidence.
  • science journalists somehow don't understand the difference between the evidence and the hypothesis. The Times's health editor Nigel Hawkes recently covered an experiment which showed that having younger siblings was associated with a lower incidence of multiple sclerosis. MS is caused by the immune system turning on the body. "This is more likely to happen if a child at a key stage of development is not exposed to infections from younger siblings, says the study." That's what Hawkes said. Wrong! That's the "Hygiene Hypothesis", that's not what the study showed: the study just found that having younger siblings seemed to be somewhat protective against MS: it didn't say, couldn't say, what the mechanism was, like whether it happened through greater exposure to infections. He confused evidence with hypothesis (www.badscience.net/?p=112), and he is a "science communicator".
  • how do the media work around their inability to deliver scientific evidence? They use authority figures, the very antithesis of what science is about, as if they were priests, or politicians, or parent figures. "Scientists today said ... scientists revealed ... scientists warned." And if they want balance, you'll get two scientists disagreeing, although with no explanation of why (an approach at its most dangerous with the myth that scientists were "divided" over the safety of MMR). One scientist will "reveal" something, and then another will "challenge" it
  • The danger of authority figure coverage, in the absence of real evidence, is that it leaves the field wide open for questionable authority figures to waltz in. Gillian McKeith, Andrew Wakefield, Kevin Warwick and the rest can all get a whole lot further, in an environment where their authority is taken as read, because their reasoning and evidence is rarely publicly examined.
  • it also reinforces the humanities graduate journalists' parody of science, for which we now have all the ingredients: science is about groundless, incomprehensible, didactic truth statements from scientists, who themselves are socially powerful, arbitrary, unelected authority figures. They are detached from reality: they do work that is either wacky, or dangerous, but either way, everything in science is tenuous, contradictory and, most ridiculously, "hard to understand".
  • This misrepresentation of science is a direct descendant of the reaction, in the Romantic movement, against the birth of science and empiricism more than 200 years ago; it's exactly the same paranoid fantasy as Mary Shelley's Frankenstein, only not as well written. We say descendant, but of course, the humanities haven't really moved forward at all, except to invent cultural relativism, which exists largely as a pooh-pooh reaction against science. And humanities graduates in the media, who suspect themselves to be intellectuals, desperately need to reinforce the idea that science is nonsense: because they've denied themselves access to the most significant developments in the history of western thought for 200 years, and secretly, deep down, they're angry with themselves over that.
  • had a good spirited row with an eminent science journalist, who kept telling me that scientists needed to face up to the fact that they had to get better at communicating to a lay audience. She is a humanities graduate. "Since you describe yourself as a science communicator," I would invariably say, to the sound of derisory laughter: "isn't that your job?" But no, for there is a popular and grand idea about, that scientific ignorance is a useful tool: if even they can understand it, they think to themselves, the reader will. What kind of a communicator does that make you?
  • Science is done by scientists, who write it up. Then a press release is written by a non-scientist, who runs it by their non-scientist boss, who then sends it to journalists without a science education who try to convey difficult new ideas to an audience of either lay people, or more likely - since they'll be the ones interested in reading the stuff - people who know their way around a t-test a lot better than any of these intermediaries. Finally, it's edited by a whole team of people who don't understand it. You can be sure that at least one person in any given "science communication" chain is just juggling words about on a page, without having the first clue what they mean, pretending they've got a proper job, their pens all lined up neatly on the desk.
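On the "relative risk increase" point earlier in this list: here is a minimal sketch with made-up numbers (not taken from any of the stories discussed above) showing how the same change in risk sounds alarming when expressed relatively and trivial when expressed absolutely.

```python
# Hypothetical risk figures, chosen only to illustrate the framing effect.
baseline_risk = 2 / 10_000   # 2 cases per 10,000 people without the exposure
exposed_risk  = 3 / 10_000   # 3 cases per 10,000 people with the exposure

relative_increase = (exposed_risk - baseline_risk) / baseline_risk  # 0.5 -> "risk up 50%"
absolute_increase = exposed_risk - baseline_risk                    # 0.0001 -> 1 extra case per 10,000

print(f"relative risk increase: {relative_increase:.0%}")
print(f"absolute risk increase: {absolute_increase * 10_000:.0f} extra case per 10,000 people")
```

A headline built on the first number reads as a 50 per cent jump in danger; one built on the second reads as one extra case in ten thousand. Both describe the same data, which is the asymmetry the annotation complains about.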