New Media Ethics 2009 course / Group items tagged: University

Weiye Loh

Understanding the universe: Order of creation | The Economist - 0 views

  • In their “The Grand Design”, the authors discuss “M-theory”, a composite of various versions of cosmological “string” theory that was developed in the mid-1990s, and announce that, if it is confirmed by observation, “we will have found the grand design.” Yet this is another tease. Despite much talk of the universe appearing to be “fine-tuned” for human existence, the authors do not in fact think that it was in any sense designed. And once more we are told that we are on the brink of understanding everything.
  • The authors rather fancy themselves as philosophers, though they would presumably balk at the description, since they confidently assert on their first page that “philosophy is dead.” It is, allegedly, now the exclusive right of scientists to answer the three fundamental why-questions with which the authors purport to deal in their book. Why is there something rather than nothing? Why do we exist? And why this particular set of laws and not some other?
  • It is hard to evaluate their case against recent philosophy, because the only subsequent mention of it, after the announcement of its death, is, rather oddly, an approving reference to a philosopher’s analysis of the concept of a law of nature, which, they say, “is a more subtle question than one may at first think.” There are actually rather a lot of questions that are more subtle than the authors think. It soon becomes evident that Professor Hawking and Mr Mlodinow regard a philosophical problem as something you knock off over a quick cup of tea after you have run out of Sudoku puzzles.
  • The main novelty in “The Grand Design” is the authors’ application of a way of interpreting quantum mechanics, derived from the ideas of the late Richard Feynman, to the universe as a whole. According to this way of thinking, “the universe does not have just a single existence or history, but rather every possible version of the universe exists simultaneously.” The authors also assert that the world’s past did not unfold of its own accord, but that “we create history by our observation, rather than history creating us.” They say that these surprising ideas have passed every experimental test to which they have been put, but that is misleading in a way that is unfortunately typical of the authors. It is the bare bones of quantum mechanics that have proved to be consistent with what is presently known of the subatomic world. The authors’ interpretations and extrapolations of it have not been subjected to any decisive tests, and it is not clear that they ever could be.
  • Once upon a time it was the province of philosophy to propose ambitious and outlandish theories in advance of any concrete evidence for them. Perhaps science, as Professor Hawking and Mr Mlodinow practice it in their airier moments, has indeed changed places with philosophy, though probably not quite in the way that they think.
    Order of creation: Even Stephen Hawking doesn't quite manage to explain why we are here
Weiye Loh

Approaching the cliffs of time - Plane Talking - 0 views

  • have you noticed how the capacity of the media to explain in lay terms such matters as quantum physics, or cosmology, is contracting faster than the universe is expanding? The more mind warping the discoveries the less opportunity there is to fit them into 30 seconds in a news cast, or 300 words in print.
  • There has been a long running conspiracy of convenience between science reporters and the science being reported to leave out inconvenient time and space consuming explanations, and go for the punch line that best suits the use of the media to lobby for more project funding.
  • Almost every space story I have written over 50 years has been about projects claiming to ‘discover the origins of the solar system/life on earth/life on Mars/discover the origins of the universe, or recover parts of things like comets because they are as old as the sun, except that we have discovered they aren’t ancient at all.’ None of them were ever designed to achieve those goals. They were brilliant projects, brilliantly misrepresented by the scientists and the reporters because an accurate story would have been incomprehensible to 99.9% of readers or viewers.
  • this push to abbreviate and banalify the more esoteric but truly intriguing mysteries of the universe has lurched close to parody yet failed to be as thoughtfully funny as Douglas Adams was with the Hitchhiker’s Guide to the Galaxy
  • Our most powerful telescopes are approaching what Columbia physicist and mathematician Brian Greene recently called the cliffs of time,  beyond which an infinitely large yet progressively emptier universe lies forever invisible to us and vice versa, since to that universe, we also lie beyond the cliffs of time. This capturing of images from the start of time is being done by finding incredibly faint and old light using computing power and forensic techniques not even devised when Hubble was assembled on earth. In this instance Hubble has found the faint image of an object that emitted light a mere 480 million years after the ‘big bang’ 13.7 billion years ago. It is, thus, nearly as old as time itself.
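To put the article's figures together (simple arithmetic, not a claim made in the piece): light emitted 480 million years after a big bang dated to 13.7 billion years ago has been travelling for roughly 13.7 - 0.48 ≈ 13.2 billion years, which is why the image is described as nearly as old as time itself.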
  • The conspiracy of over simplification has until now kept the really gnarly principles involved in big bang theory out of the general media because nothing short of a first class degree in theoretical and practical physics is going to suffice for a reasonable overview. Plus a 100,000 word article with a few thousand diagrams.
Weiye Loh

Rationally Speaking: Is modern moral philosophy still in thrall to religion? - 0 views

  • Recently I re-read Richard Taylor’s An Introduction to Virtue Ethics, a classic published by Prometheus
  • Taylor compares virtue ethics to the other two major approaches to moral philosophy: utilitarianism (a la John Stuart Mill) and deontology (a la Immanuel Kant). Utilitarianism, of course, is roughly the idea that ethics has to do with maximizing pleasure and minimizing pain; deontology is the idea that reason can tell us what we ought to do from first principles, as in Kant’s categorical imperative (e.g., something is right if you can agree that it could be elevated to a universally acceptable maxim).
  • Taylor argues that utilitarianism and deontology — despite being wildly different in a variety of respects — share one common feature: both philosophies assume that there is such a thing as moral right and wrong, and a duty to do right and avoid wrong. But, he says, on the face of it this is nonsensical. Duty isn’t something one can have in the abstract, duty is toward a law or a lawgiver, which begs the question of what could arguably provide us with a universal moral law, or who the lawgiver could possibly be.
  • His answer is that both utilitarianism and deontology inherited the ideas of right, wrong and duty from Christianity, but endeavored to do without Christianity’s own answers to those questions: the law is given by God and the duty is toward Him. Taylor says that Mill, Kant and the like simply absorbed the Christian concept of morality while rejecting its logical foundation (such as it was). As a result, utilitarians and deontologists alike keep talking about the right thing to do, or the good, as if those concepts still make sense once we move to a secular worldview. Utilitarians substituted pain and pleasure for wrong and right respectively, and Kant thought that pure reason can arrive at moral universals. But of course neither utilitarians nor deontologists ever give us a reason why it would be irrational to simply decline to pursue actions that increase global pleasure and diminish global pain, or why it would be irrational for someone not to find the categorical imperative particularly compelling.
  • The situation — again according to Taylor — is dramatically different for virtue ethics. Yes, there too we find concepts like right and wrong and duty. But, for the ancient Greeks they had completely different meanings, which made perfect sense then and now, if we are not misled by the use of those words in a different context. For the Greeks, an action was right if it was approved by one’s society, wrong if it wasn’t, and duty was to one’s polis. And they understood perfectly well that what was right (or wrong) in Athens may or may not be right (or wrong) in Sparta. And that an Athenian had a duty to Athens, but not to Sparta, and vice versa for a Spartan.
  • But wait a minute. Does that mean that Taylor is saying that virtue ethics was founded on moral relativism? That would be an extraordinary claim indeed, and he does not, in fact, make it. His point is a bit more subtle. He suggests that for the ancient Greeks ethics was not (principally) about right, wrong and duty. It was about happiness, understood in the broad sense of eudaimonia, the good or fulfilling life. Aristotle in particular wrote in his Ethics about both aspects: the practical ethics of one’s duty to one’s polis, and the universal (for human beings) concept of ethics as the pursuit of the good life. And make no mistake about it: for Aristotle the first aspect was relatively trivial and understood by everyone; it was the second one that represented the real challenge for the philosopher.
  • For instance, the Ethics is famous for Aristotle’s list of the virtues (see table below), and his idea that the right thing to do is to steer a middle course between extreme behaviors. But this part of his work, according to Taylor, refers only to the practical ways of being a good Athenian, not to the universal pursuit of eudaimonia.

    Vice of Deficiency      Virtuous Mean       Vice of Excess
    Cowardice               Courage             Rashness
    Insensibility           Temperance          Intemperance
    Illiberality            Liberality          Prodigality
    Pettiness               Munificence         Vulgarity
    Humble-mindedness       High-mindedness     Vaingloriness
    Want of Ambition        Right Ambition      Over-ambition
    Spiritlessness          Good Temper         Irascibility
    Surliness               Friendly Civility   Obsequiousness
    Ironical Depreciation   Sincerity           Boastfulness
    Boorishness             Wittiness           Buffoonery
  • How, then, is one to embark on the more difficult task of figuring out how to live a good life? For Aristotle eudaimonia meant the best kind of existence that a human being can achieve, which in turn means that we need to ask what it is that makes humans different from all other species, because it is the pursuit of excellence in that something that provides for a eudaimonic life.
  • Now, Plato - writing before Aristotle - ended up construing the good life somewhat narrowly and in a self-serving fashion. He reckoned that the thing that distinguishes humanity from the rest of the biological world is our ability to use reason, so that is what we should be pursuing as our highest goal in life. And of course nobody is better equipped than a philosopher for such an enterprise... Which reminds me of Bertrand Russell’s quip that “A process which led from the amoeba to man appeared to the philosophers to be obviously a progress, though whether the amoeba would agree with this opinion is not known.”
  • But Aristotle's conception of "reason" was significantly broader, and here is where Taylor’s own update of virtue ethics begins to shine, particularly in Chapter 16 of the book, aptly entitled “Happiness.” Taylor argues that the proper way to understand virtue ethics is as the quest for the use of intelligence in the broadest possible sense, in the sense of creativity applied to all walks of life. He says: “Creative intelligence is exhibited by a dancer, by athletes, by a chess player, and indeed in virtually any activity guided by intelligence [including — but certainly not limited to — philosophy].” He continues: “The exercise of skill in a profession, or in business, or even in such things as gardening and farming, or the rearing of a beautiful family, all such things are displays of creative intelligence.”
  • what we have now is a sharp distinction between utilitarianism and deontology on the one hand and virtue ethics on the other, where the first two are (mistakenly, in Taylor’s assessment) concerned with the impossible question of what is right or wrong, and what our duties are — questions inherited from religion but that in fact make no sense outside of a religious framework. Virtue ethics, instead, focuses on the two things that really matter and to which we can find answers: the practical pursuit of a life within our polis, and the lifelong quest of eudaimonia understood as the best exercise of our creative faculties
  • > So if one's profession is that of assassin or torturer would being the best that you can be still be your duty and eudaimonic? And what about those poor blighters who end up with an ugly family? < Aristotle's philosophy is very much concerned with virtue, and being an assassin or a torturer is not a virtue, so the concept of a eudaimonic life for those characters is oxymoronic. As for ending up in an "ugly" family, Aristotle did write that eudaimonia is in part the result of luck, because it is affected by circumstances.
  • > So to the title question of this post: "Is modern moral philosophy still in thrall to religion?" one should say: Yes, for some residual forms of philosophy and for some philosophers < That misses Taylor's contention - which I find intriguing, though I have to give it more thought - that *all* modern moral philosophy, except virtue ethics, is in thrall to religion, without realizing it.
juliet huang

Go slow with Net law - 4 views

Article: Go slow with tech law. Published: 23 Aug 2009. Source: Straits Times. Background: When Singapore signed a free trade agreement with the USA in 2003, intellectual property rights were a ...

sim lim square

started by juliet huang on 26 Aug 09 no follow-up yet
Weiye Loh

The overblown crisis in American education : The New Yorker - 0 views

  • it’s odd that a narrative of crisis, of a systemic failure, in American education is currently so persuasive. This back-to-school season, we have Davis Guggenheim’s documentary about the charter-school movement, “Waiting for ‘Superman’”; two short, dyspeptic books about colleges and universities, “Higher Education?,” by Andrew Hacker and Claudia Dreifus, and “Crisis on Campus,” by Mark C. Taylor; and a lot of positive attention to the school-reform movement in the national press. From any of these sources, it would be difficult to reach the conclusion that, over all, the American education system works quite well.
  • In higher education, the reform story isn’t so fully baked yet, but its main elements are emerging. The system is vast: hundreds of small liberal-arts colleges; a new and highly leveraged for-profit sector that offers degrees online; community colleges; state universities whose budgets are being cut because of the recession; and the big-name private universities, which get the most attention. You wouldn’t design a system this way—it’s filled with overlaps and competitive excess. Much of it strives toward an ideal that took shape in nineteenth-century Germany: the university as a small, élite center of pure scholarly research. Research is the rationale for low teaching loads, publication requirements, tenure, tight-knit academic disciplines, and other practices that take it on the chin from Taylor, Hacker, and Dreifus for being of little benefit to students or society.
  • Yet for a system that—according to Taylor, especially—is deeply in crisis, American higher education is not doing badly. The lines of people wanting to get into institutions that the authors say are just waiting to cheat them by overcharging and underteaching grow ever longer and more international, and the people waiting in those lines don’t seem deterred by price increases, even in a terrible recession.
  • There have been attempts in the past to make the system more rational and less redundant, and to shrink the portion of it that undertakes scholarly research, but they have not met with much success, and not just because of bureaucratic resistance by the interested parties. Large-scale, decentralized democratic societies are not very adept at generating neat, rational solutions to messy situations. The story line on education, at this ill-tempered moment in American life, expresses what might be called the Noah’s Ark view of life: a vast territory looks so impossibly corrupted that it must be washed away, so that we can begin its activities anew, on finer, higher, firmer principles. One should treat any perception that something so large is so completely awry with suspicion, and consider that it might not be true—especially before acting on it.
    Mass higher education is one of the great achievements of American democracy. It embodies a faith in the capabilities of ordinary people that the Founders simply didn't have.
Weiye Loh

Times Higher Education - Unconventional thinkers or recklessly dangerous minds? - 0 views

  • The origin of Aids denialism lies with one man. Peter Duesberg has spent the whole of his academic career at the University of California, Berkeley. In the 1970s he performed groundbreaking work that helped show how mutated genes cause cancer, an insight that earned him a well-deserved international reputation.
  • in the early 1980s, something changed. Duesberg attempted to refute his own theories, claiming that it was not mutated genes but rather environmental toxins that are cancer's true cause. He dismissed the studies of other researchers who had furthered his original work. Then, in 1987, he published a paper that extended his new train of thought to Aids.
  • Initially many scientists were open to Duesberg's ideas. But as evidence linking HIV to Aids mounted - crucially the observation that ARVs brought Aids sufferers who were on the brink of death back to life - the vast majority concluded that the debate was over. Nonetheless, Duesberg persisted with his arguments, and in doing so attracted a cabal of supporters
  • In 1999, denialism secured its highest-profile advocate: Thabo Mbeki, who was then president of South Africa. Having studied denialist literature, Mbeki decided that the consensus on Aids sounded too much like a "biblical absolute truth" that couldn't be questioned. The following year he set up a panel of advisers, nearly half of whom were Aids denialists, including Duesberg. The resultant health policies cut funding for clinics distributing ARVs, withheld donor medication and blocked international aid grants. Meanwhile, Mbeki's health minister, Manto Tshabalala-Msimang, promoted the use of alternative Aids remedies, such as beetroot and garlic.
  • In 2007, Nicoli Nattrass, an economist and director of the Aids and Society Research Unit at the University of Cape Town, estimated that, between 1999 and 2007, Mbeki's Aids denialist policies led to more than 340,000 premature deaths. Later, scientists Max Essex, Pride Chigwedere and other colleagues at the Harvard School of Public Health arrived at a similar figure.
  • "I don't think it's hyperbole to say the (Mbeki regime's) Aids policies do not fall short of a crime against humanity," says Kalichman. "The science behind these medications was irrefutable, and yet they chose to buy into pseudoscience and withhold life-prolonging, if not life-saving, medications from the population. I just don't think there's any question that it should be looked into and investigated."
  • In fairness, there was a reason to have faint doubts about HIV treatment in the early days of Mbeki's rule.
  • some individual cases had raised questions about their reliability on mass rollout. In 2002, for example, Sarah Hlalele, a South African HIV patient and activist from a settlement background, died from "lactic acidosis", a side-effect of her drugs combination. Today doctors know enough about mixing ARVs not to make the same mistake, but at the time her death terrified the medical community.
  • any trial would be futile because of the uncertainties over ARVs that existed during Mbeki's tenure and the fact that others in Mbeki's government went along with his views (although they have since renounced them). "Mbeki was wrong, but propositions we had established then weren't as incontestably established as they are now ... So I think these calls (for genocide charges or criminal trials) are misguided, and I think they're a sideshow, and I don't support them."
  • Regardless of the culpability of politicians, the question remains whether scientists themselves should be allowed to promote views that go wildly against the mainstream consensus. The history of science is littered with offbeat ideas that were ridiculed by the scientific communities of the time. Most of these ideas missed the textbooks and went straight into the waste-paper basket, but a few - continental drift, the germ basis of disease or the Earth's orbit around the Sun, for instance - ultimately proved to be worth more than the paper they were written on. In science, many would argue, freedom of expression is too important to throw away.
  • Such an issue is engulfing the Elsevier journal Medical Hypotheses. Last year the journal, which is not peer reviewed, published a paper by Duesberg and others claiming that the South African Aids death-toll estimates were inflated, while reiterating the argument that there is "no proof that HIV causes Aids". That prompted several Aids scientists to complain to Elsevier, which responded by retracting the paper and asking the journal's editor, Bruce Charlton, to implement a system of peer review. Having refused to change the editorial policy, Charlton faces the sack
  • There are people who would like the journal to keep its current format and continue accepting controversial papers, but for Aids scientists, Duesberg's paper was a step too far. Although it was deleted from both the journal's website and the Medline database, its existence elsewhere on the internet drove Chigwedere and Essex to publish a peer-reviewed rebuttal earlier this year in AIDS and Behavior, lest any readers be "hoodwinked" into thinking there was genuine debate about the causes of Aids.
  • Duesberg believes he is being "censored", although he has found other outlets. In 1991, he helped form "The Group for the Scientific Reappraisal of the HIV/Aids Hypothesis" - now called Rethinking Aids, or simply The Group - to publicise denialist information. Backed by his Berkeley credentials, he regularly promotes his views in media articles and films. Meanwhile, his closest collaborator, David Rasnick, tells "anyone who asks" that "HIV drugs do more harm than good".
  • "Is academic freedom such a precious concept that scientists can hide behind it while betraying the public so blatantly?" asked John Moore, an Aids scientist at Cornell University, on a South African health news website last year. Moore suggested that universities could put in place a "post-tenure review" system to ensure that their researchers act within accepted bounds of scientific practice. "When the facts are so solidly against views that kill people, there must be a price to pay," he added.
  • Now it seems Duesberg may have to pay that price since it emerged last month that his withdrawn paper has led to an investigation at Berkeley for misconduct. Yet for many in the field, chasing fellow scientists comes second to dealing with the Aids pandemic.
    6 May 2010. Aids denialism is estimated to have killed many thousands. Jon Cartwright asks if scientists should be held accountable, while overleaf Bruce Charlton defends his decision to publish the work of an Aids sceptic, which sparked a row that has led to his being sacked and his journal abandoning its raison d'etre: presenting controversial ideas for scientific debate.
Weiye Loh

The Decline Effect and the Scientific Method : The New Yorker - 0 views

  • On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties.
  • the therapeutic power of the drugs appeared to be steadily waning. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineteen-nineties. Many researchers began to argue that the expensive pharmaceuticals weren’t any better than first-generation antipsychotics, which have been in use since the fifties. “In fact, sometimes they now look even worse,” John Davis, a professor of psychiatry at the University of Illinois at Chicago, told me.
  • Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.
  • But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
  • In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli. “Habituation is why you don’t notice the stuff that’s always there,” Schooler says. “It’s an inevitable process of adjustment, a ratcheting down of excitement. I started joking that it was like the cosmos was habituating to my ideas. I took it very personally.”
  • At first, he assumed that he’d made an error in experimental design or a statistical miscalculation. But he couldn’t find anything wrong with his research. He then concluded that his initial batch of research subjects must have been unusually susceptible to verbal overshadowing. (John Davis, similarly, has speculated that part of the drop-off in the effectiveness of antipsychotics can be attributed to using subjects who suffer from milder forms of psychosis which are less likely to show dramatic improvement.) “It wasn’t a very satisfying explanation,” Schooler says. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”
  • the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
  • The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out. The extrasensory powers of Schooler’s subjects didn’t decline—they were simply an illusion that vanished over time. And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid—that is, they contain enough data that any regression to the mean shouldn’t be dramatic. “These are the results that pass all the tests,” he says. “The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!”
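A minimal simulation (Python, numbers invented for illustration) of the regression-to-the-mean account sketched above: if only the most striking initial findings are followed up, their replications drift back toward the true effect, producing an apparent decline even though the phenomenon never changed.

```python
import random

random.seed(42)

TRUE_EFFECT = 0.1   # assumed small real effect
NOISE = 1.0         # per-study sampling error
N_STUDIES = 1000

def run_study():
    """Return one noisy estimate of the true effect."""
    return random.gauss(TRUE_EFFECT, NOISE)

initial = [run_study() for _ in range(N_STUDIES)]

# Follow up only the most striking 5% of initial findings.
threshold = sorted(initial)[int(0.95 * N_STUDIES)]
followed_up = [e for e in initial if e >= threshold]
replications = [run_study() for _ in followed_up]

print(f"mean of selected initial findings: {sum(followed_up) / len(followed_up):.2f}")
print(f"mean of their replications:        {sum(replications) / len(replications):.2f}")
# The replications cluster near TRUE_EFFECT, far below the selected
# initial estimates: an apparent "decline" with no change in the effect.
```

The toy model also shows why Schooler's puzzle is real: selection on flukes explains declines from inflated starting points, but not declines in results that were statistically solid to begin with.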
  • this is why Schooler believes that the decline effect deserves more attention: its ubiquity seems to violate the laws of statistics. “Whenever I start talking about this, scientists get very nervous,” he says. “But I still want to know what happened to my results. Like most scientists, I assumed that it would get easier to document my effect over time. I’d get better at doing the experiments, at zeroing in on the conditions that produce verbal overshadowing. So why did the opposite happen? I’m convinced that we can use the tools of science to figure this out. First, though, we have to admit that we’ve got a problem.”
  • In 2001, Michael Jennions, a biologist at the Australian National University, set out to analyze “temporal trends” across a wide range of subjects in ecology and evolutionary biology. He looked at hundreds of papers and forty-four meta-analyses (that is, statistical syntheses of related studies), and discovered a consistent decline effect over time, as many of the theories seemed to fade into irrelevance. In fact, even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication. Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. “You know, we’re supposed to be dealing with hard facts, the stuff that’s supposed to stand the test of time. But when you see these trends you become a little more skeptical of things.”
  • the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.
  • the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.
  • Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for. A “significant” result is defined as any data point that would be produced by chance less than five per cent of the time. This ubiquitous test was invented in 1922 by the English mathematician Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments. In recent years, publication bias has mostly been seen as a problem for clinical trials, since pharmaceutical companies are less interested in publishing results that aren’t favorable. But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology.
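A short sketch (Python with NumPy/SciPy, parameters invented for illustration) of Fisher's five-per-cent convention and the distortion Sterling identified: even when no effect exists at all, roughly one study in twenty crosses p < 0.05, so a literature that prints only "significant" results can be built entirely of flukes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N_STUDIES, N_SUBJECTS = 2000, 30

significant = 0
for _ in range(N_STUDIES):
    # Both groups are drawn from the same distribution: the null is true.
    control = rng.normal(0.0, 1.0, N_SUBJECTS)
    treatment = rng.normal(0.0, 1.0, N_SUBJECTS)
    _, p = stats.ttest_ind(control, treatment)
    if p < 0.05:
        significant += 1  # the kind of result journals prefer to print

print(f"{significant / N_STUDIES:.1%} of null studies came out 'significant'")
# Expect a figure near 5%: every one a false positive, yet each would
# look like a publishable finding on its own.
```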
  • While publication bias almost certainly plays a role in the decline effect, it remains an incomplete explanation. For one thing, it fails to account for the initial prevalence of positive results among studies that never even get submitted to journals. It also fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts
  • an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. Palmer’s most convincing evidence relies on a statistical tool known as a funnel graph. When a large number of studies have been done on a single subject, the data should follow a pattern: studies with a large sample size should all cluster around a common value—the true result—whereas those with a smaller sample size should exhibit a random scattering, since they’re subject to greater sampling error. This pattern gives the graph its name, since the distribution resembles a funnel.
  • The funnel graph visually captures the distortions of selective reporting. For instance, after Palmer plotted every study of fluctuating asymmetry, he noticed that the distribution of results with smaller sample sizes wasn’t random at all but instead skewed heavily toward positive results.
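A toy funnel graph (Python/matplotlib, simulated data; the censoring rule is invented for illustration): unbiased studies scatter symmetrically around the true effect, with spread shrinking as sample size grows, while dropping unimpressive small studies leaves the positive skew Palmer describes.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
TRUE_EFFECT = 0.2

sizes = rng.integers(10, 500, 400)                       # per-study sample sizes
estimates = rng.normal(TRUE_EFFECT, 1 / np.sqrt(sizes))  # standard error ~ 1/sqrt(n)

# Selective reporting: small studies survive only if strikingly positive.
reported = (sizes > 100) | (estimates > TRUE_EFFECT)

fig, axes = plt.subplots(1, 2, sharex=True, sharey=True)
axes[0].scatter(estimates, sizes, s=8)
axes[0].set_title("All studies: symmetric funnel")
axes[1].scatter(estimates[reported], sizes[reported], s=8)
axes[1].set_title("Published only: skewed funnel")
for ax in axes:
    ax.axvline(TRUE_EFFECT, ls="--", c="gray")
    ax.set_xlabel("effect estimate")
axes[0].set_ylabel("sample size")
plt.show()
```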
  • Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”
  • Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”
  • One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
  • John Ioannidis, an epidemiologist at Stanford University, argues that such distortions are a serious issue in biomedical research. “These exaggerations are why the decline has become so common,” he says. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”
  • In 2005, Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
  • The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in disease risk between men and women. These findings have included everything from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at four hundred and thirty-two of these claims. They quickly discovered that the vast majority had serious flaws. But the most troubling fact emerged when he looked at the test of replication: out of four hundred and thirty-two claims, only a single one was consistently replicable. “This doesn’t mean that none of these claims will turn out to be true,” he says. “But, given that most of them were done badly, I wouldn’t hold my breath.”
  • the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
  • The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends upon it. And that’s why, even after a claim has been systematically disproven”—he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins—“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”
  • scientists need to become more rigorous about data collection before they publish. “We’re wasting too much time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design. He notes that nobody even tries to replicate most science papers—there are simply too many. (According to Nature, a third of all studies never even get cited, let alone repeated.)
  • Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
  • Although such reforms would mitigate the dangers of publication bias and selective reporting, they still wouldn’t erase the decline effect. This is largely because scientific research will always be shadowed by a force that can’t be curbed, only contained: sheer randomness. Although little research has been done on the experimental dangers of chance and happenstance, the research that exists isn’t encouraging
  • John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.
  • The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.
  • The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand. The problem, of course, is that such dramatic findings are also the most likely to get published in prestigious journals, since the data are both statistically significant and entirely unexpected. Grants get written, follow-up studies are conducted. The end result is a scientific accident that can take years to unravel.
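A toy model (Python, numbers invented) of the Crabbe result: add an unobserved per-lab effect on top of ordinary mouse-to-mouse noise, and identical protocols still yield divergent lab means, with the occasional extreme lab looking like a discovery.

```python
import random

random.seed(7)

TRUE_RESPONSE = 650   # hypothetical mean extra centimetres moved on cocaine
LAB_SPREAD = 400      # unobserved lab-to-lab variation
MOUSE_SPREAD = 100    # mouse-to-mouse variation within a lab

def lab_mean(n_mice=20):
    """Mean response in one lab running the standardized protocol."""
    lab_effect = random.gauss(0, LAB_SPREAD)   # invisible, uncontrolled
    mice = [random.gauss(TRUE_RESPONSE + lab_effect, MOUSE_SPREAD)
            for _ in range(n_mice)]
    return sum(mice) / n_mice

for city in ("Portland", "Albany", "Edmonton"):
    print(f"{city}: {lab_mean():7.0f} cm")
# Same protocol, wildly different means: the outlier lab is noise at
# the level of labs, not a property of the mice.
```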
  • This suggests that the decline effect is actually a decline of illusion.
  • While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.
  • Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren’t surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.
Weiye Loh

Talking Philosophy | Ethicists, Courtesy & Morals - 0 views

  • research raises questions about the extent to which studying ethics improves moral behavior. To the extent that practical effect is among one’s aims in studying (or as an administrator, in requiring) philosophy, I think there is reason for concern. I’m inclined to think that either philosophy should be justified differently, or we should work harder to try to figure out whether there is a *way* of studying philosophy that is more effective in changing moral behavior than the ordinary (21st century, Anglophone) way of studying philosophy is.”
  • I think it’s fairly common that professionals in any field are skeptical about it. Professional politicians are much more skeptical or even cynical about politics than your average informed citizen. Most of the doctors whom I’ve talked to off the record are fairly skeptical about the merits of medical care. Those who specialize in giving investment “advice” will generally admit that they have no idea about the future of markets with the inevitable comment: “if I really knew how the market will react, I’d be on my yacht, not advising you”.
    For all their pondering on matters moral, ethicists are no better mannered than other philosophers, and they behave no better morally than other philosophers or other academics either. Or such, at least, are the conclusions suggested by the research of philosophers Eric Schwitzgebel (at the University of California, Riverside) and Joshua Rust (of Stetson University, Florida). In "Ethicists' Courtesy at Philosophy Conferences", recently published in Philosophical Psychology, Schwitzgebel and Rust report on a study suggesting that audiences in ethics sessions do not behave any better than those attending seminars on other areas of philosophy: not when it comes to talking audibly whilst a speaker is addressing the room, and not when it comes to allowing the door to slam shut while entering or exiting mid-session. And though, appropriately enough, "audiences in environmental ethics sessions … appear to leave behind less trash", generally speaking the ethicists are just as likely to leave a mess as the epistemologists and metaphysicians.
Weiye Loh

TODAYonline | World | Off-the-shelf body parts? - 0 views

  • LONDON - Scientific advances, including techniques allowing patients to grow new joints inside their own bodies, will allow the elderly to remain active well beyond their 100th birthdays, researchers claim. British scientists are working on a system which should allow the elderly to buy body parts "off the shelf" and even regenerate their own damaged joints and hearts. Their ultimate aim is to fix up the body with customised replacement parts grown to order. They have already carried out human trials on heart valves which are still working four years after they were transplanted.

    The University of Leeds, Britain's biggest bioengineering unit and the world leader in artificial joint replacement research, is coordinating a project that aims to give people 50 active years after the age of 50. "It is the rise of the bionic pensioner," said Professor Christina Doyle, whose company is working with the university to develop the new technologies. "The idea is when something wears out, your surgeon can buy a replacement off the shelf or, more accurately, in a bag." The university is spending £50 million ($114 million) over the next five years on the new project.

    The main thrust of the research centres on a method of tissue and medical engineering which the university is at the forefront of developing. Led by the immunologist Professor Eileen Ingham, they are pioneering a technique of stripping the living cells from donor human and animal parts, leaving just the collagen or elastin "scaffold" of the tissue. These "biological shells", which could be for knee, ankle or hip ligaments, as well as blood vessels and heart valves, are then transplanted into the patient, whose own body then invades them, replacing the removed cells with their own. The technique, which could be available within five years, effectively removes the need for anti-rejection drugs. It is similar to the recently developed system of using stem cells to regrow organs outside the body, but costs about a tenth of the price.
Weiye Loh

Rationally Speaking: Ray Kurzweil and the Singularity: visionary genius or pseudoscient... - 0 views

  • I will focus on a single detailed essay he wrote entitled “Superintelligence and Singularity,” which was originally published as chapter 1 of his The Singularity is Near (Viking 2005), and has been reprinted in an otherwise insightful collection edited by Susan Schneider, Science Fiction and Philosophy.
  • Kurzweil begins by telling us that he gradually became aware of the coming Singularity, in a process that, somewhat peculiarly, he describes as a “progressive awakening” — a phrase with decidedly religious overtones. He defines the Singularity as “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.” Well, by that definition, we have been through several “singularities” already, as technology has often rapidly and irreversibly transformed our lives.
  • The major piece of evidence for Singularitarianism is what “I [Kurzweil] have called the law of accelerating returns (the inherent acceleration of the rate of evolution, with technological evolution as a continuation of biological evolution).”
  • the first obvious serious objection is that technological “evolution” is in no logical way a continuation of biological evolution — the word “evolution” here being applied with completely different meanings. And besides, there is no scientifically sensible way in which biological evolution has been accelerating over the several billion years of its operation on our planet. So much for scientific accuracy and logical consistency.
  • here is a bit that will give you an idea of why some people think of Singularitarianism as a secular religion: “The Singularity will allow us to transcend [the] limitations of our biological bodies and brains. We will gain power over our fates. Our mortality will be in our own hands. We will be able to live as long as we want.”
  • Fig. 2 of that essay shows a progression through (again, entirely arbitrary) six “epochs,” with the next one (#5) occurring when there will be a merger between technological and human intelligence (somehow, a good thing), and the last one (#6) labeled as nothing less than “the universe wakes up” — a nonsensical outcome further described as “patterns of matter and energy in the universe becom[ing] saturated with intelligence processes and knowledge.” This isn’t just science fiction, it is bad science fiction.
  • “a serious assessment of the history of technology reveals that technological change is exponential. Exponential growth is a feature of any evolutionary process.” First, it is highly questionable that one can even measure “technological change” on a coherent uniform scale. Yes, we can plot the rate of, say, increase in microprocessor speed, but that is but one aspect of “technological change.” As for the idea that any evolutionary process features exponential growth, I don’t know where Kurzweil got it, but it is simply wrong, for one thing because biological evolution does not have any such feature — as any student of Biology 101 ought to know.
  • Kurzweil’s ignorance of evolution is manifested again a bit later, when he claims — without argument, as usual — that “Evolution is a process of creating patterns of increasing order. ... It’s the evolution of patterns that constitutes the ultimate story of the world. ... Each stage or epoch uses the information-processing methods of the previous epoch to create the next.” I swear, I was fully expecting a scholarly reference to Deepak Chopra at the end of that sentence. Again, “evolution” is a highly heterogeneous term that picks out completely different concepts, such as cosmic “evolution” (actually just change over time), biological evolution (which does have to do with the creation of order, but not in Kurzweil’s blatantly teleological sense), and technological “evolution” (which is certainly yet another type of beast altogether, since it requires intelligent design). And what on earth does it mean that each epoch uses the “methods” of the previous one to “create” the next one?
  • As we have seen, the whole idea is that human beings will merge with machines during the ongoing process of ever accelerating evolution, an event that will eventually lead to the universe awakening to itself, or something like that. Now here is the crucial question: how come this has not happened already?
  • To appreciate the power of this argument you may want to refresh your memory about the Fermi Paradox, a serious (though in that case, not a knockdown) argument against the possibility of extraterrestrial intelligent life. The story goes that physicist Enrico Fermi (the inventor of the first nuclear reactor) was having lunch with some colleagues, back in 1950. His companions were waxing poetic about the possibility, indeed the high likelihood, that the galaxy is teeming with intelligent life forms. To which Fermi asked something along the lines of: “Well, where are they, then?”
  • The idea is that even under very pessimistic (i.e., very un-Kurzweil like) expectations about how quickly an intelligent civilization would spread across the galaxy (without even violating the speed of light limit!), and given the mind boggling length of time the galaxy has already existed, it becomes difficult (though, again, not impossible) to explain why we haven’t seen the darn aliens yet.
  • Now, translate that to Kurzweil’s much more optimistic predictions about the Singularity (which allegedly will occur around 2045, conveniently just a bit after Kurzweil’s expected demise, given that he is 63 at the time of this writing). Considering that there is no particular reason to think that planet earth, or the human species, has to be the one destined to trigger the big event, why is it that the universe hasn’t already “awakened” as a result of a Singularity occurring somewhere else at some other time?
Weiye Loh

The Science of Why We Don't Believe Science | Mother Jones - 0 views

  • "A MAN WITH A CONVICTION is a hard man to change. Tell him you disagree and he turns away. Show him facts or figures and he questions your sources. Appeal to logic and he fails to see your point." So wrote the celebrated Stanford University psychologist Leon Festinger (PDF)
  • How would people so emotionally invested in a belief system react, now that it had been soundly refuted? At first, the group struggled for an explanation. But then rationalization set in. A new message arrived, announcing that they'd all been spared at the last minute. Festinger summarized the extraterrestrials' new pronouncement: "The little group, sitting all night long, had spread so much light that God had saved the world from destruction." Their willingness to believe in the prophecy had saved Earth from the prophecy!
  • This tendency toward so-called "motivated reasoning" helps explain why we find groups so polarized over matters where the evidence is so unequivocal: climate change, vaccines, "death panels," the birthplace and religion of the president (PDF), and much else. It would seem that expecting people to be convinced by the facts flies in the face of, you know, the facts.
  • ...4 more annotations...
  • The theory of motivated reasoning builds on a key insight of modern neuroscience (PDF): Reasoning is actually suffused with emotion (or what researchers often call "affect"). Not only are the two inseparable, but our positive or negative feelings about people, things, and ideas arise much more rapidly than our conscious thoughts, in a matter of milliseconds—fast enough to detect with an EEG device, but long before we're aware of it. That shouldn't be surprising: Evolution required us to react very quickly to stimuli in our environment. It's a "basic human survival skill," explains political scientist Arthur Lupia of the University of Michigan. We push threatening information away; we pull friendly information close. We apply fight-or-flight reflexes not only to predators, but to data itself.
  • We're not driven only by emotions, of course—we also reason, deliberate. But reasoning comes later, works slower—and even then, it doesn't take place in an emotional vacuum. Rather, our quick-fire emotions can set us on a course of thinking that's highly biased, especially on topics we care a great deal about.
  • Consider a person who has heard about a scientific discovery that deeply challenges her belief in divine creation—a new hominid, say, that confirms our evolutionary origins. What happens next, explains political scientist Charles Taber of Stony Brook University, is a subconscious negative response to the new information—and that response, in turn, guides the type of memories and associations formed in the conscious mind. "They retrieve thoughts that are consistent with their previous beliefs," says Taber, "and that will lead them to build an argument and challenge what they're hearing."
  • when we think we're reasoning, we may instead be rationalizing. Or to use an analogy offered by University of Virginia psychologist Jonathan Haidt: We may think we're being scientists, but we're actually being lawyers (PDF). Our "reasoning" is a means to a predetermined end—winning our "case"—and is shot through with biases. They include "confirmation bias," in which we give greater heed to evidence and arguments that bolster our beliefs, and "disconfirmation bias," in which we expend disproportionate energy trying to debunk or refute views and arguments that we find uncongenial.
Weiye Loh

Oxford academic wins right to read UEA climate data | Environment | guardian.co.uk - 0 views

  • Jonathan Jones, physics professor at Oxford University and self-confessed "climate change agnostic", used freedom of information law to demand the data that is the life's work of the head of the University of East Anglia's Climatic Research Unit, Phil Jones. UEA resisted the requests to disclose the data, but this week it was compelled to do so.
  • Graham gave the UEA one month to deliver the data, which includes more than 4m individual thermometer readings taken from 4,000 weather stations over the past 160 years. The commissioner's office said this was his first ruling on demands for climate data made in the wake of the climategate affair.
  • an archive of world temperature records collected jointly with the Met Office.
  • ...3 more annotations...
  • Critics of the UEA's scientists say an independent analysis of the temperature data may reveal that Phil Jones and his colleagues have misinterpreted the evidence of global warming. They may have failed to allow for local temperature influences, such as the growth of cities close to many of the thermometers.
  • when Jonathan Jones and others asked for the data in the summer of 2009, the UEA said legal exemptions applied. It said variously that the temperature data were the property of foreign meteorological offices; were intellectual property that might be valuable if sold to other researchers; and were in any case often publicly available.
  • Jonathan Jones said this week that he took up the cause of data freedom after Steve McIntyre, a Canadian mathematician, had requests for the data turned down. He thought this was an unreasonable response when Phil Jones had already shared the data with academic collaborators, including Prof Peter Webster of the Georgia Institute of Technology in the US. He asked to be given the data already sent to Webster, and was also turned down.
  •  
    An Oxford academic has won the right to read previously secret data on climate change held by the University of East Anglia (UEA). The decision, by the government's information commissioner, Christopher Graham, is being hailed as a landmark ruling that will mean that thousands of British researchers are required to share their data with the public.
Weiye Loh

Digital Domain - Computers at Home - Educational Hope vs. Teenage Reality - NYTimes.com - 0 views

  • MIDDLE SCHOOL students are champion time-wasters. And the personal computer may be the ultimate time-wasting appliance.
  • there is an automatic inclination to think of the machine in its most idealized form, as the Great Equalizer. In developing countries, computers are outfitted with grand educational hopes, like those that animate the One Laptop Per Child initiative, which was examined in this space in April.
  • Economists are trying to measure a home computer’s educational impact on schoolchildren in low-income households. Taking widely varying routes, they are arriving at similar conclusions: little or no educational benefit is found. Worse, computers seem to have further separated children in low-income households, whose test scores often decline after the machine arrives, from their more privileged counterparts.
  • ...5 more annotations...
  • Professor Malamud and his collaborator, Cristian Pop-Eleches, an assistant professor of economics at Columbia University, did their field work in Romania in 2009, where the government invited low-income families to apply for vouchers worth 200 euros (then about $300) that could be used for buying a home computer. The program provided a control group: the families who applied but did not receive a voucher.
  • the professors report finding “strong evidence that children in households who won a voucher received significantly lower school grades in math, English and Romanian.” The principal positive effect on the students was improved computer skills.
  • few children whose families obtained computers said they used the machines for homework. What they were used for — daily — was playing games.
  • negative effect on test scores was not universal, but was largely confined to lower-income households, in which, the authors hypothesized, parental supervision might be spottier, giving students greater opportunity to use the computer for entertainment unrelated to homework and reducing the amount of time spent studying.
  • The North Carolina study suggests the disconcerting possibility that home computers and Internet access have such a negative effect only on some groups and end up widening achievement gaps between socioeconomic groups. The expansion of broadband service was associated with a pronounced drop in test scores for black students in both reading and math, but no effect on the math scores and little on the reading scores of other students.
  •  
    Computers at Home: Educational Hope vs. Teenage Reality By RANDALL STROSS Published: July 9, 2010
Weiye Loh

Op-Ed Columnist - The Moral Naturalists - NYTimes.com - 0 views

  • Moral naturalists, on the other hand, believe that we have moral sentiments that have emerged from a long history of relationships. To learn about morality, you don’t rely upon revelation or metaphysics; you observe people as they live.
  • By the time humans came around, evolution had forged a pretty firm foundation for a moral sense. Jonathan Haidt of the University of Virginia argues that this moral sense is like our sense of taste. We have natural receptors that help us pick up sweetness and saltiness. In the same way, we have natural receptors that help us recognize fairness and cruelty. Just as a few universal tastes can grow into many different cuisines, a few moral senses can grow into many different moral cultures.
  • Paul Bloom of Yale noted that this moral sense can be observed early in life. Bloom and his colleagues conducted an experiment in which they showed babies a scene featuring one figure struggling to climb a hill, another figure trying to help it, and a third trying to hinder it. At as early as six months, the babies showed a preference for the helper over the hinderer. In some plays, there is a second act, in which the hindering figure is either punished or rewarded. In this case, 8-month-olds preferred a character who was punishing the hinderer over one being nice to it.
  • ...6 more annotations...
  • This illustrates, Bloom says, that people have a rudimentary sense of justice from a very early age. This doesn’t make people naturally good. If you give a 3-year-old two pieces of candy and ask him if he wants to share one of them, he will almost certainly say no. It’s not until age 7 or 8 that even half the children are willing to share. But it does mean that social norms fall upon prepared ground. We come equipped to learn fairness and other virtues.
  • If you ask for donations with the photo and name of one sick child, you are likely to get twice as much money as if you had asked with a photo and the names of eight children. Our minds respond more powerfully to the plight of an individual than to the plight of a group.
  • If you are in a bad mood you will make harsher moral judgments than if you’re in a good mood or have just seen a comedy. As Elizabeth Phelps of New York University points out, feelings of disgust will evoke a desire to expel things, even those things unrelated to your original mood. General fear makes people risk-averse. Anger makes them risk-seeking.
  • People who behave morally don’t generally do it because they have greater knowledge; they do it because they have a greater sensitivity to other people’s points of view.
  • The moral naturalists differ over what role reason plays in moral judgments. Some, like Haidt, believe that we make moral judgments intuitively and then construct justifications after the fact. Others, like Joshua Greene of Harvard, liken moral thinking to a camera. Most of the time we rely on the automatic point-and-shoot process, but occasionally we use deliberation to override the quick and easy method.
  • For people wary of abstract theorizing, it’s nice to see people investigating morality in ways that are concrete and empirical. But their approach does have certain implicit tendencies. They emphasize group cohesion over individual dissent. They emphasize the cooperative virtues, like empathy, over the competitive virtues, like the thirst for recognition and superiority. At this conference, they barely mentioned the yearning for transcendence and the sacred, which plays such a major role in every human society. Their implied description of the moral life is gentle, fair and grounded. But it is all lower case. So far, at least, it might not satisfy those who want their morality to be awesome, formidable, transcendent or great.
  •  
    The Moral Naturalists By DAVID BROOKS Published: July 22, 2010
Weiye Loh

Why Did 17 Million Students Go to College? - Innovations - The Chronicle of Higher Educ... - 0 views

  • Over 317,000 waiters and waitresses have college degrees (over 8,000 of them have doctoral or professional degrees), along with over 80,000 bartenders, and over 18,000 parking lot attendants. All told, some 17,000,000 Americans with college degrees are doing jobs that the BLS says require less than the skill levels associated with a bachelor’s degree.
  • Charles Murray’s thesis that an increasing number of people attending college do not have the cognitive abilities or other attributes usually necessary for success at higher levels of learning. As more and more try to attend college, either college degrees will be watered down (something already happening, I suspect) or drop-out rates will rise.
  • interesting new study was posted on the Web site of America’s most prestigious economic-research organization, the National Bureau of Economic Research. Three highly regarded economists (one of whom has won the Nobel Prize in Economic Science) have produced “Estimating Marginal Returns to Education,” Working Paper 16474 of the NBER. After very sophisticated and elaborate analysis, the authors conclude “In general, marginal and average returns to college are not the same.” (p. 28)
  • ...8 more annotations...
  • even if, on average, an investment in higher education yields a good, say 10 percent, rate of return, it does not follow that adding to existing investments will yield that return, partly for reasons outlined above. The authors (Pedro Carneiro, James Heckman, and Edward Vytlacil) make that point explicitly, stating “Some marginal expansions of schooling produce gains that are well below average returns, in general agreement with the analysis of Charles Murray.” (p. 29)
  • Once the economy improves, and history tells us it will improve within our lifetimes, those who already have a college degree under their belts will be better equipped to take advantage of new employment opportunities than those who don’t. Perhaps not because of the actual knowledge obtained through their degrees, but definitely as an offset to the social stigma that still exists for those who do not attend college. A college degree may not help a young person secure professional work immediately, so new graduates spend a few years waiting tables until the right opportunity comes along. So what? It’s probably good for them. But they have 40-50 years in the workforce ahead of them and need to be forward-thinking if they don’t want to wait tables for that entire time. If we stop encouraging all young people to view college as both a goal and a possibility, and start weeding out those whose “prior academic records suggest little likelihood of academic success,” which, let’s face it, will happen in larger proportions in poorer schools, then in 20 years we’ll find that efforts to reduce socioeconomic gaps between minorities and non-minorities have been seriously undermined.
  • Bet you a lot of those janitors with PhDs are from the humanities (in particular ethnic studies, film studies, and basket-weaving courses) or the non-economics social sciences, e.g., sociology or the anthropology of some never-heard-of country. There should be a buyer-beware warning on all those non-quantitative majors that make people into sophisticated malcontent complainers!
  • This article also presumes that the purpose of higher education is merely to train one for a career path and enhance future income. This devalues the university, turning it into a vocational training institution. There’s nothing in this data that suggests that they are “sophisticated complainers”; that’s an unwarranted inference.
  • it was mentioned that the Bill and Melinda Gates Foundation would like 80% of American youth to attend and graduate from college. It is a nice thought in many ways. As a teacher and professor, intellectually I am all for it (if the university experience is a serious one, which these days, I don’t know).
  • students’ expectations in attending college are not just intellectual; they are careerist (probably far more so)
  • This employment issue has more to do with levels of training and subsequent levels of expectation. When a Korean student emerges from 20 years of intense study with a university degree, he or she reasonably expects a “good” job — which is to say, a well-paying professional or managerial job with good forward prospects. But here’s the problem. There does not exist, nor will there ever exist, a society in which 80% of the available jobs are professional, managerial, comfortable, and well-paid. No way.
  • Korea has a number of other jobs, but some are low-paid service work, and many others — in factories, farming, fishing — are scorned as 3-D jobs (difficult, dirty, and dangerous). Educated Koreans don’t want them. So the country is importing labor in droves — from China, Vietnam, Cambodia, the Philippines, even Uzbekistan. In the countryside, rural Korean men are having such a difficult time finding prospective wives to share their agricultural lifestyle that fully 40% of rural marriages are to poor women from those other Asian countries, who are brought in by match-makers and marriage brokers.
  •  
    Why Did 17 Million Students Go to College?
Weiye Loh

BBC News - Muslim challenge to tuition fee interest charges - 0 views

  • Repayments will be structured so that higher-earning graduates pay higher interest rates, up to 3% above inflation. Only those who earn below £21,000 will remain paying an effective zero rate of interest. (A rough sketch of such an income-linked taper follows this item’s annotations.)
  • There are concerns that such interest charges are against Muslim teaching on finance and will prevent young Muslims from getting the finance needed to go to university.
  • "Many Muslim students are averse to interest due to teachings in the Islamic faith - such interest derails accessibility to higher education," says Nabil Ahmed, president of the FOSIS student group.
  • ...2 more annotations...
  • Mr Ahmed says there is a wider principle about the raising of interest rates and increasing debt for students, which he describes as "unethical". "People are already drowning in debt," he says. "We don't want people to be priced out of university."
  • Mr Ahmed highlighted how this debt would stretch across generations. Many students will be in their fifties when they finish paying for their degree courses - at which point they might then be expected to support their own children at university.
  •  
    Muslim student leaders say changes to tuition fees in England could breach Islamic rules on finance, which do not permit interest charges.
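    A minimal sketch of the income-linked taper described above, for illustration only: the article specifies the £21,000 floor (an effective zero real rate) and the 3%-above-inflation ceiling, while the £41,000 upper threshold and the linear ramp between them are assumptions.

      def real_interest_rate(income: float) -> float:
          # Real (above-inflation) rate charged on the loan, as a fraction.
          # Assumed taper: 0% real at or below 21,000 GBP, rising linearly
          # to a 3% real cap at a hypothetical 41,000 GBP threshold.
          lower, upper, max_real = 21_000.0, 41_000.0, 0.03
          if income <= lower:
              return 0.0
          if income >= upper:
              return max_real
          return max_real * (income - lower) / (upper - lower)

      # Example: a graduate earning 31,000 GBP would pay inflation plus 1.5%.
      print(f"{real_interest_rate(31_000):.1%}")  # 1.5%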
Weiye Loh

Experts claim 2006 climate report plagiarized - USATODAY.com - 0 views

  • An influential 2006 congressional report that raised questions about the validity of global warming research was partly based on material copied from textbooks, Wikipedia and the writings of one of the scientists criticized in the report, plagiarism experts say.
  • "It kind of undermines the credibility of your work criticizing others' integrity when you don't conform to the basic rules of scholarship," Virginia Tech plagiarism expert Skip Garner says.
  • Led by George Mason University statistician Edward Wegman, the 2006 report criticized the statistics and scholarship of scientists who found the last century the warmest in 1,000 years.
  • ...1 more annotation...
  • But in March, climate scientist Raymond Bradley of the University of Massachusetts asked GMU, based in Fairfax, Va., to investigate "clear plagiarism" of one of his textbooks. Bradley says he learned of the copying on the Deep Climate website and through a now year-long analysis of the Wegman report made by retired computer scientist John Mashey of Portola Valley, Calif. Mashey's analysis concludes that 35 of the report's 91 pages "are mostly plagiarized text, but often injected with errors, bias and changes of meaning." Copying others' text or ideas without crediting them violates universities' standards, according to Liz Wager of the London-based Committee on Publication Ethics.
Weiye Loh

Climate Emails Stoke Debate - WSJ.com - 0 views

  • Some emails also refer to efforts by scientists who believe man is causing global warming to exclude contrary views from important scientific publications.
  • "This is what everyone feared. Over the years, it has become increasingly difficult for anyone who does not view global warming as an end-of-the-world issue to publish papers. This isn't questionable practice, this is unethical."
  • ...4 more annotations...
  • "The selective publication of some stolen emails and other papers taken out of context is mischievous and cannot be considered a genuine attempt to engage with this issue in a responsible way," the university said.
  • A partial review of the hacked material suggests there was an effort at East Anglia, which houses an important center of global climate research, to shut out dissenters and their points of view. In the emails, which date to 1996, researchers in the U.S. and the U.K. repeatedly take issue with climate research at odds with their own findings. In some cases, they discuss ways to rebut what they call "disinformation" using new articles in scientific journals or popular Web sites. The emails include discussions of apparent efforts to make sure that reports from the Intergovernmental Panel on Climate Change, a United Nations group that monitors climate science, include their own views and exclude others. In addition, emails show that climate scientists declined to make their data available to scientists whose views they disagreed with.
  • Phil Jones, the director of the East Anglia climate center, suggested to climate scientist Michael Mann of Penn State University that skeptics' research was unwelcome: We "will keep them out somehow -- even if we have to redefine what the peer-review literature is!"
  • John Christy, a scientist at the University of Alabama at Huntsville attacked in the emails for asking that an IPCC report include dissenting viewpoints, said, "It's disconcerting to realize that legislative actions this nation is preparing to take, and which will cost trillions of dollars, are based upon a view of climate that has not been completely scientifically tested."
  •  
    The scientific community is buzzing over thousands of emails and documents -- posted on the Internet last week after being hacked from a prominent climate-change research center -- that some say raise ethical questions about a group of scientists who contend humans are responsible for global warming.
Weiye Loh

Review: What Rawls Hath Wrought | The National Interest - 0 views

  • THE primacy of this ideal is very recent. It all came about quite abruptly in the late 1970s, a full thirty years after World War II. And the ascendancy of rights as we now understand them came as a response, in part, to developments in the academy.
  • There were versions of utilitarianism, some scornful of rights (with Jeremy Bentham describing them as “nonsense upon stilts”), others that accepted that rights have important social functions (as in John Stuart Mill), but none of them asserted that rights were fundamental in ethical and political thinking.
  • There were various kinds of historicism—the English thinker Michael Oakeshott’s conservative traditionalism and the American scholar Richard Rorty’s postmodern liberalism, for example—that viewed human values as cultural creations, whose contents varied significantly from society to society. There was British theorist Isaiah Berlin’s value pluralism, which held that while some values are universally human, they conflict with one another in ways that do not always have a single rational solution. There were also varieties of Marxism which understood rights in explicitly historical terms.
  • ...2 more annotations...
  • human rights were discussed—when they were mentioned at all—as demands made in particular times and places. Some of these demands might be universal in scope (the demand that torture be prohibited everywhere, for instance, was frequently, though not always, formulated as an all-encompassing necessity), but no one imagined that human rights comprised the only possible universal morality.
  • the notion that rights are the foundation of society came only with the rise of the Harvard philosopher John Rawls’s vastly influential A Theory of Justice (1971). In the years following, it slowly came to be accepted that human rights were the bottom line in political morality.