New Media Ethics 2009 course: Group items matching "Text" in title, tags, annotations or URL
Weiye Loh

The Death of Postmodernism And Beyond | Philosophy Now

  • Most of the undergraduates who will take ‘Postmodern Fictions’ this year will have been born in 1985 or after, and all but one of the module’s primary texts were written before their lifetime. Far from being ‘contemporary’, these texts were published in another world, before the students were born: The French Lieutenant’s Woman, Nights at the Circus, If on a Winter’s Night a Traveller, Do Androids Dream of Electric Sheep? (and Blade Runner), White Noise: this is Mum and Dad’s culture. Some of the texts (‘The Library of Babel’) were written even before their parents were born. Replace this cache with other postmodern stalwarts – Beloved, Flaubert’s Parrot, Waterland, The Crying of Lot 49, Pale Fire, Slaughterhouse 5, Lanark, Neuromancer, anything by B.S. Johnson – and the same applies. It’s all about as contemporary as The Smiths, as hip as shoulder pads, as happening as Betamax video recorders. These are texts which are just coming to grips with the existence of rock music and television; they mostly do not dream even of the possibility of the technology and communications media – mobile phones, email, the internet, computers in every house powerful enough to put a man on the moon – which today’s undergraduates take for granted.
  • somewhere in the late 1990s or early 2000s, the emergence of new technologies re-structured, violently and forever, the nature of the author, the reader and the text, and the relationships between them.
  • Postmodernism, like modernism and romanticism before it, fetishised [ie placed supreme importance on] the author, even when the author chose to indict or pretended to abolish him or herself. But the culture we have now fetishises the recipient of the text to the degree that they become a partial or whole author of it. Optimists may see this as the democratisation of culture; pessimists will point to the excruciating banality and vacuity of the cultural products thereby generated (at least so far).
  • Pseudo-modernism also encompasses contemporary news programmes, whose content increasingly consists of emails or text messages sent in commenting on the news items. The terminology of ‘interactivity’ is equally inappropriate here, since there is no exchange: instead, the viewer or listener enters – writes a segment of the programme – then departs, returning to a passive role. Pseudo-modernism also includes computer games, which similarly place the individual in a context where they invent the cultural content, within pre-delineated limits. The content of each individual act of playing the game varies according to the particular player.
  • The pseudo-modern cultural phenomenon par excellence is the internet. Its central act is that of the individual clicking on his/her mouse to move through pages in a way which cannot be duplicated, inventing a pathway through cultural products which has never existed before and never will again. This is a far more intense engagement with the cultural process than anything literature can offer, and gives the undeniable sense (or illusion) of the individual controlling, managing, running, making up his/her involvement with the cultural product. Internet pages are not ‘authored’ in the sense that anyone knows who wrote them, or cares. The majority either require the individual to make them work, like Streetmap or Route Planner, or permit him/her to add to them, like Wikipedia, or through feedback on, for instance, media websites. In all cases, it is intrinsic to the internet that you can easily make up pages yourself (eg blogs).
  • Where once special effects were supposed to make the impossible appear credible, CGI frequently [inadvertently] works to make the possible look artificial, as in much of Lord of the Rings or Gladiator. Battles involving thousands of individuals have really happened; pseudo-modern cinema makes them look as if they have only ever happened in cyberspace.
  • Similarly, television in the pseudo-modern age favours not only reality TV (yet another unapt term), but also shopping channels, and quizzes in which the viewer calls to guess the answer to riddles in the hope of winning money.
  • The purely ‘spectacular’ function of television, as with all the arts, has become a marginal one: what is central now is the busy, active, forging work of the individual who would once have been called its recipient. In all of this, the ‘viewer’ feels powerful and is indeed necessary; the ‘author’ as traditionally understood is either relegated to the status of the one who sets the parameters within which others operate, or becomes simply irrelevant, unknown, sidelined; and the ‘text’ is characterised both by its hyper-ephemerality and by its instability. It is made up by the ‘viewer’, if not in its content then in its sequence – you wouldn’t read Middlemarch by going from page 118 to 316 to 401 to 501, but you might well, and justifiably, read Ceefax that way.
  • A pseudo-modern text lasts an exceptionally brief time. Unlike, say, Fawlty Towers, reality TV programmes cannot be repeated in their original form, since the phone-ins cannot be reproduced, and without the possibility of phoning-in they become a different and far less attractive entity.
  • If scholars give the date they referenced an internet page, it is because the pages disappear or get radically re-cast so quickly. Text messages and emails are extremely difficult to keep in their original form; printing out emails does convert them into something more stable, like a letter, but only by destroying their essential, electronic state.
  • The cultural products of pseudo-modernism are also exceptionally banal
  • Much text messaging and emailing is vapid in comparison with what people of all educational levels used to put into letters.
  • A triteness, a shallowness dominates all.
  • In music, the pseudo-modern superseding of the artist-dominated album as monolithic text by the downloading and mix-and-matching of individual tracks on to an iPod, selected by the listener, was certainly prefigured by the music fan’s creation of compilation tapes a generation ago. But a shift has occurred, in that what was a marginal pastime of the fan has become the dominant and definitive way of consuming music, rendering the idea of the album as a coherent work of art, a body of integrated meaning, obsolete.
  • To a degree, pseudo-modernism is no more than a technologically motivated shift to the cultural centre of something which has always existed (similarly, metafiction has always existed, but was never so fetishised as it was by postmodernism). Television has always used audience participation, just as theatre and other performing arts did before it; but as an option, not as a necessity: pseudo-modern TV programmes have participation built into them.
  • Whereas postmodernism called ‘reality’ into question, pseudo-modernism defines the real implicitly as myself, now, ‘interacting’ with its texts. Thus, pseudo-modernism suggests that whatever it does or makes is what is reality, and a pseudo-modern text may flourish the apparently real in an uncomplicated form: the docu-soap with its hand-held cameras (which, by displaying individuals aware of being regarded, give the viewer the illusion of participation); The Office and The Blair Witch Project, interactive pornography and reality TV; the essayistic cinema of Michael Moore or Morgan Spurlock.
  • whereas postmodernism favoured the ironic, the knowing and the playful, with their allusions to knowledge, history and ambivalence, pseudo-modernism’s typical intellectual states are ignorance, fanaticism and anxiety
  • pseudo-modernism lashes fantastically sophisticated technology to the pursuit of medieval barbarism – as in the uploading of videos of beheadings onto the internet, or the use of mobile phones to film torture in prisons. Beyond this, the destiny of everyone else is to suffer the anxiety of getting hit in the cross-fire. But this fatalistic anxiety extends far beyond geopolitics, into every aspect of contemporary life; from a general fear of social breakdown and identity loss, to a deep unease about diet and health; from anguish about the destructiveness of climate change, to the effects of a new personal ineptitude and helplessness, which yield TV programmes about how to clean your house, bring up your children or remain solvent.
  • Pseudo-modernism belongs to a world pervaded by the encounter between a religiously fanatical segment of the United States, a largely secular but definitionally hyper-religious Israel, and a fanatical sub-section of Muslims scattered across the planet: pseudo-modernism was not born on 11 September 2001, but postmodernism was interred in its rubble.
  • The pseudo-modernist communicates constantly with the other side of the planet, yet needs to be told to eat vegetables to be healthy, a fact self-evident in the Bronze Age. He or she can direct the course of national television programmes, but does not know how to make him or herself something to eat – a characteristic fusion of the childish and the advanced, the powerful and the helpless. For varying reasons, these are people incapable of the “disbelief of Grand Narratives” which Lyotard argued typified postmodernists.
  • Postmodern philosophy emphasises the elusiveness of meaning and knowledge. This is often expressed in postmodern art as a concern with representation and an ironic self-awareness. And the argument that postmodernism is over has already been made philosophically. There are people who have essentially asserted that for a while we believed in postmodern ideas, but not any more, and from now on we're going to believe in critical realism. The weakness in this analysis is that it centres on the academy, on the practices and suppositions of philosophers who may or may not be shifting ground or about to shift - and many academics will simply decide that, finally, they prefer to stay with Foucault [arch postmodernist] than go over to anything else. However, a far more compelling case can be made that postmodernism is dead by looking outside the academy at current cultural production.
Weiye Loh

Rationally Speaking: Don't blame free speech for the murders in Afghanistan

  • The most disturbing example of this response came from the head of the U.N. Assistance Mission in Afghanistan, Staffan de Mistura, who said, “I don't think we should be blaming any Afghan. We should be blaming the person who produced the news — the one who burned the Koran. Freedom of speech does not mean freedom of offending culture, religion, traditions.” I was not going to comment on this monumentally inane line of thought, especially since Susan Jacoby, Michael Tomasky, and Mike Labossiere have already done such a marvelous job of it. But then I discovered, to my shock, that several of my liberal, progressive American friends actually agreed that Jones has some sort of legal and moral responsibility for what happened in Afghanistan
  • I believe he has neither. Here is why. Unlike many countries in the Middle East and Europe that punish blasphemy by fine, jail or death, the U.S., via the First Amendment and a history of court decisions, strongly protects freedom of speech and expression as basic and fundamental human rights. These include critiquing and offending other citizens’ culture, religion, and traditions. Such rights are not supposed to be swayed by people’s subjective feelings, which form an incoherent and arbitrary basis for lawmaking. In a free society, if and when a person is offended by an argument or act, he or she has every right to argue and act back. If a person commits murder, the answer is not to limit the right; the answer is to condemn and punish the murderer for overreacting.
  • Of course, there are exceptions to this rule. Governments have an interest in condemning certain speech that provokes immediate hatred of or violence against people. The canonical example is yelling “fire!” in a packed room when there in fact is no fire, since this creates a clear and imminent danger for those inside the room. But Jones did not create such an environment, nor did he intend to. Jones (more precisely, Wayne Sapp) merely burned a book in a private ceremony in protest of its contents. Indeed, the connection between Jones and the murders requires many links in-between. The mob didn’t kill those accountable, or even Americans.
  • But even if there is no law prohibiting Jones’ action, isn’t he morally to blame for creating the environment that led to the murders? Didn’t he know Muslims would riot, and people might die? It seems ridiculous to assume that Jones could know such a thing, even if parts of the Muslim world have a poor track record in this area. But imagine for a moment that Jones did know Muslims would riot, and people would die. This does not make the act of burning a book and the act of murder morally equivalent, nor does it make the book burner responsible for reactions to his act. In and of itself, burning a book is a morally neutral act. Why would this change because some misguided individuals think book burning is worth the death penalty? And why is it that so many have automatically assumed the reaction to be respectable? To use an example nearer to some of us, recall when PZ Myers desecrated a communion wafer. If some Christian was offended, and went on to murder the closest atheist, would we really blame Myers? Is Myers' offense any different than Jones’?
  • the deep-seated belief among many that blasphemy is wrong. This means any reaction to blasphemy is less wrong, and perhaps even excused, compared to the blasphemous offense. Even President Obama said that, "The desecration of any holy text, including the Koran, is an act of extreme intolerance and bigotry.” To be sure, Obama went on to denounce the murders, and to state that burning a holy book is no excuse for murder. But Obama apparently couldn’t condemn the murders without also condemning Jones’ act of religious defiance.
  • As it turns out, this attitude is exactly what created the environment that led to murders in the first place. The members of the mob believed that religious belief should be free from public critical inquiry, and that a person who offends religious believers should face punishment. In the absence of official prosecution, they took matters into their own hands and sought anyone on the side of the offender. It didn’t help that Afghan leaders stoked the flames of hatred — but they only did so because they agreed with the mob’s sentiment to begin with. Afghan President Hamid Karzai said the U.S. should punish those responsible, and three well-known Afghan mullahs urged their followers to take to the streets and protest to call for the arrest of Jones
Weiye Loh

Rationally Speaking: The problem of replicability in science

  • The problem of replicability in science, by Massimo Pigliucci [image from xkcd]
  • In recent months much has been written about the apparent fact that a surprising, indeed disturbing, number of scientific findings cannot be replicated, or when replicated the effect size turns out to be much smaller than previously thought.
  • Arguably, the recent streak of articles on this topic began with one penned by David Freedman in The Atlantic, and provocatively entitled “Lies, Damned Lies, and Medical Science.” In it, the major character was John Ioannidis, the author of some influential meta-studies about the low degree of replicability and high number of technical flaws in a significant portion of published papers in the biomedical literature.
  • As Freedman put it in The Atlantic: “80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.” Ioannidis himself was quoted uttering some sobering words for the medical community (and the public at large): “Science is a noble endeavor, but it’s also a low-yield endeavor. I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”
  • Julia and I actually addressed this topic during a Rationally Speaking podcast, featuring as guest our friend Steve Novella, of Skeptics’ Guide to the Universe and Science-Based Medicine fame. But while Steve did quibble with the tone of the Atlantic article, he agreed that Ioannidis’ results are well known and accepted by the medical research community. Steve did point out that it should not be surprising that results get better and better as one moves toward more stringent protocols like large randomized trials, but it seems to me that one should be surprised (actually, appalled) by the fact that even there the percentage of flawed studies is high — not to mention the fact that most studies are in fact neither large nor properly randomized.
  • The second big recent blow to public perception of the reliability of scientific results is an article published in The New Yorker by Jonah Lehrer, entitled “The truth wears off.” Lehrer also mentions Ioannidis, but the bulk of his essay is about findings in psychiatry, psychology and evolutionary biology (and even in research on the paranormal!).
  • In these disciplines there are now several documented cases of results that were initially spectacularly positive — for instance the effects of second generation antipsychotic drugs, or the hypothesized relationship between a male’s body symmetry and the quality of his genes — that turned out to be increasingly difficult to replicate over time, with the original effect sizes being cut down dramatically, or even disappearing altogether.
  • As Lehrer concludes at the end of his article: “Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can’t bear to let them go. And this is why the decline effect is so troubling.”
  • None of this should actually be particularly surprising to any practicing scientist. If you have spent a significant time of your life in labs and reading the technical literature, you will appreciate the difficulties posed by empirical research, not to mention a number of issues such as the fact that few scientists ever actually bother to replicate someone else’s results, for the simple reason that there is no Nobel (or even funded grant, or tenured position) waiting for the guy who arrived second.
  • In the midst of this I was directed by a tweet from my colleague Neil deGrasse Tyson (who has also appeared on the RS podcast, though in a different context) to a recent ABC News article penned by John Allen Paulos, which set out to explain the decline effect in science.
  • Paulos’ article is indeed concise and on the mark (though several of the explanations he proposes were already brought up in both the Atlantic and New Yorker essays), but it doesn’t really make things much better.
  • Paulos suggests that one explanation for the decline effect is the well-known statistical phenomenon of regression toward the mean. This phenomenon is responsible, among other things, for a fair number of superstitions: you’ve probably heard of some athletes’ and other celebrities’ fear of being featured on the cover of a magazine after a particularly impressive series of accomplishments, because this brings “bad luck,” meaning that the following year one will not be able to repeat the performance at the same level. This is actually true, not because of magical reasons, but simply as a result of regression to the mean: extraordinary performances are the result of a large number of factors that have to line up just right for the spectacular result to be achieved. The statistical chances of such an alignment repeating itself are low, so inevitably next year’s performance will likely be below par. Paulos correctly argues that this also explains some of the decline effect of scientific results: the first discovery might have been the result of a number of factors that are unlikely to repeat themselves in exactly the same way, thus reducing the effect size when the study is replicated.
  • Another major determinant of the unreliability of scientific results mentioned by Paulos is the well-known problem of publication bias: crudely put, science journals (particularly the high-profile ones, like Nature and Science) are interested only in positive, spectacular, “sexy” results. This creates a powerful filter against negative or marginally significant results. What you see in science journals, in other words, isn’t a statistically representative sample of scientific results, but a highly biased one, in favor of positive outcomes. No wonder that when people try to repeat the feat they often come up empty-handed. [A toy simulation of this filter, combined with regression to the mean, follows these annotations.]
  • A third cause for the problem, not mentioned by Paulos but addressed in the New Yorker article, is the selective reporting of results by scientists themselves. This is essentially the same phenomenon as the publication bias, except that this time it is scientists themselves, not editors and reviewers, who don’t bother to submit for publication results that are either negative or not strongly conclusive. Again, the outcome is that what we see in the literature isn’t all the science that we ought to see. And it’s no good to argue that it is the “best” science, because the quality of scientific research is measured by the appropriateness of the experimental protocols (including the use of large samples) and of the data analyses — not by whether the results happen to confirm the scientist’s favorite theory.
  • The conclusion of all this is not, of course, that we should throw the baby (science) out with the bath water (bad or unreliable results). But scientists should also be under no illusion that these are rare anomalies that do not affect scientific research at large. Too much emphasis is being put on the “publish or perish” culture of modern academia, with the result that graduate students are explicitly instructed to go for the SPU’s — Smallest Publishable Units — when they have to decide how much of their work to submit to a journal. That way they maximize the number of their publications, which maximizes the chances of landing a postdoc position, and then a tenure track one, and then of getting grants funded, and finally of getting tenure. The result is that, according to statistics published by Nature, it turns out that about ⅓ of published studies are never cited (not to mention replicated!).
  • “Scientists these days tend to keep up the polite fiction that all science is equal. Except for the work of the misguided opponent whose arguments we happen to be refuting at the time, we speak as though every scientist’s field and methods of study are as good as every other scientist’s, and perhaps a little better. This keeps us all cordial when it comes to recommending each other for government grants. ... We speak piously of taking measurements and making small studies that will ‘add another brick to the temple of science.’ Most such bricks lie around the brickyard.”
    • Weiye Loh: Written by John Platt in a "Science" article published in 1964
  • Most damning of all, however, is the potential effect that all of this may have on science’s already dubious reputation with the general public (think evolution-creation, vaccine-autism, or climate change)
  • “If we don’t tell the public about these problems, then we’re no better than non-scientists who falsely claim they can heal. If the drugs don’t work and we’re not sure how to treat something, why should we claim differently? Some fear that there may be less funding because we stop claiming we can prove we have miraculous treatments. But if we can’t really provide those miracles, how long will we be able to fool the public anyway? The scientific enterprise is probably the most fantastic achievement in human history, but that doesn’t mean we have a right to overstate what we’re accomplishing.”
  • Joseph T. Lapp said... But is any of this new for science? Perhaps science has operated this way all along, full of fits and starts, mostly duds. How do we know that this isn't the optimal way for science to operate? My issues are with the understanding of science that high school graduates have, and with the reporting of science.
    • Weiye Loh: It's the media at fault again.
  • What seems to have emerged in recent decades is a change in the institutional setting that got science advancing spectacularly since the establishment of the Royal Society. Flaws in the system such as corporate funded research, pal-review instead of peer-review, publication bias, science entangled with policy advocacy, and suchlike, may be distorting the environment, making it less suitable for the production of good science, especially in some fields.
  • Remedies should exist, but they should evolve rather than being imposed on a reluctant sociological-economic science establishment driven by powerful motives such as professional advance or funding. After all, who or what would have the authority to impose those rules, other than the scientific establishment itself?
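The two mechanisms described in the annotations above, regression to the mean and publication bias, can be made concrete with a toy simulation. The sketch below is purely illustrative and comes from none of the articles discussed; the true effect size, per-group sample size, and p < 0.05 publication filter are all arbitrary assumptions. It shows how a field whose journals publish only positive, significant results ends up with inflated published effects, which then "decline" toward the true value when independent, unfiltered replications are run.

```python
# Toy model of the "decline effect": publication bias plus regression to the
# mean. All numbers here are assumptions chosen for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2011)
TRUE_EFFECT = 0.2    # true standardized effect (small, as in many literatures)
N = 30               # participants per group in each study
N_STUDIES = 10_000   # studies attempted across the whole field

published, replications = [], []
for _ in range(N_STUDIES):
    treatment = rng.normal(TRUE_EFFECT, 1.0, N)
    control = rng.normal(0.0, 1.0, N)
    t, p = stats.ttest_ind(treatment, control)
    if p < 0.05 and t > 0:  # the journal's filter: positive, significant results only
        published.append(treatment.mean() - control.mean())
        # an independent replication of the published finding, with no filter
        rep_t = rng.normal(TRUE_EFFECT, 1.0, N)
        rep_c = rng.normal(0.0, 1.0, N)
        replications.append(rep_t.mean() - rep_c.mean())

print(f"true effect:             {TRUE_EFFECT:.2f}")
print(f"mean published effect:   {np.mean(published):.2f}")    # inflated by the filter
print(f"mean replication effect: {np.mean(replications):.2f}")  # regresses to the truth
```

Nothing about the underlying phenomenon changes between the original studies and the replications; only the selection filter differs. Yet the published average lands well above 0.2, while the replications cluster around the true value, which is the "decline effect" pattern Lehrer describes.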
Weiye Loh

Rationally Speaking: Human, know thy place!

  • I kicked off a recent episode of the Rationally Speaking podcast on the topic of transhumanism by defining it as “the idea that we should be pursuing science and technology to improve the human condition, modifying our bodies and our minds to make us smarter, healthier, happier, and potentially longer-lived.”
  • Massimo understandably expressed some skepticism about why there needs to be a transhumanist movement at all, given how incontestable their mission statement seems to be. As he rhetorically asked, “Is transhumanism more than just the idea that we should be using technologies to improve the human condition? Because that seems a pretty uncontroversial point.” Later in the episode, referring to things such as radical life extension and modifications of our minds and genomes, Massimo said, “I don't think these are things that one can necessarily have objections to in principle.”
  • There are a surprising number of people whose reaction, when they are presented with the possibility of making humanity much healthier, smarter and longer-lived, is not “That would be great,” nor “That would be great, but it's infeasible,” nor even “That would be great, but it's too risky.” Their reaction is, “That would be terrible.”
  • The people with this attitude aren't just fringe fundamentalists who are fearful of messing with God's Plan. Many of them are prestigious professors and authors whose arguments make no mention of religion. One of the most prominent examples is political theorist Francis Fukuyama, author of End of History, who published a book in 2003 called “Our Posthuman Future: Consequences of the Biotechnology Revolution.” In it he argues that we will lose our “essential” humanity by enhancing ourselves, and that the result will be a loss of respect for “human dignity” and a collapse of morality.
  • Fukuyama's reasoning represents a prominent strain of thought about human enhancement, and one that I find doubly fallacious. (Fukuyama is aware of the following criticisms, but neither I nor other reviewers were impressed by his attempt to defend himself against them.) The idea that the status quo represents some “essential” quality of humanity collapses when you zoom out and look at the steady change in the human condition over previous millennia. Our ancestors were less knowledgeable, more tribalistic, less healthy, shorter-lived; would Fukuyama have argued for the preservation of all those qualities on the grounds that, in their respective time, they constituted an “essential human nature”? And even if there were such a thing as a persistent “human nature,” why is it necessarily worth preserving? In other words, I would argue that Fukuyama is committing both the fallacy of essentialism (there exists a distinct thing that is “human nature”) and the appeal to nature (the way things naturally are is how they ought to be).
  • Writer Bill McKibben, who was called “probably the nation's leading environmentalist” by the Boston Globe this year, and “the world's best green journalist” by Time magazine, published a book in 2003 called “Enough: Staying Human in an Engineered Age.” In it he writes, “That is the choice... one that no human should have to make... To be launched into a future without bounds, where meaning may evaporate.” McKibben concludes that it is likely that “meaning and pain, meaning and transience are inextricably intertwined.” Or as one blogger tartly paraphrased: “If we all live long healthy happy lives, Bill’s favorite poetry will become obsolete.”
  • President George W. Bush's Council on Bioethics, which advised him from 2001-2009, was steeped in it. Harvard professor of political philosophy Michael J. Sandel served on the Council from 2002-2005 and penned an article in the Atlantic Monthly called “The Case Against Perfection,” in which he objected to genetic engineering on the grounds that, basically, it’s uppity. He argues that genetic engineering is “the ultimate expression of our resolve to see ourselves astride the world, the masters of our nature.” Better we should be bowing in submission than standing in mastery, Sandel feels. Mastery “threatens to banish our appreciation of life as a gift,” he warns, and submitting to forces outside our control “restrains our tendency toward hubris.”
  • If you like Sandel's “It's uppity” argument against human enhancement, you'll love his fellow Councilmember Dr. William Hurlbut's argument against life extension: “It's unmanly.” Hurlbut's exact words, delivered in a 2007 debate with Aubrey de Grey: “I actually find a preoccupation with anti-aging technologies to be, I think, somewhat spiritually immature and unmanly... I’m inclined to think that there’s something profound about aging and death.”
  • And Council chairman Dr. Leon Kass, a professor of bioethics from the University of Chicago who served from 2001-2005, was arguably the worst of all. Like McKibben, Kass has frequently argued against radical life extension on the grounds that life's transience is central to its meaningfulness. “Could the beauty of flowers depend on the fact that they will soon wither?” he once asked. “How deeply could one deathless ‘human’ being love another?”
  • Kass has also argued against human enhancements on the same grounds as Fukuyama, that we shouldn't deviate from our proper nature as human beings. “To turn a man into a cockroach— as we don’t need Kafka to show us —would be dehumanizing. To try to turn a man into more than a man might be so as well,” he said. And Kass completes the anti-transhumanist triad (it robs life of meaning; it's dehumanizing; it's hubris) by echoing Sandel's call for humility and gratitude, urging, “We need a particular regard and respect for the special gift that is our own given nature.”
  • By now you may have noticed a familiar ring to a lot of this language. The idea that it's virtuous to suffer, and to humbly surrender control of your own fate, is a cornerstone of Christian morality.
  • it's fairly representative of standard Christian tropes: surrendering to God, submitting to God, trusting that God has good reasons for your suffering.
  • I suppose I can understand that if you believe in an all-powerful entity who will become irate if he thinks you are ungrateful for anything, then this kind of groveling might seem like a smart strategic move. But what I can't understand is adopting these same attitudes in the absence of any religious context. When secular people chastise each other for the “hubris” of trying to improve the “gift” of life they've received, I want to ask them: just who, exactly, are you groveling to? Who, exactly, are you afraid of affronting if you dare to reach for better things?
  • This is why transhumanism is most needed, from my perspective – to counter the astoundingly widespread attitude that suffering and 80-year-lifespans are good things that are worth preserving. That attitude may make sense conditional on certain peculiarly masochistic theologies, but the rest of us have no need to defer to it. It also may have been a comforting thing to tell ourselves back when we had no hope of remedying our situation, but that's not necessarily the case anymore.
  • I think there is a separation between Transhumanism and what Massimo is referring to. Things like robotic arms and the like come from trying to deal with a specific defect, which separates them from Transhumanism. I would define transhumanism the same way you would (the achievement of a better human), but I would exclude the invention of many life-altering devices from transhumanism. If we could invent a device that just made you smarter, then indeed that would be transhumanism, but if we invented a device that could make someone who was mentally challenged able to function normally, I would define this as modern medicine. I just want to make sure we separate advances in modern medicine from transhumanism. Modern medicine being the one that advances to deal with specific medical issues to improve quality of life (usually to restore it to normal conditions) and transhumanism being the one that can advance every single human (perhaps equally?).
    • Weiye Loh: Assumes that "normal conditions" exist.
  • I agree with all your points about why the arguments against transhumanism and for suffering are ridiculous. That being said, when I first heard about the ideas of Transhumanism, after the initial excitement wore off (since I'm a big tech nerd), my reaction was more or less the same as Massimo's. I don't particularly see the need for a philosophical movement for this.
  • if people believe that suffering is something God ordained for us, you're not going to convince them otherwise with philosophical arguments any more than you'll convince them there's no God at all. If the technologies do develop, acceptance of them will come as their use becomes more prevalent, not with arguments.
Weiye Loh

Rationally Speaking: Some animals are more equal than others

  • society's answer to the question “Is it acceptable to hurt animals for our pleasure?” isn't always “No.” Odds are that most of the people who objected to the dog fighting and crush videos are frequent consumers of meat, milk, and eggs from industrialized farms. And the life of an animal in a typical industrialized farm is notoriously punishing. Many spend their lives in cages so confining they can barely move; ammonia fumes burn their eyes; their beaks or tails are chopped off to prevent them from biting each other out of stress; and the farm's conditions make many of them so sick or weak that they die in their cages or on the way to slaughter. As a society, however, we apparently believe that the pleasure we get from eating those animals makes their suffering worth it.
  • many people will object that eating animals isn’t a matter of pleasure at all, but of the need for sustenance. While that may have been true for our ancestors who survived by hunting wild animals, I don’t think it has much relevance to our current situation. First, it's questionable whether we actually do need to eat animal products in order to be healthy; the American Dietetic Association has given the thumbs up to vegetarian and even vegan diets. But even if you believe that some amount of animal product consumption is medically necessary, we could still buy from farms that raise their livestock much more humanely. It would cost more, but we could always compensate by cutting back on other luxuries, or simply by eating less meat. By any reasonable estimate, Americans could cut their meat consumption drastically with no ill effects on their health (and likely with many positive effects). Buying the sheer amount of meat that Americans do, at the low prices made possible by industrialized farms, is a luxury that can’t be defended with a “need for sustenance” argument. It’s about pleasure — the pleasure of eating more meat than strictly necessary for health, and the pleasure of saving money that can then be spent on other things we enjoy.
  • there are several reasons why people regard consumers of industrial farming differently than consumers of crush videos and dogfighting. The first has to do with the types of animals involved: pigs, cows, and chickens simply aren't as cute as dogs, bunnies, and kittens. I don't know how many people would explicitly cite that as the reason they're willing to inflict suffering on the former and not the latter, but it seems to play a role, even if people won't admit as much. People who have no qualms about a pig spending its life in a small, dark crate would nevertheless be outraged if a dog were treated in the same way.
  • Cuteness is a pretty silly criterion by which to assign moral status, though. It's not as if unappealing animals are less intelligent or less sensitive to pain.
  • And if you have any trouble seeing the absurdity of basing moral judgments on cuteness, it helps to try out the principle in other contexts. (Is it worse to abuse a cute child than an ugly one?)
  • But I think the biggest reason that different examples of hurting animals for pleasure elicit different reactions from people is not about the types of animals involved, but about the types of pleasure.
  • One objective difference people might cite is the fact that a desire to eat meat is “natural” while a desire to watch kittens being crushed is not. Which is true, in the sense that our species did evolve to eat meat while a fetish for crushing kittens is an aberration. But using naturalness as a criterion for moral rightness is a dubious move. First, it seems rather arbitrary, from a logical perspective, which is why it's often referred to as the naturalistic fallacy. And second, it would justify some pretty unsavory “natural” urges, like rape and tribalism, while prohibiting other “unnatural” urges, like the desire to wear clothing or to refrain from having children.
  • The closest thing that I can find to a morally relevant distinction between industrial farming, dogfighting, and crush videos is this: While it’s true that all three acts cause animal suffering in order to give people pleasure, the nature of that tradeoff differs. The consumers of crush videos and dogfighting are taking pleasure in the suffering itself, whereas the consumers of industrially-farmed meat are taking pleasure in the meat that was produced by the suffering. From a purely harm-based perspective, the moral calculus is the same: the animal suffers so that you can experience pleasure. But the degree of directness of that tradeoff makes a difference in how we perceive your character. Someone whose motive is “I enjoy seeing another creature suffer” seems more evil than someone whose motive is “I want a tasty meal,” even if both people cause the same amount of suffering.
  • And I can certainly understand why people would want to call a crush video enthusiast more “evil” than a person who buys meat from industrial farms, because of the difference in their motivations. That's a reasonable way to define evilness. But in that case we're left with the fact that a person's evilness may be totally unrelated to the amount of harm she causes; and that, in fact, some of the greatest harm may be caused by people whose motivations seem unobjectionable to us. Apathy, denial, conformity; none of these inspire the same outrage as sadism, but they've caused some pretty horrible outcomes. And if you believe that it's wrong to make animals suffer for our pleasure, but you reserve your moral condemnation only for cases that viscerally upset you, like dogfighting or crush videos, then you're falling prey to the trap that Isaac Asimov famously warned us against: “Never let your sense of morals prevent you from doing what is right.”
Weiye Loh

Rationally Speaking: On Utilitarianism and Consequentialism

  • Utilitarianism and consequentialism are different, yet closely related philosophical positions. Utilitarians are usually consequentialists, and the two views mesh in many areas, but each rests on a different claim
  • Utilitarianism's starting point is that we all attempt to seek happiness and avoid pain, and therefore our moral focus ought to center on maximizing happiness (or, human flourishing generally) and minimizing pain for the greatest number of people. This is both about what our goals should be and how to achieve them.
  • Consequentialism asserts that determining the greatest good for the greatest number of people (the utilitarian goal) is a matter of measuring outcome, and so decisions about what is moral should depend on the potential or realized costs and benefits of a moral belief or action.
  • first question we can reasonably ask is whether all moral systems are indeed focused on benefiting human happiness and decreasing pain.
  • Jeremy Bentham, the founder of utilitarianism, wrote the following in his Introduction to the Principles of Morals and Legislation: “When a man attempts to combat the principle of utility, it is with reasons drawn, without his being aware of it, from that very principle itself.”
  • Michael Sandel discusses this line of thought in his excellent book, Justice: What’s the Right Thing to Do?, and sums up Bentham’s argument as such: “All moral quarrels, properly understood, are [for Bentham] disagreements about how to apply the utilitarian principle of maximizing pleasure and minimizing pain, not about the principle itself.”
  • But Bentham’s definition of utilitarianism is perhaps too broad: are fundamentalist Christians or Muslims really utilitarians, just with different ideas about how to facilitate human flourishing?
  • one wonders whether this makes the word so all-encompassing in meaning as to render it useless.
  • Yet, even if pain and happiness are the objects of moral concern, so what? As philosopher Simon Blackburn recently pointed out, “Every moral philosopher knows that moral philosophy is functionally about reducing suffering and increasing human flourishing.” But is that the central and sole focus of all moral philosophies? Don’t moral systems vary in their core focuses?
  • Consider the observation that religious belief makes humans happier, on average
  • Secularists would rightly resist the idea that religious belief is moral if it makes people happier. They would reject the very idea because deep down, they value truth – a value that is non-negotiable. Utilitarians would assert that truth is just another utility, for people can only value truth if they take it to be beneficial to human happiness and flourishing.
  • We might all agree that morality is “functionally about reducing suffering and increasing human flourishing,” as Blackburn says, but how do we achieve that? Consequentialism posits that we can get there by weighing the consequences of beliefs and actions as they relate to human happiness and pain. Sam Harris recently wrote: “It is true that many people believe that ‘there are non-consequentialist ways of approaching morality,’ but I think that they are wrong. In my experience, when you scratch the surface on any deontologist, you find a consequentialist just waiting to get out. For instance, I think that Kant's Categorical Imperative only qualifies as a rational standard of morality given the assumption that it will be generally beneficial (as J.S. Mill pointed out at the beginning of Utilitarianism). Ditto for religious morality.”
  • we might wonder about the elasticity of words, in this case consequentialism. Do fundamentalist Christians and Muslims count as consequentialists? Is consequentialism so empty of content that to be a consequentialist one need only think he or she is benefiting humanity in some way?
  • Harris’ argument is that one cannot adhere to a certain conception of morality without believing it is beneficial to society
  • This still seems somewhat obvious to me as a general statement about morality, but is it really the point of consequentialism? Not really. Consequentialism is much more focused than that. Consider the issue of corporal punishment in schools. Harris has stated that we would be forced to admit that corporal punishment is moral if studies showed that “subjecting children to ‘pain, violence, and public humiliation’ leads to ‘healthy emotional development and good behavior’ (i.e., it conduces to their general well-being and to the well-being of society). If it did, well then yes, I would admit that it was moral. In fact, it would appear moral to more or less everyone.” Harris is being rhetorical – he does not believe corporal punishment is moral – but the point stands.
  • An immediate pitfall of this approach is that it does not qualify corporal punishment as the best way to raise emotionally healthy children who behave well.
  • The virtue ethicists inside us would argue that we ought not to foster a society in which people beat and humiliate children, never mind the consequences. There is also a reasonable and powerful argument based on personal freedom. Don’t children have the right to be free from violence in the public classroom? Don’t children have the right not to suffer intentional harm without consent? Isn’t that part of their “moral well-being”?
  • If consequences were really at the heart of all our moral deliberations, we might live in a very different society.
  • what if economies based on slavery lead to an increase in general happiness and flourishing for their respective societies? Would we admit slavery was moral? I hope not, because we value certain ideas about human rights and freedom. Or, what if the death penalty truly deterred crime? And what if we knew everyone we killed was guilty as charged, meaning no need for The Innocence Project? I would still object, on the grounds that it is morally wrong for us to kill people, even if they have committed the crime of which they are accused. Certain things hold, no matter the consequences.
  • We all do care about increasing human happiness and flourishing, and decreasing pain and suffering, and we all do care about the consequences of our beliefs and actions. But we focus on those criteria to differing degrees, and we have differing conceptions of how to achieve the respective goals – making us perhaps utilitarians and consequentialists in part, but not in whole.
  • Is everyone a utilitarian and/or consequentialist, whether or not they know it? That is what some people - from Jeremy Bentham and John Stuart Mill to Sam Harris - would have you believe. But there are good reasons to be skeptical of such claims.
Weiye Loh

Valerie Plame, YES! Wikileaks, NO! - English pravda.ru

  • In my recent article Ward Churchill: The Lie Lives On (Pravda.Ru, 11/29/2010), I discussed the following realities about America's legal "system": it is duplicitous and corrupt; it will go to any extremes to insulate from prosecution, and in many cases civil liability, persons whose crimes facilitate this duplicity and corruption; it has abdicated its responsibility to serve as a "check-and-balance" against the other two branches of government, and has instead been transformed into a weapon exploited by the wealthy, the corporations, and the politically connected to defend their criminality, conceal their corruption and promote their economic interests
  • it is now evident that Barack Obama, who entered the White House with optimistic messages of change and hope, is just as complicit in, and manipulative of, the legal "system's" duplicity and corruption as was his predecessor George W. Bush.
  • the Obama administration has refused to prosecute former Attorney General John Ashcroft for abusing the "material witness" statute; refused to prosecute Ashcroft's successor (and suspected perjurer) Alberto Gonzales for his role in the politically motivated firing of nine federal prosecutors; refused to prosecute Justice Department authors of the now infamous "torture memos," like John Yoo and Jay Bybee; and, more recently, refused to prosecute former CIA official Jose Rodriguez Jr. for destroying tapes that purportedly showed CIA agents torturing detainees.
  • thanks to Wikileaks, the world has been enlightened to the fact that the Obama administration not only refused to prosecute these individuals itself, it also exerted pressure on the governments of Germany and Spain not to prosecute, or even indict, any of the torturers or war criminals from the Bush dictatorship.
  • we see many right-wing commentators demanding that Assange be hunted down, with some even calling for his murder, on the grounds that he may have endangered lives by releasing confidential government documents. Yet, for the right-wing, this apparently was not a concern when the late columnist Robert Novak "outed" CIA agent Valerie Plame after her husband Joseph Wilson authored an OP-ED piece in The New York Times criticizing the motivations for waging war against Iraq. Even though there was evidence of involvement within the highest echelons of the Bush dictatorship, only one person, Lewis "Scooter" Libby, was indicted and convicted of "outing" Plame to Novak. And, despite the fact that this "outing" potentially endangered the lives of Plame's overseas contacts, Bush commuted Libby's thirty-month prison sentence, calling it "excessive."
  • Why the disparity? The answer is simple: The Plame "outing" served the interests of the military-industrial complex and helped to conceal the Bush dictatorship's lies, tortures and war crimes, while Wikileaks not only exposed such evils, but also revealed how Obama's administration, and Obama himself, are little more than "snake oil" merchants pontificating about government accountability while undermining it at every turn.
  • When the United States Constitution was being created, a conflict emerged between delegates who wanted a strong federal government (the Federalists) and those who wanted a weak federal government (the anti-Federalists). Although the Federalists won the day, one of the most distinguished anti-Federalists, George Mason, refused to sign the new Constitution, sacrificing in the process, some historians say, a revered place amongst America's founding fathers. Two of Mason's concerns were that the Constitution did not contain a Bill of Rights, and that the presidential pardon powers would allow corrupt presidents to pardon people who had committed crimes on presidential orders.
  • Mason's concerns about the abuse of the pardon powers were eventually proven right when Gerald Ford pardoned Richard Nixon, when Ronald Reagan pardoned FBI agents convicted of authorizing illegal break-ins, and when George H.W. Bush pardoned six individuals involved in the Iran-Contra Affair.
  • Mason was also proven right after the Federalists realized that the States would not ratify the Constitution unless a Bill of Rights was added. But this was done begrudgingly, as demonstrated by America's second president, Federalist John Adams, who essentially destroyed the right to freedom of speech via the Alien and Sedition Acts, which made it a crime to say, write or publish anything critical of the United States government.
  • Most criminals break laws that others have created, and people who assist in exposing or apprehending them are usually lauded as heroes. But with the "espionage" acts, the criminals themselves have actually created laws to conceal their crimes, and exploit these laws to penalize people who expose them.
  • The problem with America's system of government is that it has become too easy, and too convenient, to simply stamp "classified" on documents that reveal acts of government corruption, cover-up, mendacity and malfeasance, or to withhold them "in the interest of national security." Given this web of secrecy, is it any wonder why so many Americans are still skeptical about the "official" versions of the John F. Kennedy or Martin Luther King Jr. assassinations, or the events surrounding the attacks of September 11, 2001?
  • I want to believe that the Wikileaks documents will change America for the better. But what undoubtedly will happen is a repetition of the past: those who expose government crimes and cover-ups will be prosecuted or branded as criminals; new laws will be passed to silence dissent; new Liebermans will arise to intimidate the corporate-controlled media; and new ways will be found to conceal the truth.
  • What Wikileaks has done is make people understand why so many Americans are politically apathetic and content to lose themselves in one or more of the addictions American culture offers, be it drugs, alcohol, the Internet, video games, celebrity gossip, or text-messaging: in essence, anything that serves to divert attention from the harshness of reality.
  • the evils committed by those in power can be suffocating, and the sense of powerlessness that erupts from being aware of these evils can be paralyzing, especially when accentuated by the knowledge that government evildoers almost always get away with their crimes
Weiye Loh

Rationally Speaking: Are Intuitions Good Evidence?

  • Is it legitimate to cite one’s intuitions as evidence in a philosophical argument?
  • appeals to intuitions are ubiquitous in philosophy. What are intuitions? Well, that’s part of the controversy, but most philosophers view them as intellectual “seemings.” George Bealer, perhaps the most prominent defender of intuitions-as-evidence, writes, “For you to have an intuition that A is just for it to seem to you that A… Of course, this kind of seeming is intellectual, not sensory or introspective (or imaginative).”[2] Other philosophers have characterized them as “noninferential belief due neither to perception nor introspection”[3] or alternatively as “applications of our ordinary capacities for judgment.”[4]
  • Philosophers may not agree on what, exactly, intuition is, but that doesn’t stop them from using it. “Intuitions often play the role that observation does in science – they are data that must be explained, confirmers or the falsifiers of theories,” Brian Talbot says.[5] Typically, the way this works is that a philosopher challenges a theory by applying it to a real or hypothetical case and showing that it yields a result which offends his intuitions (and, he presumes, his readers’ as well).
  • For example, John Searle famously appealed to intuition to challenge the notion that a computer could ever understand language: “Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output)… If the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.” Should we take Searle’s intuition that such a system would not constitute “understanding” as good evidence that it would not? Many critics of the Chinese Room argument argue that there is no reason to expect our intuitions about intelligence and understanding to be reliable.
  • Ethics leans especially heavily on appeals to intuition, with a whole school of ethicists (“intuitionists”) maintaining that a person can see the truth of general ethical principles not through reason, but because he “just sees without argument that they are and must be true.”[6]
  • Intuitions are also called upon to rebut ethical theories such as utilitarianism: maximizing overall utility would require you to kill one innocent person if, in so doing, you could harvest her organs and save five people in need of transplants. Such a conclusion is taken as a reductio ad absurdum, requiring utilitarianism to be either abandoned or radically revised – not because the conclusion is logically wrong, but because it strikes nearly everyone as intuitively wrong.
  • British philosopher G.E. Moore used intuition to argue that the existence of beauty is good irrespective of whether anyone ever gets to see and enjoy that beauty. Imagine two planets, he said, one full of stunning natural wonders – trees, sunsets, rivers, and so on – and the other full of filth. Now suppose that nobody will ever have the opportunity to glimpse either of those two worlds. Moore concluded, “Well, even so, supposing them quite apart from any possible contemplation by human beings; still, is it irrational to hold that it is better that the beautiful world should exist than the one which is ugly? Would it not be well, in any case, to do what we could to produce it rather than the other? Certainly I cannot help thinking that it would."7
  • Although similar appeals to intuition can be found throughout all the philosophical subfields, their validity as evidence has come under increasing scrutiny over the last two decades, from philosophers such as Hilary Kornblith, Robert Cummins, Stephen Stich, Jonathan Weinberg, and Jaakko Hintikka (links go to representative papers from each philosopher on this issue). The severity of their criticisms varies from Weinberg’s warning that “We simply do not know enough about how intuitions work,” to Cummins’ wholesale rejection of philosophical intuition as “epistemologically useless.”
  • One central concern for the critics is that a single question can inspire totally different, and mutually contradictory, intuitions in different people.
  • For example, I disagree with Moore’s intuition that it would be better for a beautiful planet to exist than an ugly one even if there were no one around to see it. I can’t understand what the words “better” and “worse,” let alone “beautiful” and “ugly,” could possibly mean outside the domain of the experiences of conscious beings
  • If we want to take philosophers’ intuitions as reason to believe a proposition, then the existence of opposing intuitions leaves us in the uncomfortable position of having reason to believe both a proposition and its opposite.
  • “I suspect there is overall less agreement than standard philosophical practice presupposes, because having the ‘right’ intuitions is the entry ticket to various subareas of philosophy,” Weinberg says.
  • But the problem that intuitions are often not universally shared is overshadowed by another problem: even if an intuition is universally shared, that doesn’t mean it’s accurate. For in fact there are many universal intuitions that are demonstrably false.
  • People who have not been taught otherwise typically assume that an object dropped out of a moving plane will fall straight down to earth, at exactly the same latitude and longitude from which it was dropped. What will actually happen is that, because the object begins its fall with the same forward momentum it had while it was on the plane, it will continue to travel forward, tracing out a curve as it falls and not a straight line. “Considering the inadequacies of ordinary physical intuitions, it is natural to wonder whether ordinary moral intuitions might be similarly inadequate,” Princeton’s Gilbert Harman has argued,9 and the same could be said for our intuitions about consciousness, metaphysics, and so on. (The standard projectile-motion derivation appears after this list.)
  • We can’t usually “check” the truth of our philosophical intuitions externally, with an experiment or a proof, the way we can in physics or math. But it’s not clear why we should expect intuitions to be true. If we have an innate tendency towards certain intuitive beliefs, it’s likely because they were useful to our ancestors.
  • But there’s no reason to expect that the intuitions which were true in the world of our ancestors would also be true in other, unfamiliar contexts
  • And for some useful intuitions, such as moral ones, “truth” may have been beside the point. It’s not hard to see how moral intuitions in favor of fairness and generosity would have been crucial to the survival of our ancestors’ tribes, as would the intuition to condemn tribe members who betrayed those reciprocal norms. If we can account for the presence of these moral intuitions by the fact that they were useful, is there any reason left to hypothesize that they are also “true”? The same question could be asked of the moral intuitions which Jonathan Haidt has classified as “purity-based” – an aversion to incest, for example, would clearly have been beneficial to our ancestors. Since that fact alone suffices to explain the (widespread) presence of the “incest is morally wrong” intuition, why should we take that intuition as evidence that “incest is morally wrong” is true?
  • The still-young debate over intuition will likely continue to rage, especially since it’s intertwined with a rapidly growing body of cognitive and social psychological research examining where our intuitions come from and how they vary across time and place.
  • its resolution bears on the work of literally every field of analytic philosophy, except perhaps logic. Can analytic philosophy survive without intuition? (If so, what would it look like?) And can the debate over the legitimacy of appeals to intuition be resolved with an appeal to intuition?
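[Editor's note: the "program" in Searle's Chinese Room annotation above can be caricatured in a few lines of code. This is a purely illustrative sketch with an invented two-entry rule book, not anyone's actual proposal; it exhibits the bare symbol lookup whose status as "understanding" the intuition disputes.]

    # Toy "Chinese Room": replies come from table lookup alone.
    # The rule book is invented for illustration; nothing here understands
    # Chinese any more than a dictionary does -- which is Searle's point,
    # and what his critics dispute for systems vastly larger than this one.
    RULE_BOOK = {
        "你好吗？": "我很好。",      # "How are you?" -> "I am fine."
        "你会说中文吗？": "会。",    # "Do you speak Chinese?" -> "Yes."
    }

    def chinese_room(symbols: str) -> str:
        """Pass out whatever symbols the instructions prescribe for the input."""
        return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

    print(chinese_room("你好吗？"))  # prints: 我很好。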
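[Editor's note: the falling-object annotation above is standard projectile motion, and a two-line derivation makes the point exact. Neglecting air resistance, an object released at altitude h from a plane flying horizontally at speed v_0 obeys

    x(t) = v_0 t, \qquad y(t) = h - \tfrac{1}{2} g t^2
    \quad\Longrightarrow\quad y(x) = h - \frac{g}{2 v_0^2}\, x^2 ,

a parabola; the object strikes the ground a horizontal distance v_0 \sqrt{2h/g} ahead of the release point, not straight below it, exactly as the annotation says and against the common intuition.]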
Weiye Loh

Rationally Speaking: A pluralist approach to ethics - 0 views

  • The history of Western moral philosophy includes numerous attempts to ground ethics in one rational principle, standard, or rule. This narrative stretches back 2,500 years to the Greeks, who were interested mainly in virtue ethics and the moral character of the person. The modern era has seen two major additions. In 1785, Immanuel Kant introduced the categorical imperative: act only under the assumption that what you do could be made into a universal law. And in 1789, Jeremy Bentham proposed utilitarianism: work toward the greatest happiness of the greatest number of people (the “utility” principle).
  • Many people now think projects to build a reasonable and coherent moral system are doomed. Still, most secular and religious people reject the alternative of moral relativism, and have spent much ink criticizing it (among my favorite books on the topic is Moral Relativism by Stephen Lukes). The most recent and controversial work in this area comes from Sam Harris. In The Moral Landscape, Harris argues for a morality based on (a science of) well-being and flourishing, rather than religious dogma.
  • I am interested in another oft-heard criticism of Harris’ book, which is that words like “well-being” and “flourishing” are too general to form any relevant basis for morality. This criticism has some force to it, as these certainly are somewhat vague terms. But what if “well-being” and “flourishing” were to be used only as a starting point for a moral framework? These concepts would still put us on a better grounding than religious faith. But they cannot stand alone. Nor do they need to.
  • ...4 more annotations...
  • 1. The harm principle bases our ethical considerations on other beings’ capacity for higher-level subjective experience. Human beings (and some animals) have the potential — and desire — to experience deep pleasure and happiness while seeking to avoid pain and suffering. We have the obligation, then, to afford creatures with these capacities, desires and relations a certain level of respect. They also have other emotional and social interests: for instance, friends and families concerned with their health and enjoyment. These actors also deserve consideration.
  • 2. If we have a moral obligation to act a certain way toward someone, that should be reflected in law. Rights theory is the idea that there are certain rights worth granting to people with very few, if any, caveats. Many of these rights were spelled out in the founding documents of this country, the Declaration of Independence (which admittedly has no legal pull) and the Constitution (which does). They have been defended in a long history of U.S. Supreme Court rulings. They have also been expanded on in the U.N.’s 1948 Universal Declaration of Human Rights and in the founding documents of other countries around the world. To name a few, they include: freedom of belief, speech and expression, due process, equal treatment, health care, and education.
  • 3. While we ought to consider our broader moral efforts, and focus on our obligations to others, it is also important to place attention on our quality as moral agents. A vital part of fostering a respectable pluralist moral framework is to encourage virtues, and cultivate moral character. A short list of these virtues would include prudence, justice, wisdom, honesty, compassion, and courage. One should study these, and strive to put these into practice and work to be a better human being, as Aristotle advised us to do.
  • most people already are ethical pluralists. Life and society are complex to navigate, and one cannot rely on a single idea for guidance. It is probably accurate to say that people lean more toward one theory, rather than practice it to the exclusion of all others. Of course, this only describes the fact that people think about morality in a pluralistic way. But the outlined approach is supported by sound reasoning — that is, unless you are ready to entirely dismiss 2,500 years of Western moral philosophy.
  •  
    while each ethical system discussed so far has its shortcomings, put together they form a solid possibility. One system might not be able to do the job required, but we can assemble a mature moral outlook containing parts drawn from different systems put forth by philosophers over the centuries (plus some biology, but that's Massimo's area). The following is a rough sketch of what I think a decent pluralist approach to ethics might look like.
Weiye Loh

Lying Adapts to New Technology - NYTimes.com - 0 views

  • Being constantly reachable makes butler lies necessary to many people, and the Cornell researchers concluded in a subsequent study that ambiguities inherent in traditional texting also made them easier.
  • Yet technology is already laying siege to the butler lie. Services like BlackBerry Messenger enable mutual users to track when their texts are read, effectively torpedoing the “sorry, phone died last night” excuse. “Friend tracking” applications like Google Latitude allow people to geographically pinpoint their friends’ mobile phones. So much for “stuck in traffic” when you really overslept.
  • People are already adapting, finding ways to circumvent BlackBerry Messenger and read texts undetected, Dr. Birnholtz said. Others form “lie clubs,” groups who back up one another’s phony texts. But if technology has spawned new ruses, are we actually lying more? So far, researchers say no.
  •  
    Many believe it is easier to lie by text than by phone or in person, but emerging research indicates that's not necessarily true. We've always lied; new technologies are merely changing the ways and the reasons we lie. Witness the "butler lie," a term coined by Cornell University researchers in 2009 to describe lies that politely initiate and terminate instant messaging conversations. ("Gotta go, boss is coming!") Like butlers, they act as social buffers, telling others that we are at lunch when we are just avoiding them.
Weiye Loh

Rationally Speaking: The Michael Hecht-Rationally Speaking affair - 0 views

  • As many of our readers and podcast listeners have now learned, author, colleague and friend Jennifer Michael Hecht started an internet campaign on June 22nd, using social media to accuse us of plagiarism.
  • Jennifer apparently believes that we in some form stole her ideas, as presented in her 2008 book, The Happiness Myth
  • We protested our innocence, emphasizing that the only areas of overlap between her book and our podcast concern a few very common topics about happiness (its treatment by Aristotle and Epicurus, so-called happiness “set points,” and the question of whether wealth is connected to happiness). These, we pointed out, are so fundamental to a discussion of happiness that they are practically mandatory in any treatment of it. It would be odd indeed to have a show on happiness and not mention the research on set points, or on income and happiness — sort of like talking about evolution without mentioning Darwin and natural selection. We also pointed out that said topics make up only a small fraction of those we discussed in the podcast, and of her book for that matter. These ideas are certainly not Jennifer’s original contributions (of which there are many genuine examples in her book); rather, they have been widely discussed in the media, academic journals, and in many popular press books, such as Stumbling on Happiness by Daniel Todd Gilbert, Authentic Happiness by Martin E. P. Seligman, and The Happiness Hypothesis, by Jonathan Haidt.
  • ...3 more annotations...
  • a podcast (as opposed to, say, a book, or a technical paper) is a summary for a lay audience, and is not in any way a scholarly pursuit towards defining new ideas on the topic. This means that it isn't even clear how the very concept of plagiarism could possibly apply in this context. Nevertheless, we asked Jennifer — multiple times — to provide us with a detailed list of her charges, such as at what points in the podcast we used exactly what from her book. We thought that was fair, considering that she was the one making the potentially damaging charges. She refused, stating that we should do that kind of homework on our own. So we did. Below is a table that Julia and I put together, with a minute-by-minute summary and commentary of the entire podcast.
  • c) Those ideas that do overlap with Jennifer’s are common knowledge in the field. 
  • We deeply regret this incident, particularly the manner in which Jennifer has chosen to exploit social networks to smear our reputation before even attempting to contact us and hear our side of the story. We stand by the content and form of our podcast, which we think is intrinsically interesting (while certainly not groundbreaking!). We also still profess admiration for Jennifer’s work, not just about happiness, but in her other books as well, and hope that this ugly incident can soon be put behind us so that we can all get back to what we enjoy doing: writing and talking about interesting topics for an intelligent and informed audience.
Weiye Loh

Rationally Speaking: Response to Jonathan Haidt's response, on the academy's liberal bias - 0 views

  • Dear Prof. Haidt, You understandably got upset by my harsh criticism of your recent claims about the mechanisms behind the alleged anti-conservative bias that apparently so permeates the modern academy. I find it amusing that you simply assumed I had not looked at your talk and was therefore speaking without reason. Yet, I have indeed looked at it (it is currently published at Edge, a non-peer reviewed webzine), and found that it simply doesn’t add much to the substance (such as it is) of Tierney’s summary.
  • Yes, you do acknowledge that there may be multiple reasons for the imbalance between the number of conservative and liberal leaning academics, but then you go on to characterize the academy, at least in your field, as a tribe having a serious identity issue, with no data whatsoever to back up your preferred subset of causal explanations for the purported problem.
  • your talk is simply an extended op-ed piece, which starts out with a summary of your findings about the different moral outlooks of conservatives and liberals (which I have criticized elsewhere on this blog), and then proceeds to build a flimsy case based on a couple of anecdotes and some badly flawed data.
  • ...4 more annotations...
  • For instance, slide 23 shows a Google search for “liberal social psychologist,” highlighting the fact that one gets a whopping 2,740 results (which, actually, by Google standards is puny; a search under my own name yields 145,000, and I ain’t no Lady Gaga). You then compared this search to one for “conservative social psychologist” and get only three entries.
  • First of all, if Google searches are the main tool of social psychology these days, I fear for the entire field. Second, I actually re-did your searches — at the prompting of one of my readers — and came up with quite different results. As the photo here shows, if you actually bother to scroll through the initial Google search for “liberal social psychologist” you will find that there are in fact only 24 results, to be compared to 10 (not 3) if you search for “conservative social psychologist.” Oops. From this scant data I would simply conclude that political orientation isn’t a big deal in social psychology.
  • Your talk continues with some pretty vigorous hand-waving: “We rely on our peers to find flaws in our arguments, but when there is essentially nobody out there to challenge liberal assumptions and interpretations of experimental findings, the peer review process breaks down, at least for work that is related to those sacred values.” Right, except that I would like to see a systematic survey of exactly how the lack of conservative peer review has affected the quality of academic publications. Oh, wait, it hasn’t, at least according to what you yourself say in the next sentence: “The great majority of work in social psychology is excellent, and is unaffected by these problems.” I wonder how you know this, and why — if true — you then think that there is a problem. Philosophers call this an inherent contradiction; it’s a common example of a bad argument.
  • Finally, let me get to your outrage at the fact that I have allegedly accused you of academic misconduct and lying. I have done no such thing, and you really ought (in the ethical sense) to be careful when throwing those words around. I have simply raised the logical possibility that you (and Tierney) have an agenda, a possibility based on reading several of the things both you and Tierney have written of late. As a psychologist, I’m sure you are aware that biases can be unconscious, and therefore need not imply that the person in question is lying or engaging in any form of purposeful misconduct. Or were you implying in your own talk that your colleagues’ bias was conscious? Because if so, you have just accused an entire profession of misconduct.
Weiye Loh

Libel Chill and Me « Skepticism « Critical Thinking « Skeptic North - 0 views

  • Skeptics may by now be very familiar with recent attempts in Canada to ban wifi from public schools and libraries.  In short: there is no valid scientific reason to be worried about wifi.  It has also been revealed that the chief scientists pushing the wifi bans have been relying on poor data and even poorer studies.  By far the vast majority of scientific data that currently exists supports the conclusion that wifi and cell phone signals are perfectly safe.
  • So I wrote about that particular topic in the summer.  It got some decent coverage, but the fear mongering continued. I wrote another piece after I did a little digging into one of the main players behind this, one Rodney Palmer, and I discovered some decidedly pseudo-scientific tendencies in his past, as well as some undisclosed collusion.
  • One night I came home after a long day at work, a long commute, and a phone call that a beloved family pet was dying, and will soon be in significant pain.  That is the state I was in when I read the news about Palmer and Parliamentary committee.
  • ...18 more annotations...
  • That’s when I wrote my last significant piece for Skeptic North.  Titled, “Rodney Palmer: When Pseudoscience and Narcissism Collide,” it was a fiery take-down of every claim I heard Palmer speak before the committee, as well as reiterating some of his undisclosed collusion, unethical media tactics, and some reasons why he should not be considered an expert.
  • This time, the article got a lot more reader eyeballs than anything I had ever written for this blog (or my own) and it also caught the attention of someone on a school board which was poised to vote on wifi.  In these regards: Mission very accomplished.  I finally thought that I might be able to see some people in the media start to look at Palmer’s claims with a more critical eye than they had been previously, and I was flattered at the mountain of kind words, re-tweets, reddit comments and Facebook “likes.”
  • The comments section was mostly supportive of my article, and they were one of the few things that kept me from hiding in a hole for six weeks.  There were a few comments in opposition to what I wrote, some sensible, most incoherent rambling (one commenter, when asked for evidence, actually linked to a YouTube video which they referred to as “peer reviewed”)
  • One commenter was none other than the titular subject of the post, Rodney Palmer himself. Here is a screen shot of what he said: [screen shot of the libel/slander threat]
  • Knowing full well the story of the libel threat against Simon Singh, I’ve always thought that if ever a threat like that came my way, I’d happily beat it back with the righteous fury and good humour of a person with the facts on their side. After all, if I’m wrong, you’d be able to prove me wrong, rather than try to shut me up with a threat of a lawsuit. Indeed, I’ve been through a similar situation once before, so I should be an old hand at this! Let me tell you friends, it’s not that easy. In fact, it’s awful. Outside observers could easily identify that Palmer had no case against me, but that was still cold comfort to me. It is a very stressful situation to find yourself in.
  • The state of libel and slander law in this country is such that a person can threaten a lawsuit without actually threatening a lawsuit. There is no need to hire a lawyer to investigate the claims, look into who I am, where I live, where I work, and issue a carefully worded threatening letter demanding compliance. All a person has to say is some version of “Libel. Slander. Hmmm…,” and that’s enough to spook a lot of people into backing off. It’s a modern-day bogeyman. They don’t have to prove it. They don’t have to act on it. A person or organization just has to say “BOO!” with sufficient seriousness, and unless you’ve got a good deal of editorial and financial support, discussion goes out the window. Libel Chill refers to the ‘chilling effect’ that the possibility of a libel/slander lawsuit has. If a person is scared they might get sued, then they won’t even comment on a piece at all. In my case, I had already commented three times on the wifi scaremongering, but this bogus threat against me was surely a major contributing factor to my not commenting again.
  • I ceased to discuss anything in the comment thread of the original article, and even shied away from other comment threads calling me out. I learned a great deal about the wifi/EMF issue since I wrote the article, but I did not comment on any of it, because I knew that Palmer and his supporters were watching me like a hawk (sorry to stretch the simile), and would likely try to silence me again. I couldn’t risk a lawsuit. Even though I knew there was no case against me, I couldn’t afford a lawyer just to prove that I didn’t do anything illegal.
  • Ontario’s Libel and Slander Act, 1990, hasn’t really caught up with the internet. There isn’t a clear precedent that defines a blog post, Twitter feed or Facebook post as falling under the umbrella of “broadcast,” which is what the act addresses. If I had written the original article in print, Palmer would have had six weeks to file suit against me. But the internet is only kind of considered ‘broadcast.’ So it could be just six weeks, but he could also have up to two years to act and get a lawyer after me. Truth is, there’s not a clear demarcation point for our Canadian legal system.
  • Libel laws in Canada are somewhere in between the Plaintiff-favoured UK system, and the Defendant-favoured US system.  On the one hand, if Palmer chose to incur the expense and time to hire a lawyer and file suit against me, the burden of proof would be on me to prove that I did not act with malice.  Easy peasy.  On the other hand, I would have a strong case that I acted in the best interests of Canadians, which would fall under the recent Supreme Court of Canada decision on protecting what has been termed, “Responsible Communication.”  The Supreme Court of Canada decision does not grant bloggers immunity from libel and slander suits, but it is a healthy dose of welcome freedom to discuss issues of importance to Canadians.
  • Palmer himself did not specify anything against me in his threat.  There was nothing particular that he complained about, he just said a version of “Libel and Slander!” at me.  He may as well have said “Boo!”
  • This is not a DBAD discussion (although I wholeheartedly agree with Phil Plait there). 
  • If you’d like to boil my lessons down to an acronym, I suppose the best one would be DBRBC: Don’t Be Reckless; Be Careful.
  • I wrote a piece that, although it was not incorrect in any measurable way, was written with fire and brimstone, piss and vinegar.  I stand by my piece, but I caution others to be a little more careful with the language they use.  Not because I think it is any less or more tactically advantageous (because I’m not sure anyone can conclusively demonstrate that being an aggressive jerk is an inherently better or worse communication tool), but because the risks aren’t always worth it.
  • I’m not saying don’t go after a person.  There are egomaniacs out there who deserve to be called out and taken down (verbally, of course).  But be very careful with what you say.
  • ask yourself some questions first: 1) What goal(s) are you trying to accomplish with this piece? Are you trying to convince people that there is a scientific misunderstanding here? Are you trying to attract the attention of the mainstream media to a particular facet of the issue? Are you really just pissed off and want to vent a little bit? Is this article a catharsis, or is it communicative? Be brutally honest with your intentions; it’s not as easy as you think. Venting is okay. So is vicious venting, but be careful what you dress it up as.
  • 2) In order to attain your goals, did you use data, or personalities?  If the former, are you citing the best, most current data you have available to you? Have you made a reasonable effort to check your data against any conflicting data that might be out there? If the latter, are you providing a mountain of evidence, and not just projecting onto personalities?  There is nothing inherently immoral or incorrect with going after the personalities.  But it is a very risky undertaking. You have to be damn sure you know what you’re talking about, and damn ready to defend yourself.  If you’re even a little loose with your claims, you will be called out for it, and a legal threat is very serious and stressful. So if you’re going after a personality, is it worth it?
  • 3) Are you letting the science speak for itself?  Are you editorializing?  Are you pointing out what part of your piece is data and what part is your opinion?
  • 4) If this piece was written in anger, frustration, or otherwise motivated by a powerful emotion, take a day.  Let your anger subside.  It will.  There are many cathartic enterprises out there, and you don’t need to react to the first one that comes your way.  Let someone else read your work before you share it with the internet.  Cooler heads definitely do think more clearly.
Weiye Loh

Land Destroyer: Alternative Economics - 0 views

  • Peer-to-peer file sharing (P2P) has made media distribution free and has become the bane of media monopolies. P2P file sharing means digital files can be copied and distributed at no cost. CDs, DVDs, and other older forms of holding media are no longer necessary, nor is the cost involved in making them or distributing them along a traditional logistical supply chain. Disc burners, however, allow users to create their own physical copies at a fraction of the cost of buying the media from the stores. Supply and demand is turned on its head: the more popular a certain file becomes via demand, the more of it is available for sharing, and the easier it is to obtain. Supply and demand increase in tandem, lowering the “price” of obtaining the file. Consumers demand more as price decreases. Producers naturally want to produce more of something as price increases. Somewhere in between, consumers and producers meet at the market price or “market equilibrium.” P2P technology eliminates material scarcity, so the more a file is in demand, the more people end up downloading it, and the easier it is for others to find it and download it. Consider the implications this would have if technology made physical objects as easy to “share” as information is now. (The textbook equilibrium condition is written out after this list.)
  • In the end, it is not government regulations, legal contrivances, or licenses that govern information, but rather the free market mechanism commonly referred to as Adam Smith's self regulating "Invisible Hand of the Market." In other words, people selfishly seeking accurate information for their own benefit encourage producers to provide the best possible information to meet their demand. While this is not possible in a monopoly, particularly the corporate media monopoly of the "left/right paradigm" of false choice, it is inevitable in the field of real competition that now exists online due to information technology.
  • Compounding the establishment's troubles are cheaper cameras and cheaper, more capable software for 3D graphics, editing, mixing, and other post production tasks, allowing for the creation of an alternative publishing, audio and video industry. "Underground" counter-corporate music and film has been around for a long time but through the combination of technology and the zealous corporate lawyers disenfranchising a whole new generation that now seeks an alternative, it is truly coming of age.
  • ...3 more annotations...
  • With a growing community of people determined to become collaborative producers rather than fit into the producer/consumer paradigm, and 3D files for physical objects already being shared like movies and music, the implications are profound. Products, and the manufacturing technology used to make them will continue to drop in price, become easier to make for individuals rather than large corporations, just as media is now shifting into the hands of the common people. And like the shift of information, industry will move from the elite and their agenda of preserving their power, to the end of empowering the people.
  • In a future alternative economy where everyone is a collaborative designer, producer, and manufacturer instead of passive consumers and when problems like "global climate change," "overpopulation," and "fuel crises" cross our path, we will counter them with technical solutions, not political indulgences like carbon taxes, and not draconian decrees like "one-child policies."
  • We will become the literal architects of our own future in this "personal manufacturing" revolution. While these technologies may still appear primitive, or somewhat "useless" or "impractical" we must remember where our personal computers stood on the eve of the dawning of the information age and how quickly they changed our lives. And while many of us may be unaware of this unfolding revolution, you can bet the globalists, power brokers, and all those that stand to lose from it not only see it but are already actively fighting against it.Understandably it takes some technical know-how to jump into the personal manufacturing revolution. In part 2 of "Alternative Economics" we will explore real world "low-tech" solutions to becoming self-sufficient, local, and rediscover the empowerment granted by doing so.
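[Editor's note: the equilibrium claim in the first annotation above can be made concrete with the textbook linear case, using illustrative coefficients that are not from the article. With demand Q_d = a - bP and supply Q_s = c + dP, where b, d > 0, the market clears when

    Q_d = Q_s \quad\Longrightarrow\quad P^{*} = \frac{a - c}{b + d} .

The annotation's point is that for a P2P-shared file the marginal cost of one more copy is roughly zero and the available supply grows with demand, so the upward-sloping supply curve, and with it this equilibrium story, no longer applies.]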
Weiye Loh

Rationally Speaking: On ethics, part III: Deontology - 0 views

  • Plato showed convincingly in his Euthyphro dialogue that even if gods existed they would not help at all settling the question of morality.
  • Broadly speaking, deontological approaches fall into the same category as consequentialism — they are concerned with what we ought to do, as opposed to what sort of persons we ought to be (the latter is, most famously, the concern of virtue ethics). That said, deontology is the chief rival of consequentialism, and the two have distinct advantages and disadvantages that seem so irreducible
  • Here is one way to understand the difference between consequentialism and deontology: for the former the consequences of an action are moral if they increase the Good (which, as we have seen, can be specified in different ways, including increasing happiness and/or decreasing pain). For the latter, the fundamental criterion is conformity to moral duties. You could say that for the deontologist the Right (sometimes) trumps the Good. Of course, as a result consequentialists have to go through the trouble of defining and justifying the Good, while deontologists have to tackle the task of defining and justifying the Right.
  • ...10 more annotations...
  • two major “modes” of deontology: agent-centered and victim-centered. Agent-centered deontology is concerned with permissions and obligations to act toward other agents, the typical example being parents’ duty to protect and nurture their children. Notice the immediate departure from consequentialism, here, since the latter is an agent-neutral type of ethics (we have seen that it has trouble justifying the idea of special treatment of relatives or friends). Where do such agent-relative obligations come from? From the fact that we make explicit or implicit promises to some agents but not others. By bringing my child into the world, for instance, I make a special promise to that particular individual, a promise that I do not make to anyone else’s children. While this certainly doesn’t mean that I don’t have duties toward other children (like inflicting no intentional harm), it does mean that I have additional duties toward my own children as a result of the simple fact that they are mine.
  • Agent-centered deontology gets into trouble because of its close philosophical association to some doctrines that originated within Catholic theology, like the idea of double effect. (I should immediately clarify that the trouble is not due to the fact that these doctrines are rooted in a religious framework, it’s their intrinsic moral logic that is at issue here.) For instance, for agent-centered deontologists we are morally forbidden from killing innocent others (reasonably enough), but this prohibition extends even to cases when so doing would actually save even more innocents.
  • Those familiar with trolleology will recognize one of the classic forms of the trolley dilemma here: is it right to throw an innocent person in front of the out-of-control trolley in order to save five others? For consequentialists the answer is a no-brainer: of course yes, you are saving a net of four lives! But for the deontologist you are now using another person (the innocent you are throwing to stop the trolley) as a means to an end, thus violating one of the forms of Kant’s imperative: “Act in such a way that you treat humanity, whether in your own person or in the person of any other, always at the same time as an end and never merely as a means to an end.”
  • The other form, in case you are wondering, is: “Act only according to that maxim whereby you can at the same time will that it should become a universal law without contradiction.”
  • Victim-centered deontologies are right- rather than duty-based, which of course does raise the question of why we think of them as deontological to begin with.
  • The fundamental idea about victim-centered deontology is the right that people have not to be used by others without their consent. This is where we find Robert Nozick-style libertarianism, which I have already criticized on this blog. One of the major implications of this version of deontology is that there is no strong moral duty to help others.
  • contractarian deontological theories. These deal with social contracts of the type, for instance, discussed by John Rawls in his theory of justice. However, I will devote a separate post to contractarianism, in part because it is so important in ethics, and in part because one can argue that contractarianism is really a meta-ethical theory, and therefore does not strictly fall under deontology per se.
  • deontological theories have the advantage over consequentialism in that they account for special concerns for one’s relatives and friends, as we have seen above. Consequentialism, by comparison, comes across as alienating and unreasonably demanding. Another advantage of deontology over consequentialism is that it accounts for the intuition that even if an act is not morally demanded it may still be praiseworthy. For a consequentialist, on the contrary, if something is not morally demanded it is then morally forbidden. (Another way to put this is that consequentialism is a more minimalist approach to ethics than deontology.) Moreover, deontology also deals much better than consequentialism with the idea of rights.
  • deontological theories run into the problem that they seem to give us permission, and sometimes even require, to make things actually morally worse in the world. Indeed, a strict deontologist could actually cause human catastrophes by adhering to Kant’s imperative and still think he acted morally (Kant at one point remarked that it is “better the whole people should perish” than that injustice be done — one wonders injustice to whom, since nobody would be left standing). Deontologists also have trouble dealing with the seemingly contradictory ideas that our duties are categorical (i.e., they do not admit of exceptions), and yet that some duties are more important than others. (Again, Kant famously stated that “a conflict of duties is inconceivable” while forgetting to provide any argument in defense of such a bold statement.)
  • One famous attempt at this reconciliation was proposed by Thomas Nagel (he of “what is it like to be a bat?” fame). Nagel suggested that perhaps we should be consequentialists when it comes to agent-neutral reasoning, and deontologists when we engage in agent-relative reasoning. He neglected to specify, however, any non-mysterious way to decide what to do in those situations in which the same moral dilemma can be seen from both perspectives.
Weiye Loh

Referees' quotes - 2010 - 2010 - Environmental Microbiology - Wiley Online Library - 0 views

  • This paper is desperate. Please reject it completely and then block the author's email ID so they can't use the online system in future.
  • The type of lava vs. diversity has no meaning if only one of each sample is analyzed; multiple samples are required for generality. This controls provenance (e.g. maybe some beetle took a pee on one or the other of the samples, seriously skewing relevance to lava composition).
  • Merry X-mas! First, my recommendation was reject with new submission, because it is necessary to investigate further, but reading a well written manuscript before X-mas makes me feel like Santa Claus.
  • ...6 more annotations...
  • Season's Greetings! I apologise for my slow response but a roast goose prevented me from answering emails for a few days.
  • I started to review this but could not get much past the abstract.
  • Stating that the study is confirmative is not a good start for the Discussion. Rephrasing the first sentence of the Discussion would seem to be a good idea.
  • Reject – More holes than my grandad's string vest!
  • The writing and data presentation are so bad that I had to leave work and go home early and then spend time to wonder what life is about.
  • Sorry for the overdue, it seems to me that ‘overdue’ is my constant, persistent and chronic EMI status. Good that the reviewers are not getting red cards! The editors could create, in addition to the referees quotes, a ranking for ‘on-time’ referees. I would get the bottom place. But fast is not equal to good (I am consoling myself!)
  • It hurts me a little to have so little criticism of a manuscript.
  • Based on titles seen in journals, many authors seem to be more fascinated these days by their methods than by their science. The authors should be encouraged to abstract the main scientific (i.e., novel) finding into the title.
Weiye Loh

Roger Pielke Jr.'s Blog: Innovation in Drug Development: An Inverse Moore's Law? - 0 views

  • Today's FT has this interesting graph and an accompanying story, showing a sort of inverse Moore's Law of drug development. Over almost 60 years the number of new drugs developed per unit of investment has declined in a fairly constant manner, and some drug companies are now slashing their R&D budgets. (A compact formula for this trend appears after this list.)
  • why this trend has occurred.  The FT points to a combination of low-hanging fruit that has been plucked and increasing costs of drug development. To some observers, that reflects the end of the mid to late 20th century golden era for drug discovery, when first-generation medicines such as antibiotics and beta-blockers to treat high blood pressure transformed healthcare. At the same time, regulatory demands to prove safety and efficacy have grown firmer. The result is larger and more costly clinical trials, and high failure rates for experimental drugs.
  • Others point to flawed innovation policies in industry and governments: “The markets treat drug companies as though research and development spending destroys value,” says Jack Scannell, an analyst at Bernstein Research. “People have stopped distinguishing the good from the bad. All those which performed well returned cash to shareholders. Unless the industry can articulate what the problem is, I don’t expect that to change.”
  • ...6 more annotations...
  • Mr [Andrew] Baum [of Morgan Stanley] argues that the solution for drug companies is to share the risks of research with others. That means reducing in-house investment in research, and instead partnering and licensing experimental medicines from smaller companies after some of the early failures have been eliminated.
  • Chas Bountra of Oxford university calls for a more radical partnership combining industry and academic research. “What we are trying to do is just too difficult,” he says. “No one organisation can do it, so we have to pool resources and expertise.” He suggests removing intellectual property rights until a drug is in mid-stage testing in humans, which would make academics more willing to co-operate because they could publish their results freely. The sharing of data would enable companies to avoid duplicating work.
  • The challenge is for academia and biotech companies to fill the research gap. Mr Ratcliffe argues that after a lull in 2009 and 2010, private capital is returning to the sector – as demonstrated by a particular buzz at JPMorgan’s new year biotech conference in California.
  • Patrick Vallance, senior vice-president for discovery at GSK, is cautious about deferring patents until so late, arguing that drug companies need to be able to protect their intellectual property in order to fund expensive late-stage development. But he too is experimenting with ways to co-operate more closely with academics over longer periods. He is also championing the “externalisation” of the company’s pipeline, with biotech and university partners accounting for half the total. GSK has earmarked £50m to support fledgling British companies, many “wrapped around” the group’s sites. One such example is Convergence, a spin-out from a GSK lab researching pain relief.
  • Big pharmaceutical companies are scrambling to find ways to overcome the loss of tens of billions of dollars in revenue as patents on top-selling drugs run out. Many sound similar notes about encouraging entrepreneurialism in their ranks, making smart deals and capitalizing on emerging-market growth. But their actual plans are often quite different—and each carries significant risks. Novartis AG, for instance, is so convinced that diversification is the best course that the company has a considerable business selling low-priced generics. Meantime, Bristol-Myers Squibb Co. has decided to concentrate on innovative medicines, shedding so many nonpharmaceutical units that it has become midsize. GlaxoSmithKline PLC is still investing in research, but like Pfizer it has narrowed the range of disease areas in which it's seeking new treatments. Underlying the divergence is a deep-seated philosophical dispute over the merits of the heavy investment that companies must make to discover new drugs. By most estimates, bringing a new molecule to market costs drug makers more than $1 billion. Industry officials have been engaged in a vigorous debate over whether the investment is worth it, or whether they should leave it to others whose work they can acquire or license after a demonstration of strong potential.
  • To what extent can approaches to innovation influence the trend line in the graph above? I don't think that anyone really knows the answer. The different approaches being taken by Merck and Pfizer, for instance, represent a real-world policy experiment: The contrast between Merck and Pfizer reflects the very different personal approaches of their CEOs. An accountant by training, Mr. Read has held various business positions during a three-decade career at Pfizer. The 57-year-old cited torcetrapib, a cholesterol medicine that the company spent more than $800 million developing but then pulled due to safety concerns, as an example of the kind of wasteful spending Pfizer would avoid. "We're going to have metrics," Mr. Read said. He wants Pfizer to stop "always investing on hope rather than strong signals and the quality of the science, the quality of the medicine." Mr. Frazier, 56, a Harvard-educated lawyer who joined Merck in 1994 from private practice, said the company was sticking by its own troubled heart drug, vorapaxar. Mr. Frazier said he wanted to see all of the data from the trials before rushing to judgment. "We believe in the innovation approach," he said.
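[Editor's note: the declining trend in the first annotation above, later dubbed "Eroom's Law" (Moore spelled backwards) by Jack Scannell and colleagues, is usually summarized as exponential decay:

    N(t) = N_0 \, 2^{-t/\tau} ,

where N(t) is the number of new drugs approved per billion dollars of inflation-adjusted R&D spending t years after a baseline, and the halving time \tau is commonly estimated at roughly nine years. The FT piece quotes no figure itself, so the value of \tau here is an outside estimate, not a claim from the article.]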
Weiye Loh

Tom Morris - Catholicism and copyright - 0 views

  • One of the most amusing things about Scientology is the fact that the scriptures of the church are copyright and some are kept very secret. The business model is simple: you have to pay to read more
  • The Bible isn’t copyright. The Qu’ran isn’t copyright. If you want to publish your own version of a huge range of religious texts, you can. Pop over to Wikisource and you can read copyright-free editions of the Bible, prayers, the Apocrypha and the Tao Te Ching among many thousands of other religious texts (and why not some atheist/humanist manifestos too?). This enables scholarship: theologians, historians and others can make their own commentaries building atop these scriptures. Critical scholarship of the sort Biblical commentators do is helped by not having the threat of a lawsuit hanging over one if one quotes a bit too much from the text.
  • ...4 more annotations...
  • What makes the Scientology situation so egregious is that no independent theological, philosophical or critical reflection can happen when the text is locked away. There seems to me to be a conflict here. If you believe you have access to a truth that has the ability to save people in the afterlife or to dramatically make their life better in this one, you have some kind of duty to share it. Or rather, if you are keeping your religious truths to yourself and not sharing them, people have very good reason to believe you might be a huckster.
  • But I found out today that Scientology is not alone in locking up their teachings behind the wall of copyright. The Catholic Church does too. All of the copyright in the papal writings of Pope Benedict XVI now belong to the Vatican publishing house, Libreria Editrice Vaticana.
  • The writings of the Pope will not go out of copyright until 70 years after his death.
  • What benefit is this to anyone? Did the lack of copyright protection for writings of Popes before the current copyright regime prevent the spread of Catholicism? If everything the Pope wrote was in public domain, would it prevent the development of the “useful Arts and Sciences”, as the U.S. Constitution puts it? The motivation of the Pope is really not the same as the motivation of the Walt Disney company. Without copyright protection, the Church will not fall to bits. Indeed, one interesting question is what the copyright status of the Catholic Catechism is. This is the basic doctrine of the Catholic faith. I would presume it is copyright in much the same way. If we criticise Scientology for locking its scriptures up behind copyright, surely the same could be said for the Catechism? For a body like the Catholic Church, it would seem totally reasonable and straight-forward to simply release all their materials completely as public domain.
Weiye Loh

Climategate: Hiding the Decline? - 0 views

  • Regarding the “hide the decline” email, Jones has explained that when he used the word “trick”, he simply meant “a mathematical approach brought to bear to solve a problem”. The inquiry made the following criticism of the resulting graph (its emphasis): [T]he figure supplied for the WMO Report was misleading. We do not find that it is misleading to curtail reconstructions at some point per se, or to splice data, but we believe that both of these procedures should have been made plain — ideally in the figure but certainly clearly described in either the caption or the text. [1.3.2] But this was one isolated instance that occurred more than a decade ago. The Review did not find anything wrong with the overall picture painted about divergence (or uncertainties generally) in the literature and in IPCC reports. The Review notes that the WMO report in question “does not have the status or importance of the IPCC reports”, and concludes that divergence “is not hidden” and “the subject is openly and extensively discussed in the literature, including CRU papers.” [1.3.2] (A toy version of “curtail and splice” appears after this list.)
  • As for the treatment of uncertainty in the AR4’s paleoclimate chapter, the Review concludes that the central Figure 6.10 is not misleading, that “[t]he variation within and between lines, as well as the depiction of uncertainty is quite apparent to any reader”, that “there has been no exclusion of other published temperature reconstructions which would show a very different picture”, and that “[t]he general discussion of sources of uncertainty in the text is extensive, including reference to divergence”. [7.3.1]
  • Regarding CRU’s selections of tree ring series, the Review does not presume to say whether one series is better than another, though it does point out that CRU have responded to the accusation that Briffa misused the Yamal data on their website. The Review found no evidence that CRU scientists knowingly promoted non-representative series or that their input cast doubt on the IPCC’s conclusions. The much-maligned Yamal series was included in only 4 of the 12 temperature reconstructions in the AR4 (and not at all in the TAR).
  • ...1 more annotation...
  • What about the allegation that CRU withheld the Yamal data? The Review found that “CRU did not withhold the underlying raw data (having correctly directed the single request to the owners)”, although “we believe that CRU should have ensured that the data they did not own, but on which their publications relied, was archived in a more timely way.” [1.3.2]
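[Editor's note: "curtail and splice," as discussed in the first annotation above, is a mechanical operation on two time series. Below is a toy sketch with invented numbers, no real CRU or WMO data, included only to make the inquiry's criticism concrete: the operation itself is simple and defensible; what matters is saying plainly that it was done.]

    # Toy illustration of curtailing a proxy reconstruction at a divergence
    # year and splicing in the instrumental record. All values are invented.
    proxy        = {year: 0.010 * (year - 1900) for year in range(1900, 2000)}
    instrumental = {year: 0.012 * (year - 1900) for year in range(1900, 2000)}

    CUTOFF = 1960  # proxy values from this year on are dropped ("curtailed")
    spliced = {y: v for y, v in proxy.items() if y < CUTOFF}
    spliced.update((y, v) for y, v in instrumental.items() if y >= CUTOFF)

    # The inquiry's point: any figure drawn from `spliced` should state, in
    # the caption or text, where the proxy ends and the thermometers begin.
    assert len(spliced) == 100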
Weiye Loh

Rationally Speaking: Is modern moral philosophy still in thrall to religion? - 0 views

  • Recently I re-read Richard Taylor’s An Introduction to Virtue Ethics, a classic published by Prometheus
  • Taylor compares virtue ethics to the other two major approaches to moral philosophy: utilitarianism (a la John Stuart Mill) and deontology (a la Immanuel Kant). Utilitarianism, of course, is roughly the idea that ethics has to do with maximizing pleasure and minimizing pain; deontology is the idea that reason can tell us what we ought to do from first principles, as in Kant’s categorical imperative (e.g., something is right if you can agree that it could be elevated to a universally acceptable maxim).
  • Taylor argues that utilitarianism and deontology — despite being wildly different in a variety of respects — share one common feature: both philosophies assume that there is such a thing as moral right and wrong, and a duty to do right and avoid wrong. But, he says, on the face of it this is nonsensical. Duty isn’t something one can have in the abstract, duty is toward a law or a lawgiver, which begs the question of what could arguably provide us with a universal moral law, or who the lawgiver could possibly be.
  • ...11 more annotations...
  • His answer is that both utilitarianism and deontology inherited the ideas of right, wrong and duty from Christianity, but endeavored to do without Christianity’s own answers to those questions: the law is given by God and the duty is toward Him. Taylor says that Mill, Kant and the like simply absorbed the Christian concept of morality while rejecting its logical foundation (such as it was). As a result, utilitarians and deontologists alike keep talking about the right thing to do, or the good as if those concepts still make sense once we move to a secular worldview. Utilitarians substituted pain and pleasure for wrong and right respectively, and Kant thought that pure reason can arrive at moral universals. But of course neither utilitarians nor deontologists ever give us a reason why it would be irrational to simply decline to pursue actions that increase global pleasure and diminish global pain, or why it would be irrational for someone not to find the categorical imperative particularly compelling.
  • The situation — again according to Taylor — is dramatically different for virtue ethics. Yes, there too we find concepts like right and wrong and duty. But, for the ancient Greeks they had completely different meanings, which made perfect sense then and now, if we are not mislead by the use of those words in a different context. For the Greeks, an action was right if it was approved by one’s society, wrong if it wasn’t, and duty was to one’s polis. And they understood perfectly well that what was right (or wrong) in Athens may or may not be right (or wrong) in Sparta. And that an Athenian had a duty to Athens, but not to Sparta, and vice versa for a Spartan.
  • But wait a minute. Does that mean that Taylor is saying that virtue ethics was founded on moral relativism? That would be an extraordinary claim indeed, and he does not, in fact, make it. His point is a bit more subtle. He suggests that for the ancient Greeks ethics was not (principally) about right, wrong and duty. It was about happiness, understood in the broad sense of eudaimonia, the good or fulfilling life. Aristotle in particular wrote in his Ethics about both aspects: the practical ethics of one’s duty to one’s polis, and the universal (for human beings) concept of ethics as the pursuit of the good life. And make no mistake about it: for Aristotle the first aspect was relatively trivial and understood by everyone, it was the second one that represented the real challenge for the philosopher.
  • For instance, the Ethics is famous for Aristotle’s list of the virtues (see table below), and his idea that the right thing to do is to steer a middle course between extreme behaviors. But this part of his work, according to Taylor, refers only to the practical ways of being a good Athenian, not to the universal pursuit of eudaimonia.

Vice of Deficiency | Virtuous Mean | Vice of Excess
Cowardice | Courage | Rashness
Insensibility | Temperance | Intemperance
Illiberality | Liberality | Prodigality
Pettiness | Munificence | Vulgarity
Humble-mindedness | High-mindedness | Vaingloriness
Want of Ambition | Right Ambition | Over-ambition
Spiritlessness | Good Temper | Irascibility
Surliness | Friendly Civility | Obsequiousness
Ironical Depreciation | Sincerity | Boastfulness
Boorishness | Wittiness | Buffoonery
  • How, then, is one to embark on the more difficult task of figuring out how to live a good life? For Aristotle eudaimonia meant the best kind of existence that a human being can achieve, which in turn means that we need to ask what it is that makes humans different from all other species, because it is the pursuit of excellence in that distinctive capacity that makes for a eudaimonic life.
  • Now Plato, writing before Aristotle, ended up construing the good life somewhat narrowly and in a self-serving fashion. He reckoned that the thing that distinguishes humanity from the rest of the biological world is our ability to use reason, so that is what we should pursue as our highest goal in life. And of course nobody is better equipped than a philosopher for such an enterprise... Which reminds me of Bertrand Russell’s quip that “A process which led from the amoeba to man appeared to the philosophers to be obviously a progress, though whether the amoeba would agree with this opinion is not known.”
  • But Aristotle's conception of "reason" was significantly broader, and here is where Taylor’s own update of virtue ethics begins to shine, particularly in Chapter 16 of the book, aptly entitled “Happiness.” Taylor argues that the proper way to understand virtue ethics is as the quest for the use of intelligence in the broadest possible sense: creativity applied to all walks of life. He says: “Creative intelligence is exhibited by a dancer, by athletes, by a chess player, and indeed in virtually any activity guided by intelligence [including — but certainly not limited to — philosophy].” He continues: “The exercise of skill in a profession, or in business, or even in such things as gardening and farming, or the rearing of a beautiful family, all such things are displays of creative intelligence.”
  • What we have now, then, is a sharp distinction between utilitarianism and deontology on the one hand and virtue ethics on the other, where the first two are (mistakenly, in Taylor’s assessment) concerned with the impossible questions of what is right or wrong and what our duties are — questions inherited from religion that in fact make no sense outside of a religious framework. Virtue ethics, instead, focuses on the two things that really matter and to which we can find answers: the practical pursuit of a life within our polis, and the lifelong quest for eudaimonia, understood as the best exercise of our creative faculties.
  • > So if one's profession is that of assassin or torturer, would being the best that you can be still be your duty and eudaimonic? And what about those poor blighters who end up with an ugly family?
< Aristotle's philosophy is very much concerned with virtue, and being an assassin or a torturer is not a virtue, so the concept of a eudaimonic life for those characters is oxymoronic. As for ending up in an "ugly" family, Aristotle did write that eudaimonia is in part the result of luck, because it is affected by circumstances.
  • > So to the title question of this post, "Is modern moral philosophy still in thrall to religion?", one should say: Yes, for some residual forms of philosophy and for some philosophers.
< That misses Taylor's contention (which I find intriguing, though I have to give it more thought) that *all* modern moral philosophy, except virtue ethics, is in thrall to religion without realizing it.