
New Media Ethics 2009 course / Group items tagged Secularism


Weiye Loh

Religion as a catalyst of rationalization « The Immanent Frame - 0 views

  • For Habermas, religion has been a continuous concern precisely because it is related to both the emergence of reason and the development of a public space of reason-giving. Religious ideas, according to Habermas, are never mere irrational speculation. Rather, they possess a form, a grammar or syntax, that unleashes rational insights, even arguments; they contain, not just specific semantic contents about God, but also a particular structure that catalyzes rational argumentation.
  • in his earliest, anthropological-philosophical stage, Habermas approaches religion from a predominantly philosophical perspective. But as he undertakes the task of “transforming historical materialism” that will culminate in his magnum opus, The Theory of Communicative Action, there is a shift from philosophy to sociology and, more generally, social theory. With this shift, religion is treated, not as a germinal for philosophical concepts, but instead as the source of the social order.
  • What is noteworthy about this juncture in Habermas’s writings is that secularization is explained as “pressure for rationalization” from “above,” which meets the force of rationalization from below, from the realm of technical and practical action oriented to instrumentalization. Additionally, secularization here is not simply the process of the profanation of the world—that is, the withdrawal of religious perspectives as worldviews and the privatization of belief—but, perhaps most importantly, religion itself becomes the means for the translation and appropriation of the rational impetus released by its secularization.
  • ...6 more annotations...
  • religion becomes its own secular catalyst, or, rather, secularization itself is the result of religion. This approach will mature in the most elaborate formulation of what Habermas calls the “linguistification of the sacred,” in volume two of The Theory of Communicative Action. There, basing himself on Durkheim and Mead, Habermas shows how ritual practices and religious worldviews release rational imperatives through the establishment of a communicative grammar that conditions how believers can and should interact with each other, and how they relate to the idea of a supreme being. Habermas writes: worldviews function as a kind of drive belt that transforms the basic religious consensus into the energy of social solidarity and passes it on to social institutions, thus giving them a moral authority. [. . .] Whereas ritual actions take place at a pregrammatical level, religious worldviews are connected with full-fledged communicative actions.
  • The thrust of Habermas’s argumentation in this section of The Theory of Communicative Action is to show that religion is the source of the normative binding power of ethical and moral commandments. Yet there is an ambiguity here. While the contents of worldviews may be sublimated into the normative binding of social systems, it is not entirely clear that the structure, or the grammar, of religious worldviews is itself exhausted. Indeed, in “A Genealogical Analysis of the Cognitive Content of Morality,” Habermas resolves this ambiguity by claiming that the horizontal relationship among believers and the vertical relationship between each believer and God shape the structure of our moral relationship to our neighbour, but now under two corresponding aspects: that of solidarity and that of justice. Here, the grammar of one’s religious relationship to God and the corresponding community of believers are like the exoskeleton of a magnificent species, which, once the religious worldviews contained in them have desiccated under the impact of the forces of secularization, leave behind a casing to be used as a structuring shape for other contents.
  • Metaphysical thinking, which for Habermas has become untenable by the very logic of philosophical development, is characterized by three aspects: identity thinking, or the philosophy of origins that postulates the correspondence between being and thought; the doctrine of ideas, which becomes the foundation for idealism, which in turn postulates a tension between what is perceived and what can be conceptualized; and a concomitant strong concept of theory, where the bios theoretikos takes on a quasi-sacred character, and where philosophy becomes the path to salvation through dedication to a life of contemplation. By “postmetaphysical” Habermas means the new self-understanding of reason that we are able to obtain after the collapse of the Hegelian idealist system—the historicization of reason, or the de-substantivation that turns it into a procedural rationality, and, above all, its humbling. It is noteworthy that one of the main aspects of the new postmetaphysical constellation is that in the wake of the collapse of metaphysics, philosophy is forced to recognize that it must co-exist with religious practices and language: Philosophy, even in its postmetaphysical form, will be able neither to replace nor to repress religion as long as religious language is the bearer of semantic content that is inspiring and even indispensable, for this content eludes (for the time being?) the explanatory force of philosophical language and continues to resist translation into reasoning discourses.
  • metaphysical thinking either surrendered philosophy to religion or sought to eliminate religion altogether. In contrast, postmetaphysical thinking recognizes that philosophy can neither replace nor dismissively reject religion, for religion continues to articulate a language whose syntax and content elude philosophy, but from which philosophy continues to derive insights into the universal dimensions of human existence.
  • Habermas claims that even moral discourse cannot translate religious language without something being lost: “Secular languages which only eliminate the substance once intended leave irritations. When sin was converted to culpability, and the breaking of divine commands to an offence against human laws, something was lost.” Still, Habermas’s concern with religion is no longer solely philosophical, nor merely socio-theoretical, but has taken on political urgency. Indeed, he now asks whether modern rule of law and constitutional democracies can generate the motivational resources that nourish them and make them durable. In a series of essays, now gathered in Between Naturalism and Religion, as well as in his Europe: The Faltering Project, Habermas argues that as we have become members of a world society (Weltgesellschaft), we have also been forced to adopt a societal “post-secular self-consciousness.” By this term Habermas does not mean that secularization has come to an end, and even less that it has to be reversed. Instead, he now clarifies that secularization refers very specifically to the secularization of state power and to the general dissolution of metaphysical, overarching worldviews (among which religious views are to be counted). Additionally, as members of a world society that has, if not a fully operational, at least an incipient global public sphere, we have been forced to witness the endurance and vitality of religion. As members of this emergent global public sphere, we are also forced to recognize the plurality of forms of secularization. Secularization did not occur in one form, but in a variety of forms and according to different chronologies.
  • through a critical reading of Rawls, Habermas has begun to translate the postmetaphysical orientation of modern philosophy into a postsecular self-understanding of modern rule of law societies in such a way that religious citizens as well as secular citizens can co-exist, not just by force of a modus vivendi, but out of a sincere mutual respect. “Mutual recognition implies, among other things, that religious and secular citizens are willing to listen and to learn from each other in public debates. The political virtue of treating each other civilly is an expression of distinctive cognitive attitudes.” The cognitive attitudes Habermas is referring to here are the very cognitive competencies that are distinctive of modern, postconventional social agents. Habermas’s recent work on religion, then, is primarily concerned with rescuing for the modern liberal state those motivational and moral resources that it cannot generate or provide itself. At the same time, his recent work is concerned with foregrounding the kind of ethical and moral concerns, preoccupations, and values that can guide us between the Scylla of a society administered from above by the system imperatives of a global economy and political power and the Charybdis of a technological frenzy that places us on the slippery slope of a liberally sanctioned eugenics.
  •  
    Religion in the public sphere: Religion as a catalyst of rationalization posted by Eduardo Mendieta
Weiye Loh

The new SingaNews - 13 views

Hi Valerie, I fully agree with your reply. However, there are some issues I would like to raise. "It seems a Christian cannot do anything in the secular realm without drawing criticisms or at th...

SingaNews Christian Fundamentalism Family Objectivity

Weiye Loh

Our Kind of Truth - Ian Buruma - Project Syndicate - 0 views

  • Of course, not everything in the mainstream media is always true. Mistakes are made. News organizations have political biases, sometimes reflecting the views and interests of their owners. But high-quality journalism has always relied on its reputation for probity. Editors, as well as reporters, at least tried to get the facts right. That is why people read Le Monde, The New York Times, or, indeed, the Washington Post. Filtering nonsense was one of their duties – and their main selling point.
  • It is unlikely that Rick Santorum, or many of his followers, have read any post-modern theorists. Santorum, after all, recently called Obama a “snob” for claiming that all Americans should be entitled to a college education. So he must surely loathe writers who represent everything that the Tea Party and other radical right-wingers abhor: the highly educated, intellectual, urban, secular, and not always white. These writers are the left-wing elite, at least in academia.
  • But, as so often happens, ideas have a way of migrating in unexpected ways. The blogger who dismissed The Washington Post’s corrections of Santorum’s fictional portrayal of the Netherlands expressed himself like a perfect post-modernist. The most faithful followers of obscure leftist thinkers in Paris, New York, or Berkeley are the most reactionary elements in the American heartland. Of course, if this were pointed out to them, they would no doubt dismiss it as elitist propaganda.
Weiye Loh

On Forgiveness - NYTimes.com - 0 views

  • What is forgiveness? When is it appropriate? Why is it considered to be commendable?  Some claim that forgiveness is merely about ridding oneself of vengeful anger; do that, and you have forgiven.  But if you were able to banish anger from your soul simply by taking a pill, would the result really be forgiveness?
  • The timing of forgiveness is also disputed. Some say that it should wait for the offender to take responsibility and suffer due punishment, others hold that the victim must first overcome anger altogether, and still others that forgiveness should be unilaterally bestowed at the earliest possible moment.  But what if you have every good reason to be angry and even to take your sweet revenge as well?  Is forgiveness then really to be commended? Some object that it lets the offender off the hook, confesses to one’s own weakness and vulnerability, and papers over the legitimate demands of vengeful anger.  And yet, legions praise forgiveness and think of it as an indispensable virtue
  • Many people assume that the notion of forgiveness is Christian in origin, at least in the West, and that the contemporary understanding of interpersonal forgiveness has always been the core Christian teaching on the subject.  These contestable assumptions are explored by David Konstan in “Before Forgiveness: The Origins of a Moral Idea.”  Religious origins of the notion would not invalidate a secular philosophical approach to the topic, any more than a secular origin of some idea precludes a religious appropriation of it.  While religious and secular perspectives on forgiveness are not necessarily consistent with each other, however, they agree in their attempt to address the painful fact of the pervasiveness of moral wrong in human life. They also agree on this: few of us are altogether innocent of the need for forgiveness.
  • ...2 more annotations...
  • It’s not simply a matter of lifting the burden of toxic resentment or of immobilizing guilt, however beneficial that may be ethically and psychologically.  It is not a merely therapeutic matter, as though this were just about you.  Rather, when the requisite conditions are met, forgiveness is what a good person would seek because it expresses fundamental moral ideals.  These include ideals of spiritual growth and renewal; truth-telling; mutual respectful address; responsibility and respect; reconciliation and peace.
  • Are any wrongdoers unforgivable?  People who have committed heinous acts such as torture or child molestation are often cited as examples.  The question is not primarily about the psychological ability of the victim to forswear anger, but whether a wrongdoer can rightly be judged not-to-be-forgiven no matter what offender and victim say or do.  I do not see that a persuasive argument for that thesis can be made; there is no such thing as the unconditionally unforgivable.  For else we would be faced with the bizarre situation of declaring illegitimate the forgiveness reached by victim and perpetrator after each has taken every step one could possibly wish for.  The implication may distress you: Osama bin Laden, for example, is not unconditionally unforgivable for his role in the attacks of 9/11.  That being said, given the extent of the injury done by grave wrongs, their author may be rightly unforgiven for an appropriate period even if he or she has taken all reasonable steps.  There is no mathematically precise formula for determining when it is appropriate to forgive.
Weiye Loh

Skepticblog » Further Thoughts on Atheism - 0 views

  • Even before I started writing Evolution: How We and All Living Things Came to Be I knew that it would very briefly mention religion, make a mild assertion that religious questions are out of scope for science, and move on. I knew this was likely to provoke blow-back from some in the atheist community, and I knew mentioning that blow-back in my recent post “The Standard Pablum — Science and Atheism” would generate more.
  • Still, I was surprised by the quantity of the responses to the blog post (208 comments as of this moment, many of them substantial letters), and also by the fierceness of some of those responses. For example, according to one poster, “you not only pandered, you lied. And even if you weren’t lying, you lied.” (Several took up this “lying” theme.) Another, disappointed that my children’s book does not tell a general youth audience to look to “secular humanism for guidance,” declared  that “I’d have to tear out that page if I bought the book.”
  • I don’t mean to suggest that there are not points of legitimate disagreement in the mix — there are, many of them stated powerfully. There are also statements of support, vigorous debate, and (for me at least) a good deal of food for thought. I invite anyone to browse the thread, although I’d urge you to skim some of it. (The internet is after all a hyperbole-generating machine.)
  • ...10 more annotations...
  • I lack any belief in any deity. More than that, I am persuaded (by philosophical argument, not scientific evidence) to a high degree of confidence that gods and an afterlife do not exist.
  • do try to distinguish between my work as a science writer and skeptical activist on the one hand, and my personal opinions about religion and humanism on the other.
  • Atheism is a practical handicap for science outreach. I’m not naive about this, but I’m not cynical either. I’m a writer. I’m in the business of communicating ideas about science, not throwing up roadblocks and distractions. It’s good communication to keep things as clear, focused, and on-topic as possible.
  • Atheism is divisive for the skeptical community, and it distracts us from our core mandate. I was blunt about this in my 2007 essay “Where Do We Go From Here?”, writing, I’m both an atheist and a secular humanist, but it is clear to me that atheism is an albatross for the skeptical movement. It divides us, it distracts us, and it marginalizes us. Frankly, we can’t afford that. We need all the help we can get.
  • In What Do I Do Next? I urged skeptics to remember that there are many other skeptics who do hold or identify with some religion. Indeed, the modern skeptical movement is built partly on the work of people of faith (including giants like Harry Houdini and Martin Gardner). You don’t, after all, have to be against god to be against fraud.
  • In my Skeptical Inquirer article “The Paradoxical Future of Skepticism” I argued that skeptics must set aside the conceit that our goal is a cultural revolution or the dawning of a new Enlightenment. … When we focus on that distant, receding, and perhaps illusory goal, we fail to see the practical good we can do, the harm-reduction opportunities right in front of us. The long view subverts our understanding of the scale and hazard of paranormal beliefs, leading to sentiments that the paranormal is “trivial” or “played out.” By contrast, the immediate, local, human view — the view that asks “Will this help someone?” — sees obvious opportunities for every local group and grassroots skeptic to make a meaningful difference.
  • This practical argument, that skepticism can get more done if we keep our mandate tight and avoid alienating our best friends, seems to me an important one. Even so, it is not my main reason for arguing that atheism and skepticism are different projects.
  • In my opinion, metaphysics and ethics are out of scope for science — and therefore out of scope for skepticism. This is by far the most important reason I set aside my own atheism when I put on my “skeptic” hat. It’s not that I don’t think atheism is rational — I do. That’s why I’m an atheist. But I know that I cannot claim scientific authority for a conclusion that science cannot test, confirm, or disprove. And so, I restrict myself as much as possible, in my role as a skeptic and science writer, to investigable claims. I’ve become a cheerleader for this “testable claims” criterion (and I’ll discuss it further in future posts) but it’s not a new or radical constriction of the scope of skepticism. It’s the traditional position occupied by skeptical organizations for decades.
  • In much of the commentary, I see an assumption that I must not really believe that testable paranormal and pseudoscientific claims (“I can read minds”) are different in kind from the untestable claims we often find at the core of religion (“god exists”). I acknowledge that many smart people disagree on this point, but I assure you that this is indeed what I think.
  • I’d like to call out one blogger’s response to my “Standard Pablum” post. The author certainly disagrees with me (we’ve discussed the topic often on Twitter), but I thank him for describing my position fairly: From what I’ve read of Daniel’s writings before, this seems to be a very consistent position that he has always maintained, not a new one he adopted for the book release. It appears to me that when Daniel says that science has nothing to say about religion, he really means it. I have nothing to say to that. It also appears to me that when he says skepticism is a “different project than atheism” he also means it.
  •  
    FURTHER THOUGHTS ON ATHEISM by DANIEL LOXTON, Mar 05 2010
Weiye Loh

The Death of Postmodernism And Beyond | Philosophy Now - 0 views

  • Most of the undergraduates who will take ‘Postmodern Fictions’ this year will have been born in 1985 or after, and all but one of the module’s primary texts were written before their lifetime. Far from being ‘contemporary’, these texts were published in another world, before the students were born: The French Lieutenant’s Woman, Nights at the Circus, If on a Winter’s Night a Traveller, Do Androids Dream of Electric Sheep? (and Blade Runner), White Noise: this is Mum and Dad’s culture. Some of the texts (‘The Library of Babel’) were written even before their parents were born. Replace this cache with other postmodern stalwarts – Beloved, Flaubert’s Parrot, Waterland, The Crying of Lot 49, Pale Fire, Slaughterhouse 5, Lanark, Neuromancer, anything by B.S. Johnson – and the same applies. It’s all about as contemporary as The Smiths, as hip as shoulder pads, as happening as Betamax video recorders. These are texts which are just coming to grips with the existence of rock music and television; they mostly do not dream even of the possibility of the technology and communications media – mobile phones, email, the internet, computers in every house powerful enough to put a man on the moon – which today’s undergraduates take for granted.
  • somewhere in the late 1990s or early 2000s, the emergence of new technologies re-structured, violently and forever, the nature of the author, the reader and the text, and the relationships between them.
  • Postmodernism, like modernism and romanticism before it, fetishised [ie placed supreme importance on] the author, even when the author chose to indict or pretended to abolish him or herself. But the culture we have now fetishises the recipient of the text to the degree that they become a partial or whole author of it. Optimists may see this as the democratisation of culture; pessimists will point to the excruciating banality and vacuity of the cultural products thereby generated (at least so far).
  • ...17 more annotations...
  • Pseudo-modernism also encompasses contemporary news programmes, whose content increasingly consists of emails or text messages sent in commenting on the news items. The terminology of ‘interactivity’ is equally inappropriate here, since there is no exchange: instead, the viewer or listener enters – writes a segment of the programme – then departs, returning to a passive role. Pseudo-modernism also includes computer games, which similarly place the individual in a context where they invent the cultural content, within pre-delineated limits. The content of each individual act of playing the game varies according to the particular player.
  • The pseudo-modern cultural phenomenon par excellence is the internet. Its central act is that of the individual clicking on his/her mouse to move through pages in a way which cannot be duplicated, inventing a pathway through cultural products which has never existed before and never will again. This is a far more intense engagement with the cultural process than anything literature can offer, and gives the undeniable sense (or illusion) of the individual controlling, managing, running, making up his/her involvement with the cultural product. Internet pages are not ‘authored’ in the sense that anyone knows who wrote them, or cares. The majority either require the individual to make them work, like Streetmap or Route Planner, or permit him/her to add to them, like Wikipedia, or through feedback on, for instance, media websites. In all cases, it is intrinsic to the internet that you can easily make up pages yourself (eg blogs).
  • Where once special effects were supposed to make the impossible appear credible, CGI frequently [inadvertently] works to make the possible look artificial, as in much of Lord of the Rings or Gladiator. Battles involving thousands of individuals have really happened; pseudo-modern cinema makes them look as if they have only ever happened in cyberspace.
  • Similarly, television in the pseudo-modern age favours not only reality TV (yet another unapt term), but also shopping channels, and quizzes in which the viewer calls to guess the answer to riddles in the hope of winning money.
  • The purely ‘spectacular’ function of television, as with all the arts, has become a marginal one: what is central now is the busy, active, forging work of the individual who would once have been called its recipient. In all of this, the ‘viewer’ feels powerful and is indeed necessary; the ‘author’ as traditionally understood is either relegated to the status of the one who sets the parameters within which others operate, or becomes simply irrelevant, unknown, sidelined; and the ‘text’ is characterised both by its hyper-ephemerality and by its instability. It is made up by the ‘viewer’, if not in its content then in its sequence – you wouldn’t read Middlemarch by going from page 118 to 316 to 401 to 501, but you might well, and justifiably, read Ceefax that way.
  • A pseudo-modern text lasts an exceptionally brief time. Unlike, say, Fawlty Towers, reality TV programmes cannot be repeated in their original form, since the phone-ins cannot be reproduced, and without the possibility of phoning-in they become a different and far less attractive entity.
  • If scholars give the date they referenced an internet page, it is because the pages disappear or get radically re-cast so quickly. Text messages and emails are extremely difficult to keep in their original form; printing out emails does convert them into something more stable, like a letter, but only by destroying their essential, electronic state.
  • The cultural products of pseudo-modernism are also exceptionally banal
  • Much text messaging and emailing is vapid in comparison with what people of all educational levels used to put into letters.
  • A triteness, a shallowness dominates all.
  • In music, the pseudo-modern superseding of the artist-dominated album as monolithic text by the downloading and mix-and-matching of individual tracks on to an iPod, selected by the listener, was certainly prefigured by the music fan’s creation of compilation tapes a generation ago. But a shift has occurred, in that what was a marginal pastime of the fan has become the dominant and definitive way of consuming music, rendering the idea of the album as a coherent work of art, a body of integrated meaning, obsolete.
  • To a degree, pseudo-modernism is no more than a technologically motivated shift to the cultural centre of something which has always existed (similarly, metafiction has always existed, but was never so fetishised as it was by postmodernism). Television has always used audience participation, just as theatre and other performing arts did before it; but as an option, not as a necessity: pseudo-modern TV programmes have participation built into them.
  • Whereas postmodernism called ‘reality’ into question, pseudo-modernism defines the real implicitly as myself, now, ‘interacting’ with its texts. Thus, pseudo-modernism suggests that whatever it does or makes is what is reality, and a pseudo-modern text may flourish the apparently real in an uncomplicated form: the docu-soap with its hand-held cameras (which, by displaying individuals aware of being regarded, give the viewer the illusion of participation); The Office and The Blair Witch Project, interactive pornography and reality TV; the essayistic cinema of Michael Moore or Morgan Spurlock.
  • whereas postmodernism favoured the ironic, the knowing and the playful, with their allusions to knowledge, history and ambivalence, pseudo-modernism’s typical intellectual states are ignorance, fanaticism and anxiety
  • pseudo-modernism lashes fantastically sophisticated technology to the pursuit of medieval barbarism – as in the uploading of videos of beheadings onto the internet, or the use of mobile phones to film torture in prisons. Beyond this, the destiny of everyone else is to suffer the anxiety of getting hit in the cross-fire. But this fatalistic anxiety extends far beyond geopolitics, into every aspect of contemporary life; from a general fear of social breakdown and identity loss, to a deep unease about diet and health; from anguish about the destructiveness of climate change, to the effects of a new personal ineptitude and helplessness, which yield TV programmes about how to clean your house, bring up your children or remain solvent.
  • Pseudo-modernism belongs to a world pervaded by the encounter between a religiously fanatical segment of the United States, a largely secular but definitionally hyper-religious Israel, and a fanatical sub-section of Muslims scattered across the planet: pseudo-modernism was not born on 11 September 2001, but postmodernism was interred in its rubble.
  • pseudo-modernist communicates constantly with the other side of the planet, yet needs to be told to eat vegetables to be healthy, a fact self-evident in the Bronze Age. He or she can direct the course of national television programmes, but does not know how to make him or herself something to eat – a characteristic fusion of the childish and the advanced, the powerful and the helpless. For varying reasons, these are people incapable of the “disbelief of Grand Narratives” which Lyotard argued typified postmodernists
  •  
    Postmodern philosophy emphasises the elusiveness of meaning and knowledge. This is often expressed in postmodern art as a concern with representation and an ironic self-awareness. And the argument that postmodernism is over has already been made philosophically. There are people who have essentially asserted that for a while we believed in postmodern ideas, but not any more, and from now on we're going to believe in critical realism. The weakness in this analysis is that it centres on the academy, on the practices and suppositions of philosophers who may or may not be shifting ground or about to shift - and many academics will simply decide that, finally, they prefer to stay with Foucault [arch postmodernist] than go over to anything else. However, a far more compelling case can be made that postmodernism is dead by looking outside the academy at current cultural production.
Weiye Loh

Liberal Democrat conference | Libel laws silence scientists | Richard Dawkins | Comment... - 0 views

  • Scientists often disagree with one another, sometimes passionately. But they don't go to court to sort out their differences, they go into the lab, repeat the experiments, carefully examine the controls and the statistical analysis. We care about whether something is true, supported by the evidence. We are not interested in whether somebody sincerely believes he is right.
    • Weiye Loh
       
       Exactly the reason why appeals to faith cannot work in secularism! Unfortunately, people who are unable to prove their point usually resort to underhanded straw-man methods: throw enough mud and hope some of it sticks.
  • Why doesn't it submit its case to the higher court of scientific test? I think we all know the answer.
Weiye Loh

Rationally Speaking: On banning the veil - 0 views

  • surely many women wear them just because they in general accept their social and religious customs. As stupidly degrading as such customs are, if the women are simply accepting their religious and social position, then it at least becomes on the scale of the high heels, thin jeans, woman-as-object problem that women face in this country, one that they cannot reject without being socially ostracized, at least to some degree, as well.
  •  
    On banning the veil (Thursday, July 22, 2010)
Weiye Loh

Balderdash: Anthony Grayling on Atheism - 0 views

  • if you think that the reasons you have for thinking that there are fairies are very poor reasons, that it's irrational to think that there are such things, then belief in supernatural agencies in general is irrational... [Agnostics] fall foul of this picture...
  • we're all familiar with Popper's dictum that if a theory, a claim explains everything, if everything is consistent with the truth of the claim, then it's empty. It doesn't explain anything at all. [On the claim that Science purports to explain everything, or that it claims that it will be able to eventually] I don't think Science does claim that at all, in fact. Science at its normal best: it is a public, a testable, a challengeable project. Always having to maintain its own respectability by saying what would count as counter-evidence against it. And when people put forward views in Science, they publish them so that other people can test them, review them, try to replicate results, and I think that is absolutely the model of how an epistemology should proceed. Out there in the open and inviting the very toughest kind of response from other people...
  • [On the claim that there is no morality without God] In classical antiquity, in the Classical Tradition, there are deep, rich, powerful thoughts about the nature of morality, the foundations of ethics. The nature of the good life, which make no appeal whatever to any divine command. Or any government via this sort of spirit monarch in disguise, who will reward you if you do what he or she requires, and punish you if you don't. All the very best and deepest thinking about ethics has come from non-religious traditions...
  •  
    Anthony Grayling on Atheism
Weiye Loh

Kevin Kelly and Steven Johnson on Where Ideas Come From | Magazine - 0 views

  • Say the word “inventor” and most people think of a solitary genius toiling in a basement. But two ambitious new books on the history of innovation—by Steven Johnson and Kevin Kelly, both longtime Wired contributors—argue that great discoveries typically spring not from individual minds but from the hive mind. In Where Good Ideas Come From: The Natural History of Innovation, Johnson draws on seven centuries of scientific and technological progress, from Gutenberg to GPS, to show what sorts of environments nurture ingenuity. He finds that great creative milieus, whether MIT or Los Alamos, New York City or the World Wide Web, are like coral reefs—teeming, diverse colonies of creators who interact with and influence one another.
  • Seven centuries are an eyeblink in the scope of Kelly’s book, What Technology Wants, which looks back over some 50,000 years of history and peers nearly that far into the future. His argument is similarly sweeping: Technology, Kelly believes, can be seen as a sort of autonomous life-form, with intrinsic goals toward which it gropes over the course of its long development. Those goals, he says, are much like the tendencies of biological life, which over time diversifies, specializes, and (eventually) becomes more sentient.
  • We share a fascination with the long history of simultaneous invention: cases where several people come up with the same idea at almost exactly the same time. Calculus, the electrical battery, the telephone, the steam engine, the radio—all these groundbreaking innovations were hit upon by multiple inventors working in parallel with no knowledge of one another.
  • It’s amazing that the myth of the lone genius has persisted for so long, since simultaneous invention has always been the norm, not the exception. Anthropologists have shown that the same inventions tended to crop up in prehistory at roughly similar times, in roughly the same order, among cultures on different continents that couldn’t possibly have contacted one another.
  • Also, there’s a related myth—that innovation comes primarily from the profit motive, from the competitive pressures of a market society. If you look at history, innovation doesn’t come just from giving people incentives; it comes from creating environments where their ideas can connect.
  • The musician Brian Eno invented a wonderful word to describe this phenomenon: scenius. We normally think of innovators as independent geniuses, but Eno’s point is that innovation comes from social scenes, from passionate and connected groups of people.
  • It turns out that the lone genius entrepreneur has always been a rarity—there’s far more innovation coming out of open, nonmarket networks than we tend to assume.
  • Really, we should think of ideas as connections, in our brains and among people. Ideas aren’t self-contained things; they’re more like ecologies and networks. They travel in clusters.
  • ideas are networks
  • In part, that’s because ideas that leap too far ahead are almost never implemented—they aren’t even valuable. People can absorb only one advance, one small hop, at a time. Gregor Mendel’s ideas about genetics, for example: He formulated them in 1865, but they were ignored for 35 years because they were too advanced. Nobody could incorporate them. Then, when the collective mind was ready and his idea was only one hop away, three different scientists independently rediscovered his work within roughly a year of one another.
  • Charles Babbage is another great case study. His “analytical engine,” which he started designing in the 1830s, was an incredibly detailed vision of what would become the modern computer, with a CPU, RAM, and so on. But it couldn’t possibly have been built at the time, and his ideas had to be rediscovered a hundred years later.
  • I think there are a lot of ideas today that are ahead of their time. Human cloning, autopilot cars, patent-free law—all are close technically but too many steps ahead culturally. Innovating is about more than just having the idea yourself; you also have to bring everyone else to where your idea is. And that becomes really difficult if you’re too many steps ahead.
  • The scientist Stuart Kauffman calls this the “adjacent possible.” At any given moment in evolution—of life, of natural systems, or of cultural systems—there’s a space of possibility that surrounds any current configuration of things. Change happens when you take that configuration and arrange it in a new way. But there are limits to how much you can change in a single move.
  • Which is why the great inventions are usually those that take the smallest possible step to unleash the most change. That was the difference between Tim Berners-Lee’s successful HTML code and Ted Nelson’s abortive Xanadu project. Both tried to jump into the same general space—a networked hypertext—but Tim’s approach did it with a dumb half-step, while Ted’s earlier, more elegant design required that everyone take five steps all at once.
  • Also, the steps have to be taken in the right order. You can’t invent the Internet and then the digital computer. This is true of life as well. The building blocks of DNA had to be in place before evolution could build more complex things. One of the key ideas I’ve gotten from you, by the way—when I read your book Out of Control in grad school—is this continuity between biological and technological systems.
  • technology is something that can give meaning to our lives, particularly in a secular world.
  • He had this bleak, soul-sucking vision of technology as an autonomous force for evil. You also present technology as a sort of autonomous force—as wanting something, over the long course of its evolution—but it’s a more balanced and ultimately positive vision, which I find much more appealing than the alternative.
  • As I started thinking about the history of technology, there did seem to be a sense in which, during any given period, lots of innovations were in the air, as it were. They came simultaneously. It appeared as if they wanted to happen. I should hasten to add that it’s not a conscious agency; it’s a lower form, something like the way an organism or bacterium can be said to have certain tendencies, certain trends, certain urges. But it’s an agency nevertheless.
  • technology wants increasing diversity—which is what I think also happens in biological systems, as the adjacent possible becomes larger with each innovation. As tech critics, I think we have to keep this in mind, because when you expand the diversity of a system, that leads to an increase in great things and an increase in crap.
  • the idea that the most creative environments allow for repeated failure.
  • And for wastes of time and resources. If you knew nothing about the Internet and were trying to figure it out from the data, you would reasonably conclude that it was designed for the transmission of spam and porn. And yet at the same time, there’s more amazing stuff available to us than ever before, thanks to the Internet.
  • To create something great, you need the means to make a lot of really bad crap. Another example is spectrum. One reason we have this great explosion of innovation in wireless right now is that the US deregulated spectrum. Before that, spectrum was something too precious to be wasted on silliness. But when you deregulate—and say, OK, now waste it—then you get Wi-Fi.
  • If we didn’t have genetic mutations, we wouldn’t have us. You need error to open the door to the adjacent possible.
  • image of the coral reef as a metaphor for where innovation comes from. So what, today, are some of the most reeflike places in the technological realm?
  • Twitter—not to see what people are having for breakfast, of course, but to see what people are talking about, the links to articles and posts that they’re passing along.
  • A second example of an information coral reef, and maybe the less predictable one, is the university system. As much as we sometimes roll our eyes at the ivory-tower isolation of universities, they continue to serve as remarkable engines of innovation.
  • Life seems to gravitate toward these complex states where there’s just enough disorder to create new things. There’s a rate of mutation just high enough to let interesting new innovations happen, but not so many mutations that every new generation dies off immediately.
  • Technology is an extension of life. Both life and technology are faces of the same larger system.
  •  
    Kevin Kelly and Steven Johnson on Where Ideas Come From, Wired, September 27, 2010 (October 2010 issue)
Weiye Loh

Rationally Speaking: Human, know thy place! - 0 views

  • I kicked off a recent episode of the Rationally Speaking podcast on the topic of transhumanism by defining it as “the idea that we should be pursuing science and technology to improve the human condition, modifying our bodies and our minds to make us smarter, healthier, happier, and potentially longer-lived.”
  • Massimo understandably expressed some skepticism about why there needs to be a transhumanist movement at all, given how incontestable their mission statement seems to be. As he rhetorically asked, “Is transhumanism more than just the idea that we should be using technologies to improve the human condition? Because that seems a pretty uncontroversial point.” Later in the episode, referring to things such as radical life extension and modifications of our minds and genomes, Massimo said, “I don't think these are things that one can necessarily have objections to in principle.”
  • There are a surprising number of people whose reaction, when they are presented with the possibility of making humanity much healthier, smarter and longer-lived, is not “That would be great,” nor “That would be great, but it's infeasible,” nor even “That would be great, but it's too risky.” Their reaction is, “That would be terrible.”
  • The people with this attitude aren't just fringe fundamentalists who are fearful of messing with God's Plan. Many of them are prestigious professors and authors whose arguments make no mention of religion. One of the most prominent examples is political theorist Francis Fukuyama, author of The End of History and the Last Man, who published a book in 2003 called “Our Posthuman Future: Consequences of the Biotechnology Revolution.” In it he argues that we will lose our “essential” humanity by enhancing ourselves, and that the result will be a loss of respect for “human dignity” and a collapse of morality.
  • Fukuyama's reasoning represents a prominent strain of thought about human enhancement, and one that I find doubly fallacious. (Fukuyama is aware of the following criticisms, but neither I nor other reviewers were impressed by his attempt to defend himself against them.) The idea that the status quo represents some “essential” quality of humanity collapses when you zoom out and look at the steady change in the human condition over previous millennia. Our ancestors were less knowledgeable, more tribalistic, less healthy, shorter-lived; would Fukuyama have argued for the preservation of all those qualities on the grounds that, in their respective time, they constituted an “essential human nature”? And even if there were such a thing as a persistent “human nature,” why is it necessarily worth preserving? In other words, I would argue that Fukuyama is committing both the fallacy of essentialism (there exists a distinct thing that is “human nature”) and the appeal to nature (the way things naturally are is how they ought to be).
  • Writer Bill McKibben, who was called “probably the nation's leading environmentalist” by the Boston Globe this year, and “the world's best green journalist” by Time magazine, published a book in 2003 called “Enough: Staying Human in an Engineered Age.” In it he writes, “That is the choice... one that no human should have to make... To be launched into a future without bounds, where meaning may evaporate.” McKibben concludes that it is likely that “meaning and pain, meaning and transience are inextricably intertwined.” Or as one blogger tartly paraphrased: “If we all live long healthy happy lives, Bill’s favorite poetry will become obsolete.”
  • President George W. Bush's Council on Bioethics, which advised him from 2001-2009, was steeped in it. Harvard professor of political philosophy Michael J. Sandel served on the Council from 2002-2005 and penned an article in the Atlantic Monthly called “The Case Against Perfection,” in which he objected to genetic engineering on the grounds that, basically, it’s uppity. He argues that genetic engineering is “the ultimate expression of our resolve to see ourselves astride the world, the masters of our nature.” Better we should be bowing in submission than standing in mastery, Sandel feels. Mastery “threatens to banish our appreciation of life as a gift,” he warns, and submitting to forces outside our control “restrains our tendency toward hubris.”
  • If you like Sandel's “It's uppity” argument against human enhancement, you'll love his fellow Councilmember Dr. William Hurlbut's argument against life extension: “It's unmanly.” Hurlbut's exact words, delivered in a 2007 debate with Aubrey de Grey: “I actually find a preoccupation with anti-aging technologies to be, I think, somewhat spiritually immature and unmanly... I’m inclined to think that there’s something profound about aging and death.”
  • And Council chairman Dr. Leon Kass, a professor of bioethics from the University of Chicago who served from 2001-2005, was arguably the worst of all. Like McKibben, Kass has frequently argued against radical life extension on the grounds that life's transience is central to its meaningfulness. “Could the beauty of flowers depend on the fact that they will soon wither?” he once asked. “How deeply could one deathless ‘human’ being love another?”
  • Kass has also argued against human enhancements on the same grounds as Fukuyama, that we shouldn't deviate from our proper nature as human beings. “To turn a man into a cockroach— as we don’t need Kafka to show us —would be dehumanizing. To try to turn a man into more than a man might be so as well,” he said. And Kass completes the anti-transhumanist triad (it robs life of meaning; it's dehumanizing; it's hubris) by echoing Sandel's call for humility and gratitude, urging, “We need a particular regard and respect for the special gift that is our own given nature.”
  • By now you may have noticed a familiar ring to a lot of this language. The idea that it's virtuous to suffer, and to humbly surrender control of your own fate, is a cornerstone of Christian morality.
  • it's fairly representative of standard Christian tropes: surrendering to God, submitting to God, trusting that God has good reasons for your suffering.
  • I suppose I can understand that if you believe in an all-powerful entity who will become irate if he thinks you are ungrateful for anything, then this kind of groveling might seem like a smart strategic move. But what I can't understand is adopting these same attitudes in the absence of any religious context. When secular people chastise each other for the “hubris” of trying to improve the “gift” of life they've received, I want to ask them: just who, exactly, are you groveling to? Who, exactly, are you afraid of affronting if you dare to reach for better things?
  • This is why transhumanism is most needed, from my perspective – to counter the astoundingly widespread attitude that suffering and 80-year-lifespans are good things that are worth preserving. That attitude may make sense conditional on certain peculiarly masochistic theologies, but the rest of us have no need to defer to it. It also may have been a comforting thing to tell ourselves back when we had no hope of remedying our situation, but that's not necessarily the case anymore.
  • I think there is a separation of Transhumanism and what Massimo is referring to. Things like robotic arms and the like come from trying to deal with a specific defect, and thus are separate from Transhumanism. I would define transhumanism the same way you would (the achievement of a better human), but I would exclude the inventions of many life-altering devices from transhumanism. If we could invent a device that just made you smarter, then indeed that would be transhumanism, but if we invented a device that could enable someone who was mentally challenged to be normal, I would define this as modern medicine. I just want to make sure we separate advances in modern medicine from transhumanism. Modern medicine being the one that advances to deal with specific medical issues to improve quality of life (usually to restore it to normal conditions) and transhumanism being the one that can advance every single human (perhaps equally?).
    • Weiye Loh
       
      Assumes that "normal conditions" exist. 
  • I agree with all your points about why the arguments against transhumanism and for suffering are ridiculous. That being said, when I first heard about the ideas of Transhumanism, after the initial excitement wore off (since I'm a big tech nerd), my reaction was more or less the same as Massimo's. I don't particularly see the need for a philosophical movement for this.
  • if people believe that suffering is something God ordained for us, you're not going to convince them otherwise with philosophical arguments any more than you'll convince them there's no God at all. If the technologies do develop, acceptance of them will come as their use becomes more prevalent, not with arguments.
  •  
    Human, know thy place!
Weiye Loh

Skepticblog » A Creationist Challenge - 0 views

  • The commenter starts with some ad hominems, asserting that my post is biased and emotional. They provide no evidence or argument to support this assertion. And of course they don’t even attempt to counter any of the arguments I laid out. They then follow up with an argument from authority – they can link to a PhD creationist – so there.
  • The article that the commenter links to is by Henry M. Morris, founder for the Institute for Creation Research (ICR) – a young-earth creationist organization. Morris was (he died in 2006 following a stroke) a PhD – in civil engineering. This point is irrelevant to his actual arguments. I bring it up only to put the commenter’s argument from authority into perspective. No disrespect to engineers – but they are not biologists. They have no expertise relevant to the question of evolution – no more than my MD. So let’s stick to the arguments themselves.
  • The article by Morris is an overview of so-called Creation Science, of which Morris was a major architect. The arguments he presents are all old creationist canards, long deconstructed by scientists. In fact I address many of them in my original refutation. Creationists generally are not very original – they recycle old arguments endlessly, regardless of how many times they have been destroyed.
  • Morris also makes heavy use of the “taking a quote out of context” strategy favored by creationists. His quotes are often from secondary sources and are incomplete.
  • A more scholarly (i.e. intellectually honest) approach would be to cite actual evidence to support a point. If you are going to cite an authority, then make sure the quote is relevant, in context, and complete.
  • And even better, cite a number of sources to show that the opinion is representative. Rather we get single, partial, and often outdated quotes without context.
  • (Nature is not, it turns out, cleanly divided into “kinds”, which have no operational definition.) He also repeats this canard: “Such variation is often called microevolution, and these minor horizontal (or downward) changes occur fairly often, but such changes are not true ‘vertical’ evolution.” This is the microevolution/macroevolution false dichotomy. It is only “often called” this by creationists – not by actual evolutionary scientists. There is no theoretical or empirical division between macro and micro evolution. There is just evolution, which can result in the full spectrum of change from minor tweaks to major changes.
  • Morris wonders why there are no “dats” – dog-cat transitional species. He misses the hierarchical nature of evolution. As evolution proceeds, and creatures develop a greater and greater evolutionary history behind them, they increasingly are committed to their body plan. This results in a nested hierarchy of groups – which is reflected in taxonomy (the naming scheme of living things).
  • once our distant ancestors developed the basic body plan of chordates, they were committed to that body plan. Subsequent evolution resulted in variations on that plan, each of which then developed further variations, etc. But evolution cannot go backward, undo evolutionary changes and then proceed down a different path. Once an evolutionary line has developed into a dog, evolution can produce variations on the dog, but it cannot go backwards and produce a cat.
  • Stephen J. Gould described this distinction as the difference between disparity and diversity. Disparity (the degree of morphological difference) actually decreases over evolutionary time, as lineages go extinct and the surviving lineages are committed to fewer and fewer basic body plans. Meanwhile, diversity (the number of variations on a body plan) within groups tends to increase over time.
  • The kind of evolutionary changes that were happening in the past, when species were relatively undifferentiated (compared to contemporary species), is indeed not happening today. Modern multi-cellular life has 600 million years of evolutionary history constraining its future evolution – which was not true of species at the base of the evolutionary tree. But modern species are indeed still evolving.
  • Here is a list of research documenting observed instances of speciation. The list is from 1995, and there are more recent examples to add to the list. Here are some more. And here is a good list with references of more recent cases.
  • Next Morris tries to convince the reader that there is no evidence for evolution in the past, focusing on the fossil record. He repeats the false claim (again, one I already dealt with) that there are no transitional fossils: “Even those who believe in rapid evolution recognize that a considerable number of generations would be required for one distinct ‘kind’ to evolve into another more complex kind. There ought, therefore, to be a considerable number of true transitional structures preserved in the fossils — after all, there are billions of non-transitional structures there! But (with the exception of a few very doubtful creatures such as the controversial feathered dinosaurs and the alleged walking whales), they are not there.”
  • I deal with this question at length here, pointing out that there are numerous transitional fossils for the evolution of terrestrial vertebrates, mammals, whales, birds, turtles, and yes – humans from ape ancestors. There are many more examples, these are just some of my favorites.
  • Much of what follows (as you can see, it takes far more space to correct the lies and distortions of Morris than it did to create them) is classic denialism – misinterpreting the state of the science, and confusing lack of information about the details of evolution with lack of confidence in the fact of evolution. Here are some examples – he quotes Niles Eldredge: “It is a simple ineluctable truth that virtually all members of a biota remain basically stable, with minor fluctuations, throughout their durations. . . .” So how do evolutionists arrive at their evolutionary trees from fossils of organisms which didn’t change during their durations? Beware the “. . . .” – that means that meaningful parts of the quote are being omitted. I happen to have the book (The Pattern of Evolution) from which Morris mined that particular quote. Here’s the rest of it: “(Remember, by ‘biota’ we mean the commonly preserved plants and animals of a particular geological interval, which occupy regions often as large as Roger Tory Peterson’s ‘eastern’ region of North American birds.) And when these systems change – when the older species disappear, and new ones take their place – the change happens relatively abruptly and in lockstep fashion.”
  • Eldredge was one of the authors (with Gould) of punctuated equilibrium theory. This states that, if you look at the fossil record, what we see are species emerging, persisting with little change for a while, and then disappearing from the fossil record. They theorize that most species most of the time are at equilibrium with their environment, and so do not change much. But these periods of equilibrium are punctuated by disequilibrium – periods of change when species will have to migrate, evolve, or go extinct.
  • This does not mean that speciation does not take place. And if you look at the fossil record we see a pattern of descendant species emerging from ancestor species over time – in a nice evolutionary pattern. Morris gives a complete misrepresentation of Eldredge’s point – once again we see intellectual dishonesty in his methods of an astounding degree.
  • Regarding the atheism = religion comment, it reminds me of a great analogy that I first heard on Twitter from Evil Eye (paraphrasing): “saying atheism is a religion is like saying ‘not collecting stamps’ is a hobby.”
  • Morris next tackles the genetic evidence, writing: “More often is the argument used that similar DNA structures in two different organisms proves common evolutionary ancestry. Neither argument is valid. There is no reason whatever why the Creator could not or would not use the same type of genetic code based on DNA for all His created life forms. This is evidence for intelligent design and creation, not evolution.”
  • Here is an excellent summary of the multiple lines of molecular evidence for evolution. Basically, if we look at the sequence of DNA, the variations in trinucleotide codes for amino acids, and amino acids for proteins, and transposons within DNA we see a pattern that can only be explained by evolution (or a mischievous god who chose, for some reason, to make life look exactly as if it had evolved – a non-falsifiable notion).
  • The genetic code is essentially comprised of four letters (ACGT for DNA), and every triplet of three letters equates to a specific amino acid. There are 64 (4^3) possible three letter combinations, and 20 amino acids. A few combinations are used for housekeeping, like a code to indicate where a gene stops, but the rest code for amino acids. There are more combinations than amino acids, so most amino acids are coded for by multiple combinations. This means that a mutation that results in a one-letter change might alter from one code for a particular amino acid to another code for the same amino acid. This is called a silent mutation because it does not result in any change in the resulting protein.
  • It also means that there are very many possible codes for any individual protein. The question is – which codes out of the gazillions of possible codes do we find for each type of protein in different species. If each “kind” were created separately there would not need to be any relationship. Each kind could have its own variation, or they could all be identical if they were essentially copied (plus any mutations accruing since creation, which would be minimal). But if life evolved then we would expect that the exact sequence of DNA code would be similar in related species, but progressively different (through silent mutations) over evolutionary time.
  • This is precisely what we find – in every protein we have examined. This pattern is necessary if evolution were true. It cannot be explained by random chance (the probability is absurdly tiny – essentially zero). And it makes no sense from a creationist perspective. This same pattern (a branching hierarchy) emerges when we look at amino acid substitutions in proteins and other aspects of the genetic code.
  • Morris goes for the second law of thermodynamics again – in the exact way that I already addressed. He responds to scientists correctly pointing out that the Earth is an open system, by writing: “This naive response to the entropy law is typical of evolutionary dissimulation. While it is true that local order can increase in an open system if certain conditions are met, the fact is that evolution does not meet those conditions. Simply saying that the earth is open to the energy from the sun says nothing about how that raw solar heat is converted into increased complexity in any system, open or closed. The fact is that the best known and most fundamental equation of thermodynamics says that the influx of heat into an open system will increase the entropy of that system, not decrease it. All known cases of decreased entropy (or increased organization) in open systems involve a guiding program of some sort and one or more energy conversion mechanisms.”
  • Energy has to be transformed into a usable form in order to do the work necessary to decrease entropy. That’s right. That work is done by life. Plants take solar energy (again – I’m not sure what “raw solar heat” means) and convert it into food. That food fuels the processes of life, which include development and reproduction. Evolution emerges from those processes- therefore the conditions that Morris speaks of are met.
  • But Morris next makes a very confused argument: Evolution has neither of these. Mutations are not “organizing” mechanisms, but disorganizing (in accord with the second law). They are commonly harmful, sometimes neutral, but never beneficial (at least as far as observed mutations are concerned). Natural selection cannot generate order, but can only “sieve out” the disorganizing mutations presented to it, thereby conserving the existing order, but never generating new order.
  • The notion that evolution (as if it’s a thing) needs to use energy is hopelessly confused. Evolution is a process that emerges from the system of life – and life certainly can use solar energy to decrease its entropy, and by extension the entropy of the biosphere. Morris slips into what is often presented as an information argument. (Yet again – already dealt with. The pattern here is that we are seeing a shuffling around of the same tired creationist arguments.) First, it is not true that most mutations are harmful. Many are silent, and many of those that are not silent are not harmful. They may be neutral, they may be a mixed blessing, and their relative benefit vs. harm is likely to be situational. They may be fatal. And they may also be simply beneficial.
  • Morris finishes with a long rambling argument that evolution is religion. Evolution is promoted by its practitioners as more than mere science. Evolution is promulgated as an ideology, a secular religion — a full-fledged alternative to Christianity, with meaning and morality . . . . Evolution is a religion. This was true of evolution in the beginning, and it is true of evolution still today. Morris ties evolution to atheism, which, he argues, makes it a religion. This assumes, of course, that atheism is a religion. That depends on how you define atheism and how you define religion – but it is mostly wrong. Atheism is a lack of belief in one particular supernatural claim – that does not qualify it as a religion.
  • But mutations are not “disorganizing” – that does not even make sense. It seems to be based on a purely creationist notion that species are in some privileged perfect state, and any mutation can only take them farther from that perfection. For those who actually understand biology, life is a kluge of compromises and variation. Mutations are mostly lateral moves from one chaotic state to another. They are not directional. But they do provide raw material, variation, for natural selection. Natural selection cannot generate variation, but it can select among that variation to produce differential survival. This is an old game played by creationists: point out that mutations alone are not selective, and that selection alone does not increase variation. Both claims are true but irrelevant, because the combination is creative – mutations increase variation and information, and selection results in the differential survival of better adapted variation.
  •  
    One of my earlier posts on SkepticBlog was Ten Major Flaws in Evolution: A Refutation, published two years ago. Occasionally a creationist shows up to snipe at the post, like this one: “i read this and found it funny. It supposedly gives a scientific refutation, but it is full of more bias than fox news, and a lot of emotion as well. here's a scientific case by an actual scientists, you know, one with a ph. D, and he uses statements by some of your favorite evolutionary scientists to insist evolution doesn't exist. i challenge you to write a refutation on this one. http://www.icr.org/home/resources/resources_tracts_scientificcaseagainstevolution/” Challenge accepted.
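The codon-redundancy argument in the highlights above can be sketched numerically. This is a minimal, illustrative Python sketch, not anything from the original post: the counts are the standard degeneracies of the nuclear genetic code, and the ten-residue peptide is an arbitrary example.

```python
# Degeneracy of the standard genetic code: number of synonymous
# codons per amino acid (one-letter codes, standard codon table).
SYNONYMS = {
    'A': 4, 'R': 6, 'N': 2, 'D': 2, 'C': 2, 'Q': 2, 'E': 2,
    'G': 4, 'H': 2, 'I': 3, 'L': 6, 'K': 2, 'M': 1, 'F': 2,
    'P': 4, 'S': 6, 'T': 4, 'W': 1, 'Y': 2, 'V': 4,
}

def possible_encodings(peptide):
    """Count the distinct DNA sequences that encode the same peptide."""
    n = 1
    for aa in peptide.upper():
        n *= SYNONYMS[aa]
    return n

# An arbitrary 10-residue peptide already admits tens of thousands
# of synonymous encodings.
print(possible_encodings("MVLSPADKTN"))
```

Each extra residue multiplies the count, so even short proteins admit astronomically many synonymous encodings – which is why finding the same branching pattern of codon choices across species points to common descent rather than chance.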
Weiye Loh

Rationally Speaking: Is modern moral philosophy still in thrall to religion? - 0 views

  • Recently I re-read Richard Taylor’s An Introduction to Virtue Ethics, a classic published by Prometheus Books.
  • Taylor compares virtue ethics to the other two major approaches to moral philosophy: utilitarianism (a la John Stuart Mill) and deontology (a la Immanuel Kant). Utilitarianism, of course, is roughly the idea that ethics has to do with maximizing pleasure and minimizing pain; deontology is the idea that reason can tell us what we ought to do from first principles, as in Kant’s categorical imperative (e.g., something is right if you can agree that it could be elevated to a universally acceptable maxim).
  • Taylor argues that utilitarianism and deontology — despite being wildly different in a variety of respects — share one common feature: both philosophies assume that there is such a thing as moral right and wrong, and a duty to do right and avoid wrong. But, he says, on the face of it this is nonsensical. Duty isn’t something one can have in the abstract, duty is toward a law or a lawgiver, which begs the question of what could arguably provide us with a universal moral law, or who the lawgiver could possibly be.
  • ...11 more annotations...
  • His answer is that both utilitarianism and deontology inherited the ideas of right, wrong and duty from Christianity, but endeavored to do without Christianity’s own answers to those questions: the law is given by God and the duty is toward Him. Taylor says that Mill, Kant and the like simply absorbed the Christian concept of morality while rejecting its logical foundation (such as it was). As a result, utilitarians and deontologists alike keep talking about the right thing to do, or the good as if those concepts still make sense once we move to a secular worldview. Utilitarians substituted pain and pleasure for wrong and right respectively, and Kant thought that pure reason can arrive at moral universals. But of course neither utilitarians nor deontologist ever give us a reason why it would be irrational to simply decline to pursue actions that increase global pleasure and diminish global pain, or why it would be irrational for someone not to find the categorical imperative particularly compelling.
  • The situation — again according to Taylor — is dramatically different for virtue ethics. Yes, there too we find concepts like right and wrong and duty. But, for the ancient Greeks they had completely different meanings, which made perfect sense then and now, if we are not misled by the use of those words in a different context. For the Greeks, an action was right if it was approved by one’s society, wrong if it wasn’t, and duty was to one’s polis. And they understood perfectly well that what was right (or wrong) in Athens may or may not be right (or wrong) in Sparta. And that an Athenian had a duty to Athens, but not to Sparta, and vice versa for a Spartan.
  • But wait a minute. Does that mean that Taylor is saying that virtue ethics was founded on moral relativism? That would be an extraordinary claim indeed, and he does not, in fact, make it. His point is a bit more subtle. He suggests that for the ancient Greeks ethics was not (principally) about right, wrong and duty. It was about happiness, understood in the broad sense of eudaimonia, the good or fulfilling life. Aristotle in particular wrote in his Ethics about both aspects: the practical ethics of one’s duty to one’s polis, and the universal (for human beings) concept of ethics as the pursuit of the good life. And make no mistake about it: for Aristotle the first aspect was relatively trivial and understood by everyone, it was the second one that represented the real challenge for the philosopher.
  • For instance, the Ethics is famous for Aristotle’s list of the virtues (see Table), and his idea that the right thing to do is to steer a middle course between extreme behaviors. But this part of his work, according to Taylor, refers only to the practical ways of being a good Athenian, not to the universal pursuit of eudaimonia.

    Vice of Deficiency    | Virtuous Mean    | Vice of Excess
    Cowardice             | Courage          | Rashness
    Insensibility         | Temperance       | Intemperance
    Illiberality          | Liberality       | Prodigality
    Pettiness             | Munificence      | Vulgarity
    Humble-mindedness     | High-mindedness  | Vaingloriness
    Want of Ambition      | Right Ambition   | Over-ambition
    Spiritlessness        | Good Temper      | Irascibility
    Surliness             | Friendly Civility| Obsequiousness
    Ironical Depreciation | Sincerity        | Boastfulness
    Boorishness           | Wittiness        | Buffoonery
  • How, then, is one to embark on the more difficult task of figuring out how to live a good life? For Aristotle eudaimonia meant the best kind of existence that a human being can achieve, which in turn means that we need to ask what it is that makes humans different from all other species, because it is the pursuit of excellence in that something that provides for a eudaimonic life.
  • Now, Plato - writing before Aristotle - ended up construing the good life somewhat narrowly and in a self-serving fashion. He reckoned that the thing that distinguishes humanity from the rest of the biological world is our ability to use reason, so that is what we should be pursuing as our highest goal in life. And of course nobody is better equipped than a philosopher for such an enterprise... Which reminds me of Bertrand Russell’s quip that “A process which led from the amoeba to man appeared to the philosophers to be obviously a progress, though whether the amoeba would agree with this opinion is not known.”
  • But Aristotle's conception of "reason" was significantly broader, and here is where Taylor’s own update of virtue ethics begins to shine, particularly in Chapter 16 of the book, aptly entitled “Happiness.” Taylor argues that the proper way to understand virtue ethics is as the quest for the use of intelligence in the broadest possible sense, in the sense of creativity applied to all walks of life. He says: “Creative intelligence is exhibited by a dancer, by athletes, by a chess player, and indeed in virtually any activity guided by intelligence [including — but certainly not limited to — philosophy].” He continues: “The exercise of skill in a profession, or in business, or even in such things as gardening and farming, or the rearing of a beautiful family, all such things are displays of creative intelligence.”
  • what we have now is a sharp distinction between utilitarianism and deontology on the one hand and virtue ethics on the other, where the first two are (mistakenly, in Taylor’s assessment) concerned with the impossible question of what is right or wrong, and what our duties are — questions inherited from religion but that in fact make no sense outside of a religious framework. Virtue ethics, instead, focuses on the two things that really matter and to which we can find answers: the practical pursuit of a life within our polis, and the lifelong quest of eudaimonia understood as the best exercise of our creative faculties
  • > So if one's profession is that of assassin or torturer would being the best that you can be still be your duty and eudaimonic? And what about those poor blighters who end up with an ugly family? < Aristotle's philosophy is very much concerned with virtue, and being an assassin or a torturer is not a virtue, so the concept of a eudaimonic life for those characters is oxymoronic. As for ending up in an "ugly" family, Aristotle did write that eudaimonia is in part the result of luck, because it is affected by circumstances.
  • > So to the title question of this post: "Is modern moral philosophy still in thrall to religion?" one should say: Yes, for some residual forms of philosophy and for some philosophers < That misses Taylor's contention – which I find intriguing, though I have to give it more thought – that *all* modern moral philosophy, except virtue ethics, is in thrall to religion, without realizing it.
  • “The exercise of skill in a profession, or in business, or even in such things as gardening and farming, or the rearing of a beautiful family, all such things are displays of creative intelligence.” So if one's profession is that of assassin or torturer would being the best that you can be still be your duty and eudaimonic? And what about those poor blighters who end up with an ugly family?
Weiye Loh

Does Anything Matter? by Peter Singer - Project Syndicate - 0 views

  • Although this view of ethics has often been challenged, many of the objections have come from religious thinkers who appealed to God’s commands. Such arguments have limited appeal in the largely secular world of Western philosophy. Other defenses of objective truth in ethics made no appeal to religion, but could make little headway against the prevailing philosophical mood.
  • Many people assume that rationality is always instrumental: reason can tell us only how to get what we want, but our basic wants and desires are beyond the scope of reasoning. Not so, Parfit argues. Just as we can grasp the truth that 1 + 1 = 2, so we can see that I have a reason to avoid suffering agony at some future time, regardless of whether I now care about, or have desires about, whether I will suffer agony at that time. We can also have reasons (though not always conclusive reasons) to prevent others from suffering agony. Such self-evident normative truths provide the basis for Parfit’s defense of objectivity in ethics.
  • One major argument against objectivism in ethics is that people disagree deeply about right and wrong, and this disagreement extends to philosophers who cannot be accused of being ignorant or confused. If great thinkers like Immanuel Kant and Jeremy Bentham disagree about what we ought to do, can there really be an objectively true answer to that question? Parfit’s response to this line of argument leads him to make a claim that is perhaps even bolder than his defense of objectivism in ethics. He considers three leading theories about what we ought to do – one deriving from Kant, one from the social-contract tradition of Hobbes, Locke, Rousseau, and the contemporary philosophers John Rawls and T.M. Scanlon, and one from Bentham’s utilitarianism – and argues that the Kantian and social-contract theories must be revised in order to be defensible.
  • ...3 more annotations...
  • he argues that these revised theories coincide with a particular form of consequentialism, which is a theory in the same broad family as utilitarianism. If Parfit is right, there is much less disagreement between apparently conflicting moral theories than we all thought. The defenders of each of these theories are, in Parfit’s vivid phrase, “climbing the same mountain on different sides.”
  • Parfit’s real interest is in combating subjectivism and nihilism. Unless he can show that objectivism is true, he believes, nothing matters.
  • When Parfit does come to the question of “what matters,” his answer might seem surprisingly obvious. He tells us, for example, that what matters most now is that “we rich people give up some of our luxuries, ceasing to overheat the Earth’s atmosphere, and taking care of this planet in other ways, so that it continues to support intelligent life.” Many of us had already reached that conclusion. What we gain from Parfit’s work is the possibility of defending these and other moral claims as objective truths.
  •  
    Can moral judgments be true or false? Or is ethics, at bottom, a purely subjective matter, for individuals to choose, or perhaps relative to the culture of the society in which one lives? We might have just found out the answer. Among philosophers, the view that moral judgments state objective truths has been out of fashion since the 1930s, when logical positivists asserted that, because there seems to be no way of verifying the truth of moral judgments, they cannot be anything other than expressions of our feelings or attitudes. So, for example, when we say, "You ought not to hit that child," all we are really doing is expressing our disapproval of your hitting the child, or encouraging you to stop hitting the child. There is no truth to the matter of whether or not it is wrong for you to hit the child.
Weiye Loh

Rationally Speaking: Ray Kurzweil and the Singularity: visionary genius or pseudoscient... - 0 views

  • I will focus on a single detailed essay he wrote entitled “Superintelligence and Singularity,” which was originally published as chapter 1 of his The Singularity is Near (Viking 2005), and has been reprinted in an otherwise insightful collection edited by Susan Schneider, Science Fiction and Philosophy.
  • Kurzweil begins by telling us that he gradually became aware of the coming Singularity, in a process that, somewhat peculiarly, he describes as a “progressive awakening” — a phrase with decidedly religious overtones. He defines the Singularity as “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.” Well, by that definition, we have been through several “singularities” already, as technology has often rapidly and irreversibly transformed our lives.
  • The major piece of evidence for Singularitarianism is what “I [Kurzweil] have called the law of accelerating returns (the inherent acceleration of the rate of evolution, with technological evolution as a continuation of biological evolution).”
  • ...9 more annotations...
  • the first obvious serious objection is that technological “evolution” is in no logical way a continuation of biological evolution — the word “evolution” here being applied with completely different meanings. And besides, there is no scientifically sensible way in which biological evolution has been accelerating over the several billion years of its operation on our planet. So much for scientific accuracy and logical consistency.
  • here is a bit that will give you an idea of why some people think of Singularitarianism as a secular religion: “The Singularity will allow us to transcend [the] limitations of our biological bodies and brains. We will gain power over our fates. Our mortality will be in our own hands. We will be able to live as long as we want.”
  • Fig. 2 of that essay shows a progression through (again, entirely arbitrary) six “epochs,” with the next one (#5) occurring when there will be a merger between technological and human intelligence (somehow, a good thing), and the last one (#6) labeled as nothing less than “the universe wakes up” — a nonsensical outcome further described as “patterns of matter and energy in the universe becom[ing] saturated with intelligence processes and knowledge.” This isn’t just science fiction, it is bad science fiction.
  • “a serious assessment of the history of technology reveals that technological change is exponential. Exponential growth is a feature of any evolutionary process.” First, it is highly questionable that one can even measure “technological change” on a coherent uniform scale. Yes, we can plot the rate of, say, increase in microprocessor speed, but that is but one aspect of “technological change.” As for the idea that any evolutionary process features exponential growth, I don’t know where Kurzweil got it, but it is simply wrong, for one thing because biological evolution does not have any such feature — as any student of Biology 101 ought to know.
  • Kurzweil’s ignorance of evolution is manifested again a bit later, when he claims — without argument, as usual — that “Evolution is a process of creating patterns of increasing order. ... It’s the evolution of patterns that constitutes the ultimate story of the world. ... Each stage or epoch uses the information-processing methods of the previous epoch to create the next.” I swear, I was fully expecting a scholarly reference to Deepak Chopra at the end of that sentence. Again, “evolution” is a highly heterogeneous term that picks out completely different concepts, such as cosmic “evolution” (actually just change over time), biological evolution (which does have to do with the creation of order, but not in Kurzweil’s blatantly teleological sense), and technological “evolution” (which is certainly yet another type of beast altogether, since it requires intelligent design). And what on earth does it mean that each epoch uses the “methods” of the previous one to “create” the next one?
  • As we have seen, the whole idea is that human beings will merge with machines during the ongoing process of ever accelerating evolution, an event that will eventually lead to the universe awakening to itself, or something like that. Now here is the crucial question: how come this has not happened already?
  • To appreciate the power of this argument you may want to refresh your memory about the Fermi Paradox, a serious (though in that case, not a knockdown) argument against the possibility of extraterrestrial intelligent life. The story goes that physicist Enrico Fermi (the inventor of the first nuclear reactor) was having lunch with some colleagues, back in 1950. His companions were waxing poetic about the possibility, indeed the high likelihood, that the galaxy is teeming with intelligent life forms. To which Fermi asked something along the lines of: “Well, where are they, then?”
  • The idea is that even under very pessimistic (i.e., very un-Kurzweil like) expectations about how quickly an intelligent civilization would spread across the galaxy (without even violating the speed of light limit!), and given the mind boggling length of time the galaxy has already existed, it becomes difficult (though, again, not impossible) to explain why we haven’t seen the darn aliens yet.
  • Now, translate that to Kurzweil’s much more optimistic predictions about the Singularity (which allegedly will occur around 2045, conveniently just a bit after Kurzweil’s expected demise, given that he is 63 at the time of this writing). Considering that there is no particular reason to think that planet earth, or the human species, has to be the one destined to trigger the big event, why is it that the universe hasn’t already “awakened” as a result of a Singularity occurring somewhere else at some other time?
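The back-of-envelope arithmetic behind Fermi's question can be made explicit. This is a minimal sketch, not from the essay itself – the galaxy diameter, galactic age, and the deliberately pessimistic expansion speed (0.1% of light speed) are all rough, illustrative assumptions.

```python
# Rough Fermi-paradox timescale: how long would a slowly expanding
# civilization need to cross the galaxy, compared to the galaxy's age?
# All figures below are order-of-magnitude assumptions.
GALAXY_DIAMETER_LY = 100_000      # Milky Way diameter in light-years
GALAXY_AGE_YR = 1.0e10            # roughly ten billion years
EXPANSION_SPEED_C = 0.001         # pessimistic: 0.1% of light speed

# At speed v (in units of c), crossing D light-years takes D / v years.
crossing_time_yr = GALAXY_DIAMETER_LY / EXPANSION_SPEED_C

print(f"crossing time: {crossing_time_yr:.1e} years")
print(f"fraction of galactic age: {crossing_time_yr / GALAXY_AGE_YR:.1%}")
```

Even at this crawl the crossing takes on the order of a hundred million years – about 1% of the galaxy's age – which is why the silence is hard to explain, and why Kurzweil's far more optimistic timescales make the "where are they?" question sharper still.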
Weiye Loh

Rationally Speaking: A pluralist approach to ethics - 0 views

  • The history of Western moral philosophy includes numerous attempts to ground ethics in one rational principle, standard, or rule. This narrative stretches back 2,500 years to the Greeks, who were interested mainly in virtue ethics and the moral character of the person. The modern era has seen two major additions. In 1785, Immanuel Kant introduced the categorical imperative: act only under the assumption that what you do could be made into a universal law. And in 1789, Jeremy Bentham proposed utilitarianism: work toward the greatest happiness of the greatest number of people (the “utility” principle).
  • Many people now think projects to build a reasonable and coherent moral system are doomed. Still, most secular and religious people reject the alternative of moral relativism, and have spent much ink criticizing it (among my favorite books on the topic is Moral Relativism by Stephen Lukes). The most recent and controversial work in this area comes from Sam Harris. In The Moral Landscape, Harris argues for a morality based on (a science of) well-being and flourishing, rather than religious dogma.
  • I am interested in another oft-heard criticism of Harris’ book, which is that words like “well-being” and “flourishing” are too general to form any relevant basis for morality. This criticism has some force to it, as these certainly are somewhat vague terms. But what if “well-being” and “flourishing” were to be used only as a starting point for a moral framework? These concepts would still put us on a better grounding than religious faith. But they cannot stand alone. Nor do they need to.
  • ...4 more annotations...
  • 1. The harm principle bases our ethical considerations on other beings’ capacity for higher-level subjective experience. Human beings (and some animals) have the potential — and desire — to experience deep pleasure and happiness while seeking to avoid pain and suffering. We have the obligation, then, to afford creatures with these capacities, desires and relations a certain level of respect. They also have other emotional and social interests: for instance, friends and families concerned with their health and enjoyment. These actors also deserve consideration.
  • 2. If we have a moral obligation to act a certain way toward someone, that should be reflected in law. Rights theory is the idea that there are certain rights worth granting to people with very few, if any, caveats. Many of these rights were spelled out in the founding documents of this country, the Declaration of Independence (which admittedly has no legal pull) and the Constitution (which does). They have been defended in a long history of U.S. Supreme Court rulings. They have also been expanded on in the U.N.’s 1948 Universal Declaration of Human Rights and in the founding documents of other countries around the world. To name a few, they include: freedom of belief, speech and expression, due process, equal treatment, health care, and education.
  • 3. While we ought to consider our broader moral efforts, and focus on our obligations to others, it is also important to place attention on our quality as moral agents. A vital part of fostering a respectable pluralist moral framework is to encourage virtues, and cultivate moral character. A short list of these virtues would include prudence, justice, wisdom, honesty, compassion, and courage. One should study these, and strive to put these into practice and work to be a better human being, as Aristotle advised us to do.
  • most people already are ethical pluralists. Life and society are complex to navigate, and one cannot rely on a single idea for guidance. It is probably accurate to say that people lean more toward one theory, rather than practice it to the exclusion of all others. Of course, this only describes the fact that people think about morality in a pluralistic way. But the outlined approach is supported by sound reasoning — that is, unless you are ready to entirely dismiss 2,500 years of Western moral philosophy.
  •  
    while each ethical system discussed so far has its shortcomings, put together they form a solid possibility. One system might not be able to do the job required, but we can assemble a mature moral outlook containing parts drawn from different systems put forth by philosophers over the centuries (plus some biology, but that's Massimo's area). The following is a rough sketch of what I think a decent pluralist approach to ethics might look like.