
New Media Ethics 2009 course: Group items tagged Philosophy


Rationally Speaking: Studying folk morality: philosophy, psychology, or what?

  • in the magazine article Joshua mentions several studies of “folk morality,” i.e. of how ordinary people think about moral problems. The results are fascinating. It turns out that people’s views are correlated with personality traits, with subjects who score high on “openness to experience” being reliably more relativist than objectivist about morality (I am not using the latter term in the infamous Randian meaning here, but as Knobe does, to indicate the idea that morality has objective bases).
  • Other studies show that people who are capable of considering multiple options in solving mathematical puzzles also tend to be moral relativists, and — in a study co-authored by Knobe himself — the very same situation (infanticide) was judged along a sliding scale from objectivism to relativism depending on whether the hypothetical scenario involved a fellow American (presumably sharing our same general moral values), a member of an imaginary Amazonian tribe (for which infanticide was acceptable), or an alien from the planet Pentar (belonging to a race whose only goal in life is to turn everything into equilateral pentagons, and for whom killing individuals that might get in the way of that lofty objective is a duty). Oh, and related research also shows that young children tend to be objectivists, while young adults are usually relativists — but that later in life one’s primordial objectivism apparently experiences a comeback.
  • This is all very interesting social science, but is it philosophy? Granted, the differences between various disciplines are often not clear cut, and of course whenever people engage in truly inter-disciplinary work we should simply applaud the effort and encourage further work. But I do wonder in what sense, if any, the kinds of results that Joshua and his colleagues find have much to do with moral philosophy.
  • there seems to me the potential danger of confusing various categories of moral discourse. For instance, are the “folks” studied in these cases actually relativist, or perhaps adherents to one of several versions of moral anti-realism? The two are definitely not the same, but I doubt that the subjects in question could tell the difference (and I wouldn’t expect them to, after all they are not philosophers).
  • why do we expect philosophers to learn from “folk morality” when we do not expect, say, physicists to learn from folk physics (which tends to be Aristotelian in nature), or statisticians from people’s understanding of probability theory (which is generally remarkably poor, as casino owners know very well)? Or even, while I’m at it, why not ask literary critics to discuss Shakespeare in light of what common folks think about the bard (making sure, perhaps, that they have at least read his works, and not just watched the movies)?
  • Hence, my other examples of stat (i.e., math) and literary criticism. I conceive of philosophy in general, and moral philosophy in particular, as more akin to a (science-informed, to be sure) mix between logic and criticism. Some moral philosophy consists in engaging an “if ... then” sort of scenario, akin to logical-mathematical thinking, where one begins with certain axioms and attempts to derive the consequences of such axioms. In other respects, moral philosophers exercise reflective criticism concerning those consequences as they might be relevant to practical problems.
  • For instance, we may write philosophically about abortion, and begin our discussion from a comparison of different conceptions of “person.” We might conclude that “if” one adopts conception X of what a person is, “then” abortion is justifiable under such and such conditions; while “if” one adopts conception Y of a person, “then” abortion is justifiable under a different set of conditions, or not justifiable at all. We could, of course, back up even further and engage in a discussion of what “personhood” is, thus moving from moral philosophy to metaphysics.
  • Nowhere in the above are we going to ask “folks” what they think a person is, or how they think their implicit conception of personhood informs their views on abortion. Of course people’s actual views on abortion are crucial — especially for public policy — and they are intrinsically interesting to social scientists. But they don’t seem to me to make much more contact with philosophy than the above mentioned popular opinions on Shakespeare make contact with serious literary criticism. And please, let’s not play the cheap card of “elitism,” unless we are willing to apply the label to just about any intellectual endeavor, in any discipline.
  • There is one area in which experimental philosophy can potentially contribute to philosophy proper (as opposed to social science). Once we have a more empirically grounded understanding of what people’s moral reasoning actually is, then we can analyze the likely consequences of that reasoning for a variety of societal issues. But now we would be doing something more akin to political than moral philosophy.
  • My colleague Joshua Knobe at Yale University recently published an intriguing article in The Philosophers' Magazine about the experimental philosophy of moral decision making. Joshua and I had a nice chat during a recent Rationally Speaking podcast dedicated to experimental philosophy, but I'm still not convinced about the whole enterprise.
Religion as a catalyst of rationalization « The Immanent Frame

  • For Habermas, religion has been a continuous concern precisely because it is related to both the emergence of reason and the development of a public space of reason-giving. Religious ideas, according to Habermas, are never mere irrational speculation. Rather, they possess a form, a grammar or syntax, that unleashes rational insights, even arguments; they contain, not just specific semantic contents about God, but also a particular structure that catalyzes rational argumentation.
  • in his earliest, anthropological-philosophical stage, Habermas approaches religion from a predominantly philosophical perspective. But as he undertakes the task of “transforming historical materialism” that will culminate in his magnum opus, The Theory of Communicative Action, there is a shift from philosophy to sociology and, more generally, social theory. With this shift, religion is treated, not as a germinal for philosophical concepts, but instead as the source of the social order.
  • What is noteworthy about this juncture in Habermas’s writings is that secularization is explained as “pressure for rationalization” from “above,” which meets the force of rationalization from below, from the realm of technical and practical action oriented to instrumentalization. Additionally, secularization here is not simply the process of the profanation of the world—that is, the withdrawal of religious perspectives as worldviews and the privatization of belief—but, perhaps most importantly, religion itself becomes the means for the translation and appropriation of the rational impetus released by its secularization.
  • religion becomes its own secular catalyst, or, rather, secularization itself is the result of religion. This approach will mature in the most elaborate formulation of what Habermas calls the “linguistification of the sacred,” in volume two of The Theory of Communicative Action. There, basing himself on Durkheim and Mead, Habermas shows how ritual practices and religious worldviews release rational imperatives through the establishment of a communicative grammar that conditions how believers can and should interact with each other, and how they relate to the idea of a supreme being. Habermas writes: “worldviews function as a kind of drive belt that transforms the basic religious consensus into the energy of social solidarity and passes it on to social institutions, thus giving them a moral authority. [. . .] Whereas ritual actions take place at a pregrammatical level, religious worldviews are connected with full-fledged communicative actions.”
  • The thrust of Habermas’s argumentation in this section of The Theory of Communicative Action is to show that religion is the source of the normative binding power of ethical and moral commandments. Yet there is an ambiguity here. While the contents of worldviews may be sublimated into the normative binding of social systems, it is not entirely clear that the structure, or the grammar, of religious worldviews is itself exhausted. Indeed, in “A Genealogical Analysis of the Cognitive Content of Morality,” Habermas resolves this ambiguity by claiming that the horizontal relationship among believers and the vertical relationship between each believer and God shape the structure of our moral relationship to our neighbour, but now under two corresponding aspects: that of solidarity and that of justice. Here, the grammar of one’s religious relationship to God and the corresponding community of believers are like the exoskeleton of a magnificent species, which, once the religious worldviews contained in them have desiccated under the impact of the forces of secularization, leave behind a casing to be used as a structuring shape for other contents.
  • Metaphysical thinking, which for Habermas has become untenable by the very logic of philosophical development, is characterized by three aspects: identity thinking, or the philosophy of origins that postulates the correspondence between being and thought; the doctrine of ideas, which becomes the foundation for idealism, which in turn postulates a tension between what is perceived and what can be conceptualized; and a concomitant strong concept of theory, where the bios theoretikos takes on a quasi-sacred character, and where philosophy becomes the path to salvation through dedication to a life of contemplation. By “postmetaphysical” Habermas means the new self-understanding of reason that we are able to obtain after the collapse of the Hegelian idealist system—the historicization of reason, or the de-substantivation that turns it into a procedural rationality, and, above all, its humbling. It is noteworthy that one of the main aspects of the new postmetaphysical constellation is that in the wake of the collapse of metaphysics, philosophy is forced to recognize that it must co-exist with religious practices and language: Philosophy, even in its postmetaphysical form, will be able neither to replace nor to repress religion as long as religious language is the bearer of semantic content that is inspiring and even indispensable, for this content eludes (for the time being?) the explanatory force of philosophical language and continues to resist translation into reasoning discourses.
  • metaphysical thinking either surrendered philosophy to religion or sought to eliminate religion altogether. In contrast, postmetaphysical thinking recognizes that philosophy can neither replace nor dismissively reject religion, for religion continues to articulate a language whose syntax and content elude philosophy, but from which philosophy continues to derive insights into the universal dimensions of human existence.
  • Habermas claims that even moral discourse cannot translate religious language without something being lost: “Secular languages which only eliminate the substance once intended leave irritations. When sin was converted to culpability, and the breaking of divine commands to an offence against human laws, something was lost.” Still, Habermas’s concern with religion is no longer solely philosophical, nor merely socio-theoretical, but has taken on political urgency. Indeed, he now asks whether modern rule of law and constitutional democracies can generate the motivational resources that nourish them and make them durable. In a series of essays, now gathered in Between Naturalism and Religion, as well as in his Europe: The Faltering Project, Habermas argues that as we have become members of a world society (Weltgesellschaft), we have also been forced to adopt a societal “post-secular self-consciousness.” By this term Habermas does not mean that secularization has come to an end, and even less that it has to be reversed. Instead, he now clarifies that secularization refers very specifically to the secularization of state power and to the general dissolution of metaphysical, overarching worldviews (among which religious views are to be counted). Additionally, as members of a world society that has, if not a fully operational, at least an incipient global public sphere, we have been forced to witness the endurance and vitality of religion. As members of this emergent global public sphere, we are also forced to recognize the plurality of forms of secularization. Secularization did not occur in one form, but in a variety of forms and according to different chronologies.
  • through a critical reading of Rawls, Habermas has begun to translate the postmetaphysical orientation of modern philosophy into a postsecular self-understanding of modern rule of law societies in such a way that religious citizens as well as secular citizens can co-exist, not just by force of a modus vivendi, but out of a sincere mutual respect. “Mutual recognition implies, among other things, that religious and secular citizens are willing to listen and to learn from each other in public debates. The political virtue of treating each other civilly is an expression of distinctive cognitive attitudes.” The cognitive attitudes Habermas is referring to here are the very cognitive competencies that are distinctive of modern, postconventional social agents. Habermas’s recent work on religion, then, is primarily concerned with rescuing for the modern liberal state those motivational and moral resources that it cannot generate or provide itself. At the same time, his recent work is concerned with foregrounding the kind of ethical and moral concerns, preoccupations, and values that can guide us between the Scylla of a society administered from above by the system imperatives of a global economy and political power and the Charybdis of a technological frenzy that places us on the slippery slope of a liberally sanctioned eugenics.
  • Religion in the public sphere: Religion as a catalyst of rationalization, posted by Eduardo Mendieta
Talking Philosophy | Ethicists, Courtesy & Morals

  • research raises questions about the extent to which studying ethics improves moral behavior. To the extent that practical effect is among one’s aims in studying (or as an administrator, in requiring) philosophy, I think there is reason for concern. I’m inclined to think that either philosophy should be justified differently, or we should work harder to try to figure out whether there is a *way* of studying philosophy that is more effective in changing moral behavior than the ordinary (21st century, Anglophone) way of studying philosophy is.
  • I think it’s fairly common that professionals in any field are skeptical about it. Professional politicians are much more skeptical or even cynical about politics than your average informed citizen. Most of the doctors whom I’ve talked to off the record are fairly skeptical about the merits of medical care. Those who specialize in giving investment “advice” will generally admit that they have no idea about the future of markets with the inevitable comment: “if I really knew how the market will react, I’d be on my yacht, not advising you”.
  • For all their pondering on matters moral, ethicists are no better mannered than other philosophers, and they behave no better morally than other philosophers or other academics either. Or such, at least, are the conclusions suggested by the research of philosophers Eric Schwitzgebel (at the University of California, Riverside) and Joshua Rust (of Stetson University, Florida). In 'Ethicists' Courtesy at Philosophy Conferences', recently published in Philosophical Psychology, Schwitzgebel & Rust report on a study that suggests that audiences in ethics sessions do not behave any better than those attending seminars on other areas of philosophy. Not when it comes to talking audibly whilst a speaker is addressing the room, and not when it comes to 'allowing the door to slam shut while entering or exiting mid-session'. And though, appropriately enough, "audiences in environmental ethics sessions … appear to leave behind less trash", generally speaking the ethicists are just as likely to leave a mess as the epistemologists and metaphysicians.
Rationally Speaking: Is modern moral philosophy still in thrall to religion?

  • Recently I re-read Richard Taylor’s An Introduction to Virtue Ethics, a classic published by Prometheus
  • Taylor compares virtue ethics to the other two major approaches to moral philosophy: utilitarianism (a la John Stuart Mill) and deontology (a la Immanuel Kant). Utilitarianism, of course, is roughly the idea that ethics has to do with maximizing pleasure and minimizing pain; deontology is the idea that reason can tell us what we ought to do from first principles, as in Kant’s categorical imperative (e.g., something is right if you can agree that it could be elevated to a universally acceptable maxim).
  • Taylor argues that utilitarianism and deontology — despite being wildly different in a variety of respects — share one common feature: both philosophies assume that there is such a thing as moral right and wrong, and a duty to do right and avoid wrong. But, he says, on the face of it this is nonsensical. Duty isn’t something one can have in the abstract, duty is toward a law or a lawgiver, which begs the question of what could arguably provide us with a universal moral law, or who the lawgiver could possibly be.
  • His answer is that both utilitarianism and deontology inherited the ideas of right, wrong and duty from Christianity, but endeavored to do without Christianity’s own answers to those questions: the law is given by God and the duty is toward Him. Taylor says that Mill, Kant and the like simply absorbed the Christian concept of morality while rejecting its logical foundation (such as it was). As a result, utilitarians and deontologists alike keep talking about the right thing to do, or the good, as if those concepts still make sense once we move to a secular worldview. Utilitarians substituted pain and pleasure for wrong and right respectively, and Kant thought that pure reason can arrive at moral universals. But of course neither utilitarians nor deontologists ever give us a reason why it would be irrational to simply decline to pursue actions that increase global pleasure and diminish global pain, or why it would be irrational for someone not to find the categorical imperative particularly compelling.
  • The situation — again according to Taylor — is dramatically different for virtue ethics. Yes, there too we find concepts like right and wrong and duty. But, for the ancient Greeks they had completely different meanings, which made perfect sense then and now, if we are not misled by the use of those words in a different context. For the Greeks, an action was right if it was approved by one’s society, wrong if it wasn’t, and duty was to one’s polis. And they understood perfectly well that what was right (or wrong) in Athens may or may not be right (or wrong) in Sparta. And that an Athenian had a duty to Athens, but not to Sparta, and vice versa for a Spartan.
  • But wait a minute. Does that mean that Taylor is saying that virtue ethics was founded on moral relativism? That would be an extraordinary claim indeed, and he does not, in fact, make it. His point is a bit more subtle. He suggests that for the ancient Greeks ethics was not (principally) about right, wrong and duty. It was about happiness, understood in the broad sense of eudaimonia, the good or fulfilling life. Aristotle in particular wrote in his Ethics about both aspects: the practical ethics of one’s duty to one’s polis, and the universal (for human beings) concept of ethics as the pursuit of the good life. And make no mistake about it: for Aristotle the first aspect was relatively trivial and understood by everyone, it was the second one that represented the real challenge for the philosopher.
  • For instance, the Ethics is famous for Aristotle’s list of the virtues (see table below), and his idea that the right thing to do is to steer a middle course between extreme behaviors. But this part of his work, according to Taylor, refers only to the practical ways of being a good Athenian, not to the universal pursuit of eudaimonia.

    Vice of Deficiency       Virtuous Mean        Vice of Excess
    Cowardice                Courage              Rashness
    Insensibility            Temperance           Intemperance
    Illiberality             Liberality           Prodigality
    Pettiness                Munificence          Vulgarity
    Humble-mindedness        High-mindedness      Vaingloriness
    Want of Ambition         Right Ambition       Over-ambition
    Spiritlessness           Good Temper          Irascibility
    Surliness                Friendly Civility    Obsequiousness
    Ironical Depreciation    Sincerity            Boastfulness
    Boorishness              Wittiness            Buffoonery
  • How, then, is one to embark on the more difficult task of figuring out how to live a good life? For Aristotle eudaimonia meant the best kind of existence that a human being can achieve, which in turn means that we need to ask what it is that makes humans different from all other species, because it is the pursuit of excellence in that something that provides for a eudaimonic life.
  • Now, Plato - writing before Aristotle - ended up construing the good life somewhat narrowly and in a self-serving fashion. He reckoned that the thing that distinguishes humanity from the rest of the biological world is our ability to use reason, so that is what we should be pursuing as our highest goal in life. And of course nobody is better equipped than a philosopher for such an enterprise... Which reminds me of Bertrand Russell’s quip that “A process which led from the amoeba to man appeared to the philosophers to be obviously a progress, though whether the amoeba would agree with this opinion is not known.”
  • But Aristotle's conception of "reason" was significantly broader, and here is where Taylor’s own update of virtue ethics begins to shine, particularly in Chapter 16 of the book, aptly entitled “Happiness.” Taylor argues that the proper way to understand virtue ethics is as the quest for the use of intelligence in the broadest possible sense, in the sense of creativity applied to all walks of life. He says: “Creative intelligence is exhibited by a dancer, by athletes, by a chess player, and indeed in virtually any activity guided by intelligence [including — but certainly not limited to — philosophy].” He continues: “The exercise of skill in a profession, or in business, or even in such things as gardening and farming, or the rearing of a beautiful family, all such things are displays of creative intelligence.”
  • what we have now is a sharp distinction between utilitarianism and deontology on the one hand and virtue ethics on the other, where the first two are (mistakenly, in Taylor’s assessment) concerned with the impossible question of what is right or wrong, and what our duties are — questions inherited from religion but that in fact make no sense outside of a religious framework. Virtue ethics, instead, focuses on the two things that really matter and to which we can find answers: the practical pursuit of a life within our polis, and the lifelong quest of eudaimonia understood as the best exercise of our creative faculties
  • “So if one's profession is that of assassin or torturer would being the best that you can be still be your duty and eudaimonic? And what about those poor blighters who end up with an ugly family?” Aristotle's philosophy is very much concerned with virtue, and being an assassin or a torturer is not a virtue, so the concept of a eudaimonic life for those characters is oxymoronic. As for ending up in an “ugly” family, Aristotle did write that eudaimonia is in part the result of luck, because it is affected by circumstances.
  • “So to the title question of this post: 'Is modern moral philosophy still in thrall to religion?' one should say: Yes, for some residual forms of philosophy and for some philosophers.” That misses Taylor's contention - which I find intriguing, though I have to give it more thought - that *all* modern moral philosophy, except virtue ethics, is in thrall to religion, without realizing it.
  • “The exercise of skill in a profession, or in business, or even in such things as gardening and farming, or the rearing of a beautiful family, all such things are displays of creative intelligence.” So if one's profession is that of assassin or torturer would being the best that you can be still be your duty and eudaimonic? And what about those poor blighters who end up with an ugly family?
Understanding the universe: Order of creation | The Economist

  • In their “The Grand Design”, the authors discuss “M-theory”, a composite of various versions of cosmological “string” theory that was developed in the mid-1990s, and announce that, if it is confirmed by observation, “we will have found the grand design.” Yet this is another tease. Despite much talk of the universe appearing to be “fine-tuned” for human existence, the authors do not in fact think that it was in any sense designed. And once more we are told that we are on the brink of understanding everything.
  • The authors rather fancy themselves as philosophers, though they would presumably balk at the description, since they confidently assert on their first page that “philosophy is dead.” It is, allegedly, now the exclusive right of scientists to answer the three fundamental why-questions with which the authors purport to deal in their book. Why is there something rather than nothing? Why do we exist? And why this particular set of laws and not some other?
  • It is hard to evaluate their case against recent philosophy, because the only subsequent mention of it, after the announcement of its death, is, rather oddly, an approving reference to a philosopher’s analysis of the concept of a law of nature, which, they say, “is a more subtle question than one may at first think.” There are actually rather a lot of questions that are more subtle than the authors think. It soon becomes evident that Professor Hawking and Mr Mlodinow regard a philosophical problem as something you knock off over a quick cup of tea after you have run out of Sudoku puzzles.
  • The main novelty in “The Grand Design” is the authors’ application of a way of interpreting quantum mechanics, derived from the ideas of the late Richard Feynman, to the universe as a whole. According to this way of thinking, “the universe does not have just a single existence or history, but rather every possible version of the universe exists simultaneously.” The authors also assert that the world’s past did not unfold of its own accord, but that “we create history by our observation, rather than history creating us.” They say that these surprising ideas have passed every experimental test to which they have been put, but that is misleading in a way that is unfortunately typical of the authors. It is the bare bones of quantum mechanics that have proved to be consistent with what is presently known of the subatomic world. The authors’ interpretations and extrapolations of it have not been subjected to any decisive tests, and it is not clear that they ever could be.
  • Once upon a time it was the province of philosophy to propose ambitious and outlandish theories in advance of any concrete evidence for them. Perhaps science, as Professor Hawking and Mr Mlodinow practice it in their airier moments, has indeed changed places with philosophy, though probably not quite in the way that they think.
  • Order of creation: Even Stephen Hawking doesn't quite manage to explain why we are here
MacIntyre on money « Prospect Magazine

  • MacIntyre has often given the impression of a robe-ripping Savonarola. He has lambasted the heirs to the principal western ethical schools: John Locke’s social contract, Immanuel Kant’s categorical imperative, Jeremy Bentham’s utilitarian “the greatest happiness for the greatest number.” Yet his is not a lone voice in the wilderness. He can claim connections with a trio of 20th-century intellectual heavyweights: the late Elizabeth Anscombe, her surviving husband, Peter Geach, and the Canadian philosopher Charles Taylor, winner in 2007 of the Templeton prize. What all four have in common is their Catholic faith, enthusiasm for Aristotle’s telos (life goals), and promotion of Thomism, the philosophy of St Thomas Aquinas who married Christianity and Aristotle. Leo XIII (pope from 1878 to 1903), who revived Thomism while condemning communism and unfettered capitalism, is also an influence.
  • MacIntyre’s key moral and political idea is that to be human is to be an Aristotelian goal-driven, social animal. Being good, according to Aristotle, consists in a creature (whether plant, animal, or human) acting according to its nature—its telos, or purpose. The telos for human beings is to generate a communal life with others; and the good society is composed of many independent, self-reliant groups.
  • MacIntyre differs from all these influences and alliances, from Leo XIII onwards, in his residual respect for Marx’s critique of capitalism.
  • MacIntyre begins his Cambridge talk by asserting that the 2008 economic crisis was not due to a failure of business ethics.
  • he has argued that moral behaviour begins with the good practice of a profession, trade, or art: playing the violin, cutting hair, brick-laying, teaching philosophy.
  • In other words, the virtues necessary for human flourishing are not a result of the top-down application of abstract ethical principles, but the development of good character in everyday life.
  • After Virtue, which is in essence an attack on the failings of the Enlightenment, has in its sights a catalogue of modern assumptions of beneficence: liberalism, humanism, individualism, capitalism. MacIntyre yearns for a single, shared view of the good life as opposed to modern pluralism’s assumption that there can be many competing views of how to live well.
  • In philosophy he attacks consequentialism, the view that what matters about an action is its consequences, which is usually coupled with utilitarianism’s “greatest happiness” principle. He also rejects Kantianism—the identification of universal ethical maxims based on reason and applied to circumstances top down. MacIntyre’s critique routinely cites the contradictory moral principles adopted by the allies in the second world war. Britain invoked a Kantian reason for declaring war on Germany: that Hitler could not be allowed to invade his neighbours. But the bombing of Dresden (which for a Kantian involved the treatment of people as a means to an end, something that should never be countenanced) was justified under consequentialist or utilitarian arguments: to bring the war to a swift end.
  • MacIntyre seeks to oppose utilitarianism on the grounds that people are called on by their very nature to be good, not merely to perform acts that can be interpreted as good. The most damaging consequence of the Enlightenment, for MacIntyre, is the decline of the idea of a tradition within which an individual’s desires are disciplined by virtue. And that means being guided by internal rather than external “goods.” So the point of being a good footballer is the internal good of playing beautifully and scoring lots of goals, not the external good of earning a lot of money. The trend away from an Aristotelian perspective has been inexorable: from the empiricism of David Hume, to Darwin’s account of nature driven forward without a purpose, to the sterile analytical philosophy of AJ Ayer and the “demolition of metaphysics” in his 1936 book Language, Truth and Logic.
  • The influential moral philosopher Alasdair MacIntyre has long stood outside the mainstream. Has the financial crisis finally vindicated his critique of global capitalism?
Science Warriors' Ego Trips - The Chronicle Review - The Chronicle of Higher Education

  • By Carlin Romano. Standing up for science excites some intellectuals the way beautiful actresses arouse Warren Beatty, or career liberals boil the blood of Glenn Beck and Rush Limbaugh. It's visceral.
  • A brave champion of beleaguered science in the modern age of pseudoscience, this Ayn Rand protagonist sarcastically derides the benighted irrationalists and glows with a self-anointed superiority. Who wouldn't want to feel that sense of power and rightness?
  • You hear the voice regularly—along with far more sensible stuff—in the latest of a now common genre of science patriotism, Nonsense on Stilts: How to Tell Science From Bunk (University of Chicago Press), by Massimo Pigliucci, a philosophy professor at the City University of New York.
  • it mixes eminent common sense and frequent good reporting with a cocksure hubris utterly inappropriate to the practice it apotheosizes.
  • According to Pigliucci, both Freudian psychoanalysis and Marxist theory of history "are too broad, too flexible with regard to observations, to actually tell us anything interesting." (That's right—not one "interesting" thing.) The idea of intelligent design in biology "has made no progress since its last serious articulation by natural theologian William Paley in 1802," and the empirical evidence for evolution is like that for "an open-and-shut murder case."
  • Pigliucci offers more hero sandwiches spiced with derision and certainty. Media coverage of science is "characterized by allegedly serious journalists who behave like comedians." Commenting on the highly publicized Dover, Pa., court case in which U.S. District Judge John E. Jones III ruled that intelligent-design theory is not science, Pigliucci labels the need for that judgment a "bizarre" consequence of the local school board's "inane" resolution. Noting the complaint of intelligent-design advocate William Buckingham that an approved science textbook didn't give creationism a fair shake, Pigliucci writes, "This is like complaining that a textbook in astronomy is too focused on the Copernican theory of the structure of the solar system and unfairly neglects the possibility that the Flying Spaghetti Monster is really pulling each planet's strings, unseen by the deluded scientists."
  • Or is it possible that the alternate view unfairly neglected could be more like that of Harvard scientist Owen Gingerich, who contends in God's Universe (Harvard University Press, 2006) that it is partly statistical arguments—the extraordinary unlikelihood eons ago of the physical conditions necessary for self-conscious life—that support his belief in a universe "congenially designed for the existence of intelligent, self-reflective life"?
  • Even if we agree that capital "I" and "D" intelligent-design of the scriptural sort—what Gingerich himself calls "primitive scriptural literalism"—is not scientifically credible, does that make Gingerich's assertion, "I believe in intelligent design, lowercase i and lowercase d," equivalent to Flying-Spaghetti-Monsterism? Tone matters. And sarcasm is not science.
  • The problem with polemicists like Pigliucci is that a chasm has opened up between two groups that might loosely be distinguished as "philosophers of science" and "science warriors."
  • Philosophers of science, often operating under the aegis of Thomas Kuhn, recognize that science is a diverse, social enterprise that has changed over time, developed different methodologies in different subsciences, and often advanced by taking putative pseudoscience seriously, as in debunking cold fusion
  • The science warriors, by contrast, often write as if our science of the moment is isomorphic with knowledge of an objective world-in-itself—Kant be damned!—and any form of inquiry that doesn't fit the writer's criteria of proper science must be banished as "bunk." Pigliucci, typically, hasn't much sympathy for radical philosophies of science. He calls the work of Paul Feyerabend "lunacy," deems Bruno Latour "a fool," and observes that "the great pronouncements of feminist science have fallen as flat as the similarly empty utterances of supporters of intelligent design."
  • It doesn't have to be this way. The noble enterprise of submitting nonscientific knowledge claims to critical scrutiny—an activity continuous with both philosophy and science—took off in an admirable way in the late 20th century when Paul Kurtz, of the University at Buffalo, established the Committee for the Scientific Investigation of Claims of the Paranormal (Csicop) in May 1976. Csicop soon after launched the marvelous journal Skeptical Inquirer
  • Although Pigliucci himself publishes in Skeptical Inquirer, his contributions there exhibit his signature smugness. For an antidote to Pigliucci's overweening scientism 'tude, it's refreshing to consult Kurtz's curtain-raising essay, "Science and the Public," in Science Under Siege (Prometheus Books, 2009, edited by Frazier)
  • Kurtz's commandment might be stated, "Don't mock or ridicule—investigate and explain." He writes: "We attempted to make it clear that we were interested in fair and impartial inquiry, that we were not dogmatic or closed-minded, and that skepticism did not imply a priori rejection of any reasonable claim. Indeed, I insisted that our skepticism was not totalistic or nihilistic about paranormal claims."
  • Kurtz combines the ethos of both critical investigator and philosopher of science. Describing modern science as a practice in which "hypotheses and theories are based upon rigorous methods of empirical investigation, experimental confirmation, and replication," he notes: "One must be prepared to overthrow an entire theoretical framework—and this has happened often in the history of science ... skeptical doubt is an integral part of the method of science, and scientists should be prepared to question received scientific doctrines and reject them in the light of new evidence."
  • Pigliucci, alas, allows his animus against the nonscientific to pull him away from sensitive distinctions among various sciences to sloppy arguments one didn't see in such earlier works of science patriotism as Carl Sagan's The Demon-Haunted World: Science as a Candle in the Dark (Random House, 1995). Indeed, he probably sets a world record for misuse of the word "fallacy."
  • To his credit, Pigliucci at times acknowledges the nondogmatic spine of science. He concedes that "science is characterized by a fuzzy borderline with other types of inquiry that may or may not one day become sciences." Science, he admits, "actually refers to a rather heterogeneous family of activities, not to a single and universal method." He rightly warns that some pseudoscience—for example, denial of HIV-AIDS causation—is dangerous and terrible.
  • But at other points, Pigliucci ferociously attacks opponents like the most unreflective science fanatic
  • He dismisses Feyerabend's view that "science is a religion" as simply "preposterous," even though he elsewhere admits that "methodological naturalism"—the commitment of all scientists to reject "supernatural" explanations—is itself not an empirically verifiable principle or fact, but rather an almost Kantian precondition of scientific knowledge. An article of faith, some cold-eyed Feyerabend fans might say.
  • He writes, "ID is not a scientific theory at all because there is no empirical observation that can possibly contradict it. Anything we observe in nature could, in principle, be attributed to an unspecified intelligent designer who works in mysterious ways." But earlier in the book, he correctly argues against Karl Popper that susceptibility to falsification cannot be the sole criterion of science, because science also confirms. It is, in principle, possible that an empirical observation could confirm intelligent design—i.e., that magic moment when the ultimate UFO lands with representatives of the intergalactic society that planted early life here, and we accept their evidence that they did it.
  • "As long as we do not venture to make hypotheses about who the designer is and why and how she operates," he writes, "there are no empirical constraints on the 'theory' at all. Anything goes, and therefore nothing holds, because a theory that 'explains' everything really explains nothing."
  • Here, Pigliucci again mixes up what's likely or provable with what's logically possible or rational. The creation stories of traditional religions and scriptures do, in effect, offer hypotheses, or claims, about who the designer is—e.g., see the Bible.
  • Far from explaining nothing because it explains everything, such an explanation explains a lot by explaining everything. It just doesn't explain it convincingly to a scientist with other evidentiary standards.
  • A sensible person can side with scientists on what's true, but not with Pigliucci on what's rational and possible. Pigliucci occasionally recognizes that. Late in his book, he concedes that "nonscientific claims may be true and still not qualify as science." But if that's so, and we care about truth, why exalt science to the degree he does? If there's really a heaven, and science can't (yet?) detect it, so much the worse for science.
  • Pigliucci quotes a line from Aristotle: "It is the mark of an educated mind to be able to entertain a thought without accepting it." Science warriors such as Pigliucci, or Michael Ruse in his recent clash with other philosophers in these pages, should reflect on a related modern sense of "entertain." One does not entertain a guest by mocking, deriding, and abusing the guest. Similarly, one does not entertain a thought or approach to knowledge by ridiculing it.
  • Long live Skeptical Inquirer! But can we deep-six the egomania and unearned arrogance of the science patriots? As Descartes, that immortal hero of scientists and skeptics everywhere, pointed out, true skepticism, like true charity, begins at home.
  • Carlin Romano, critic at large for The Chronicle Review, teaches philosophy and media theory at the University of Pennsylvania.
  • April 25, 2010: Science Warriors' Ego Trips
Rationally Speaking: Are Intuitions Good Evidence?

  • Is it legitimate to cite one’s intuitions as evidence in a philosophical argument?
  • appeals to intuitions are ubiquitous in philosophy. What are intuitions? Well, that’s part of the controversy, but most philosophers view them as intellectual “seemings.” George Bealer, perhaps the most prominent defender of intuitions-as-evidence, writes, “For you to have an intuition that A is just for it to seem to you that A… Of course, this kind of seeming is intellectual, not sensory or introspective (or imaginative).”2 Other philosophers have characterized them as “noninferential belief due neither to perception nor introspection”3 or alternatively as “applications of our ordinary capacities for judgment.”4
  • Philosophers may not agree on what, exactly, intuition is, but that doesn’t stop them from using it. “Intuitions often play the role that observation does in science – they are data that must be explained, confirmers or the falsifiers of theories,” Brian Talbot says.5 Typically, the way this works is that a philosopher challenges a theory by applying it to a real or hypothetical case and showing that it yields a result which offends his intuitions (and, he presumes, his readers’ as well).
  • For example, John Searle famously appealed to intuition to challenge the notion that a computer could ever understand language: “Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output)… If the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.” Should we take Searle’s intuition that such a system would not constitute “understanding” as good evidence that it would not? Many critics of the Chinese Room argument argue that there is no reason to expect our intuitions about intelligence and understanding to be reliable.
  • Ethics leans especially heavily on appeals to intuition, with a whole school of ethicists (“intuitionists”) maintaining that a person can see the truth of general ethical principles not through reason, but because he “just sees without argument that they are and must be true.”6
  • Intuitions are also called upon to rebut ethical theories such as utilitarianism: maximizing overall utility would require you to kill one innocent person if, in so doing, you could harvest her organs and save five people in need of transplants. Such a conclusion is taken as a reductio ad absurdum, requiring utilitarianism to be either abandoned or radically revised – not because the conclusion is logically wrong, but because it strikes nearly everyone as intuitively wrong.
  • British philosopher G.E. Moore used intuition to argue that the existence of beauty is good irrespective of whether anyone ever gets to see and enjoy that beauty. Imagine two planets, he said, one full of stunning natural wonders – trees, sunsets, rivers, and so on – and the other full of filth. Now suppose that nobody will ever have the opportunity to glimpse either of those two worlds. Moore concluded, “Well, even so, supposing them quite apart from any possible contemplation by human beings; still, is it irrational to hold that it is better that the beautiful world should exist than the one which is ugly? Would it not be well, in any case, to do what we could to produce it rather than the other? Certainly I cannot help thinking that it would."7
  • Although similar appeals to intuition can be found throughout all the philosophical subfields, their validity as evidence has come under increasing scrutiny over the last two decades, from philosophers such as Hilary Kornblith, Robert Cummins, Stephen Stich, Jonathan Weinberg, and Jaakko Hintikka (links go to representative papers from each philosopher on this issue). The severity of their criticisms varies from Weinberg’s warning that “We simply do not know enough about how intuitions work,” to Cummins’ wholesale rejection of philosophical intuition as “epistemologically useless.”
  • One central concern for the critics is that a single question can inspire totally different, and mutually contradictory, intuitions in different people.
  • For example, I disagree with Moore’s intuition that it would be better for a beautiful planet to exist than an ugly one even if there were no one around to see it. I can’t understand what the words “better” and “worse,” let alone “beautiful” and “ugly,” could possibly mean outside the domain of the experiences of conscious beings
  • If we want to take philosophers’ intuitions as reason to believe a proposition, then the existence of opposing intuitions leaves us in the uncomfortable position of having reason to believe both a proposition and its opposite.
  • “I suspect there is overall less agreement than standard philosophical practice presupposes, because having the ‘right’ intuitions is the entry ticket to various subareas of philosophy,” Weinberg says.
  • But the problem that intuitions are often not universally shared is overshadowed by another problem: even if an intuition is universally shared, that doesn’t mean it’s accurate. For in fact there are many universal intuitions that are demonstrably false.
  • People who have not been taught otherwise typically assume that an object dropped out of a moving plane will fall straight down to earth, at exactly the same latitude and longitude from which it was dropped. What will actually happen is that, because the object begins its fall with the same forward momentum it had while it was on the plane, it will continue to travel forward, tracing out a curve as it falls and not a straight line. “Considering the inadequacies of ordinary physical intuitions, it is natural to wonder whether ordinary moral intuitions might be similarly inadequate,” Princeton’s Gilbert Harman has argued,9 and the same could be said for our intuitions about consciousness, metaphysics, and so on.
  • We can’t usually “check” the truth of our philosophical intuitions externally, with an experiment or a proof, the way we can in physics or math. But it’s not clear why we should expect intuitions to be true. If we have an innate tendency towards certain intuitive beliefs, it’s likely because they were useful to our ancestors.
  • But there’s no reason to expect that the intuitions which were true in the world of our ancestors would also be true in other, unfamiliar contexts
  • And for some useful intuitions, such as moral ones, “truth” may have been beside the point. It’s not hard to see how moral intuitions in favor of fairness and generosity would have been crucial to the survival of our ancestors’ tribes, as would the intuition to condemn tribe members who betrayed those reciprocal norms. If we can account for the presence of these moral intuitions by the fact that they were useful, is there any reason left to hypothesize that they are also “true”? The same question could be asked of the moral intuitions which Jonathan Haidt has classified as “purity-based” – an aversion to incest, for example, would clearly have been beneficial to our ancestors. Since that fact alone suffices to explain the (widespread) presence of the “incest is morally wrong” intuition, why should we take that intuition as evidence that “incest is morally wrong” is true?
  • The still-young debate over intuition will likely continue to rage, especially since it’s intertwined with a rapidly growing body of cognitive and social psychological research examining where our intuitions come from and how they vary across time and place.
  • its resolution bears on the work of literally every field of analytic philosophy, except perhaps logic. Can analytic philosophy survive without intuition? (If so, what would it look like?) And can the debate over the legitimacy of appeals to intuition be resolved with an appeal to intuition?
The Death of Postmodernism And Beyond | Philosophy Now

  • Most of the undergraduates who will take ‘Postmodern Fictions’ this year will have been born in 1985 or after, and all but one of the module’s primary texts were written before their lifetime. Far from being ‘contemporary’, these texts were published in another world, before the students were born: The French Lieutenant’s Woman, Nights at the Circus, If on a Winter’s Night a Traveller, Do Androids Dream of Electric Sheep? (and Blade Runner), White Noise: this is Mum and Dad’s culture. Some of the texts (‘The Library of Babel’) were written even before their parents were born. Replace this cache with other postmodern stalwarts – Beloved, Flaubert’s Parrot, Waterland, The Crying of Lot 49, Pale Fire, Slaughterhouse 5, Lanark, Neuromancer, anything by B.S. Johnson – and the same applies. It’s all about as contemporary as The Smiths, as hip as shoulder pads, as happening as Betamax video recorders. These are texts which are just coming to grips with the existence of rock music and television; they mostly do not dream even of the possibility of the technology and communications media – mobile phones, email, the internet, computers in every house powerful enough to put a man on the moon – which today’s undergraduates take for granted.
  • somewhere in the late 1990s or early 2000s, the emergence of new technologies re-structured, violently and forever, the nature of the author, the reader and the text, and the relationships between them.
  • Postmodernism, like modernism and romanticism before it, fetishised [ie placed supreme importance on] the author, even when the author chose to indict or pretended to abolish him or herself. But the culture we have now fetishises the recipient of the text to the degree that they become a partial or whole author of it. Optimists may see this as the democratisation of culture; pessimists will point to the excruciating banality and vacuity of the cultural products thereby generated (at least so far).
  • Pseudo-modernism also encompasses contemporary news programmes, whose content increasingly consists of emails or text messages sent in commenting on the news items. The terminology of ‘interactivity’ is equally inappropriate here, since there is no exchange: instead, the viewer or listener enters – writes a segment of the programme – then departs, returning to a passive role. Pseudo-modernism also includes computer games, which similarly place the individual in a context where they invent the cultural content, within pre-delineated limits. The content of each individual act of playing the game varies according to the particular player.
  • The pseudo-modern cultural phenomenon par excellence is the internet. Its central act is that of the individual clicking on his/her mouse to move through pages in a way which cannot be duplicated, inventing a pathway through cultural products which has never existed before and never will again. This is a far more intense engagement with the cultural process than anything literature can offer, and gives the undeniable sense (or illusion) of the individual controlling, managing, running, making up his/her involvement with the cultural product. Internet pages are not ‘authored’ in the sense that anyone knows who wrote them, or cares. The majority either require the individual to make them work, like Streetmap or Route Planner, or permit him/her to add to them, like Wikipedia, or through feedback on, for instance, media websites. In all cases, it is intrinsic to the internet that you can easily make up pages yourself (eg blogs).
  • Where once special effects were supposed to make the impossible appear credible, CGI frequently [inadvertently] works to make the possible look artificial, as in much of Lord of the Rings or Gladiator. Battles involving thousands of individuals have really happened; pseudo-modern cinema makes them look as if they have only ever happened in cyberspace.
  • Similarly, television in the pseudo-modern age favours not only reality TV (yet another unapt term), but also shopping channels, and quizzes in which the viewer calls to guess the answer to riddles in the hope of winning money.
  • The purely ‘spectacular’ function of television, as with all the arts, has become a marginal one: what is central now is the busy, active, forging work of the individual who would once have been called its recipient. In all of this, the ‘viewer’ feels powerful and is indeed necessary; the ‘author’ as traditionally understood is either relegated to the status of the one who sets the parameters within which others operate, or becomes simply irrelevant, unknown, sidelined; and the ‘text’ is characterised both by its hyper-ephemerality and by its instability. It is made up by the ‘viewer’, if not in its content then in its sequence – you wouldn’t read Middlemarch by going from page 118 to 316 to 401 to 501, but you might well, and justifiably, read Ceefax that way.
  • A pseudo-modern text lasts an exceptionally brief time. Unlike, say, Fawlty Towers, reality TV programmes cannot be repeated in their original form, since the phone-ins cannot be reproduced, and without the possibility of phoning-in they become a different and far less attractive entity.
  • If scholars give the date they referenced an internet page, it is because the pages disappear or get radically re-cast so quickly. Text messages and emails are extremely difficult to keep in their original form; printing out emails does convert them into something more stable, like a letter, but only by destroying their essential, electronic state.
  • The cultural products of pseudo-modernism are also exceptionally banal
  • Much text messaging and emailing is vapid in comparison with what people of all educational levels used to put into letters.
  • A triteness, a shallowness dominates all.
  • In music, the pseudo-modern superseding of the artist-dominated album as monolithic text by the downloading and mix-and-matching of individual tracks on to an iPod, selected by the listener, was certainly prefigured by the music fan’s creation of compilation tapes a generation ago. But a shift has occurred, in that what was a marginal pastime of the fan has become the dominant and definitive way of consuming music, rendering the idea of the album as a coherent work of art, a body of integrated meaning, obsolete.
  • To a degree, pseudo-modernism is no more than a technologically motivated shift to the cultural centre of something which has always existed (similarly, metafiction has always existed, but was never so fetishised as it was by postmodernism). Television has always used audience participation, just as theatre and other performing arts did before it; but as an option, not as a necessity: pseudo-modern TV programmes have participation built into them.
  • Whereas postmodernism called ‘reality’ into question, pseudo-modernism defines the real implicitly as myself, now, ‘interacting’ with its texts. Thus, pseudo-modernism suggests that whatever it does or makes is what is reality, and a pseudo-modern text may flourish the apparently real in an uncomplicated form: the docu-soap with its hand-held cameras (which, by displaying individuals aware of being regarded, give the viewer the illusion of participation); The Office and The Blair Witch Project, interactive pornography and reality TV; the essayistic cinema of Michael Moore or Morgan Spurlock.
  • whereas postmodernism favoured the ironic, the knowing and the playful, with their allusions to knowledge, history and ambivalence, pseudo-modernism’s typical intellectual states are ignorance, fanaticism and anxiety
  • pseudo-modernism lashes fantastically sophisticated technology to the pursuit of medieval barbarism – as in the uploading of videos of beheadings onto the internet, or the use of mobile phones to film torture in prisons. Beyond this, the destiny of everyone else is to suffer the anxiety of getting hit in the cross-fire. But this fatalistic anxiety extends far beyond geopolitics, into every aspect of contemporary life; from a general fear of social breakdown and identity loss, to a deep unease about diet and health; from anguish about the destructiveness of climate change, to the effects of a new personal ineptitude and helplessness, which yield TV programmes about how to clean your house, bring up your children or remain solvent.
  • Pseudo-modernism belongs to a world pervaded by the encounter between a religiously fanatical segment of the United States, a largely secular but definitionally hyper-religious Israel, and a fanatical sub-section of Muslims scattered across the planet: pseudo-modernism was not born on 11 September 2001, but postmodernism was interred in its rubble.
  • The pseudo-modernist communicates constantly with the other side of the planet, yet needs to be told to eat vegetables to be healthy, a fact self-evident in the Bronze Age. He or she can direct the course of national television programmes, but does not know how to make him or herself something to eat – a characteristic fusion of the childish and the advanced, the powerful and the helpless. For varying reasons, these are people incapable of the “disbelief of Grand Narratives” which Lyotard argued typified postmodernists.
  •  
    Postmodern philosophy emphasises the elusiveness of meaning and knowledge. This is often expressed in postmodern art as a concern with representation and an ironic self-awareness. And the argument that postmodernism is over has already been made philosophically. There are people who have essentially asserted that for a while we believed in postmodern ideas, but not any more, and from now on we're going to believe in critical realism. The weakness in this analysis is that it centres on the academy, on the practices and suppositions of philosophers who may or may not be shifting ground or about to shift - and many academics will simply decide that, finally, they prefer to stay with Foucault [arch postmodernist] than go over to anything else. However, a far more compelling case can be made that postmodernism is dead by looking outside the academy at current cultural production.
Weiye Loh

Philosophy Bites - 0 views

  •  
    Mon, 30 July 2007. Anthony Grayling on Atheism. Is belief in the existence of a God or gods the equivalent of believing that there are fairies at the bottom of the garden? Or can it be defended on the basis of reason or evidence? In this interview for Philosophy Bites, Anthony Grayling gives a philosophical defence of atheism and explains why he believes it to be a well-grounded and ultimately life-affirming position to hold.
Weiye Loh

Rationally Speaking: The sorry state of higher education - 0 views

  • two disconcerting articles crossed my computer screen, both highlighting the increasingly sorry state of higher education, though from very different perspectives. The first is “Ed Dante’s” (actually a pseudonym) piece in the Chronicle of Higher Education, entitled The Shadow Scholar. The second is Gregory Petsko’s A Faustian Bargain, published of all places in Genome Biology.
  • There is much to be learned by educators in the Shadow Scholar piece, except the moral that “Dante” would like us to take from it. The anonymous author writes: “Pointing the finger at me is too easy. Why does my business thrive? Why do so many students prefer to cheat rather than do their own work? Say what you want about me, but I am not the reason your students cheat.”
  • The point is that plagiarism and cheating happen for a variety of reasons, one of which is the existence of people like Mr. Dante and his company, who set up a business that is clearly unethical and should be illegal. So, pointing fingers at him and his ilk is perfectly reasonable. Yes, there obviously is a “market” for cheating in higher education, and there are complex reasons for it, but he is in a position similar to that of the drug dealer who insists that he is simply providing the commodity to satisfy society’s demand. Much too easy of a way out, and one that doesn’t fly in the case of drug dealers, and shouldn’t fly in the case of ghost cheaters.
  • ...16 more annotations...
  • As a teacher at the City University of New York, I am constantly aware of the possibility that my students might cheat on their tests. I do take some elementary precautionary steps
  • Still, my job is not that of the policeman. My students are adults who theoretically are there to learn. If they don’t value that learning and prefer to pay someone else to fake it, so be it, ultimately it is they who lose in the most fundamental sense of the term. Just like drug addicts, to return to my earlier metaphor. And just as in that other case, it is enablers like Mr. Dante who simply can’t duck the moral blame.
  • An open letter to the president of SUNY-Albany, penned by molecular biologist Gregory Petsko. The SUNY-Albany president has recently announced the closing — for budgetary reasons — of the departments of French, Italian, Classics, Russian and Theater Arts at his university.
  • Petsko begins by taking on one of the alleged reasons why SUNY-Albany is slashing the humanities: low enrollment. He correctly points out that the problem can be solved overnight at the stroke of a pen: stop abdicating your responsibilities as educators and actually put constraints on what your students have to take in order to graduate. Make courses in English literature, foreign languages, philosophy and critical thinking, the arts and so on, mandatory or one of a small number of options that the students must consider in order to graduate.
  • But, you might say, that’s cheating the market! Students clearly don’t want to take those courses, and a business should cater to its customers. That type of reasoning is among the most pernicious and idiotic I’ve ever heard. Students are not clients (if anything, their parents, who usually pay the tuition, are), they are not shopping for a new bag or pair of shoes. They do not know what is best for them educationally, that’s why they go to college to begin with. If you are not convinced about how absurd the students-as-clients argument is, consider an analogy: does anyone with functioning brain cells argue that since patients in a hospital pay a bill, they should be dictating how the brain surgeon operates? I didn’t think so.
  • Petsko then tackles the second lame excuse given by the president of SUNY-Albany (and common among the upper administration of plenty of public universities): I can’t do otherwise because of the legislature’s draconian cuts. Except that university budgets are simply too complicated for there not to be any other option. I know this first hand, I’m on a special committee at my own college looking at how to creatively deal with budget cuts handed down to us from the very same (admittedly small minded and dysfunctional) New York state legislature that has prompted SUNY-Albany’s action. As Petsko points out, the president there didn’t even think of involving the faculty and staff in a broad discussion of how to deal with the crisis, he simply announced the cuts on a Friday afternoon and then ran for cover. An example of very poor leadership to say the least, and downright hypocrisy considering all the talk that the same administrator has been dishing out about the university “community.”
  • Finally, there is the argument that the humanities don’t pay their own way, unlike (some of) the sciences (some of the time). That is indubitably true, but irrelevant. Universities are not businesses, they are places of higher learning. Yes, of course they need to deal with budgets, fundraising and all the rest. But the financial and administrative side has one goal and one goal only: to provide the best education to the students who attend that university.
  • That education simply must include the sciences, philosophy, literature, and the arts, as well as more technical or pragmatic offerings such as medicine, business and law. Why? Because that’s the kind of liberal education that makes for an informed and intelligent citizenry, without which our democracy is but empty talk, and our lives nothing but slavery to the marketplace.
  • Maybe this is not how education works in the US. I thought that general (or compulsory) education (i.e., up to high school) is designed to make sure that citizens in a democratic country can perform their civic duties. A balanced and well-rounded education, which includes a healthy mixture of science and humanities, is indeed very important for this purpose. However, college-level education is for personal growth and therefore the person must have a large say about what kind of classes he or she chooses to take. I am disturbed by Massimo's hospital analogy. Students are not ill. They don't go to college to be cured, or to be good citizens. They go to college to learn things that *they* want to learn. Patients are passive. Students are not. I agree that students typically do not know what kind of education is good for them. But who does?
  • students do have a say in their education. They pick their major, and there are electives. But I object to the idea that they can customize their major any way they want. That assumes they know what the best education for them is; they don't. That's the point of education.
  • The students are in your class to get a good grade, any learning that takes place is purely incidental. Those good grades will look good on their transcript and might convince a future employer that they are smart and thus are worth paying more.
  • I don't know what the dollar to GPA exchange rate is these days, but I don't doubt that there is one.
  • Just how many of your students do you think will remember the extensive complex jargon of philosophy more than a couple of months after they leave your classroom?
  • "and our lives nothing but slavery to the marketplace." We are there. Welcome. Where have you been all this time? In a capitalistic/plutocratic society money is power (and free speech too, according to the Supreme Court). Money means a larger/better house/car/clothing/vacation than your neighbor and consequently better mating opportunities. You can mostly blame the women for that one, I think, just like the peacock's tail.
  • If a student of surgery fails to learn they might maim, kill or cripple someone. If an engineer of airplanes fails to learn they might design a faulty aircraft that fails and kills people. If a student of chemistry fails to learn they might design a faulty drug with unintended and unfortunate side effects, but what exactly would be the harm if a student of philosophy fails to learn what Aristotle had to say about elements or what Plato had to say about perfect forms? These things are so divorced from people's everyday activities as to be rendered all but meaningless.
  • human knowledge grows by leaps and bounds every day, but human brain capacity does not, so the portion of human knowledge you can personally hold gets smaller by the minute. Learn (and remember) as much as you can as fast as you can and you will still lose ground. You certainly have your work cut out for you emphasizing the importance of Thales in the Age of Twitter and whatever follows it next year.
Weiye Loh

Mystery and Evidence - NYTimes.com - 0 views

  • a very natural way for atheists to react to religious claims: to ask for evidence, and reject these claims in the absence of it. Many of the several hundred comments that followed two earlier Stone posts “Philosophy and Faith” and “On Dawkins’s Atheism: A Response,” both by Gary Gutting, took this stance. Certainly this is the way that today’s “new atheists” tend to approach religion. According to their view, religions — by this they mean basically Christianity, Judaism and Islam and I will follow them in this — are largely in the business of making claims about the universe that are a bit like scientific hypotheses. In other words, they are claims — like the claim that God created the world — that are supported by evidence, that are proved by arguments and tested against our experience of the world. And against the evidence, these hypotheses do not seem to fare well.
  • But is this the right way to think about religion? Here I want to suggest that it is not, and to try and locate what seem to me some significant differences between science and religion
  • To begin with, scientific explanation is a very specific and technical kind of knowledge. It requires patience, pedantry, a narrowing of focus and (in the case of the most profound scientific theories) considerable mathematical knowledge and ability. No-one can understand quantum theory — by any account, the most successful physical theory there has ever been — unless they grasp the underlying mathematics. Anyone who says otherwise is fooling themselves.
  • ...16 more annotations...
  • Religious belief is a very different kind of thing. It is not restricted only to those with a certain education or knowledge, it does not require years of training, it is not specialized and it is not technical. (I’m talking here about the content of what people who regularly attend church, mosque or synagogue take themselves to be thinking; I’m not talking about how theologians interpret this content.)
  • while religious belief is widespread, scientific knowledge is not. I would guess that very few people in the world are actually interested in the details of contemporary scientific theories. Why? One obvious reason is that many lack access to this knowledge. Another reason is that even when they have access, these theories require sophisticated knowledge and abilities, which not everyone is capable of getting.
  • most people aren’t deeply interested in science, even when they have the opportunity and the basic intellectual capacity to learn about it. Of course, educated people who know about science know roughly what Einstein, Newton and Darwin said. Many educated people accept the modern scientific view of the world and understand its main outlines. But this is not the same as being interested in the details of science, or being immersed in scientific thinking.
  • This lack of interest in science contrasts sharply with the worldwide interest in religion. It’s hard to say whether religion is in decline or growing, partly because it’s hard to identify only one thing as religion — not a question I can address here. But it’s pretty obvious that whatever it is, religion commands and absorbs the passions and intellects of hundreds of millions of people, many more people than science does. Why is this? Is it because — as the new atheists might argue — they want to explain the world in a scientific kind of way, but since they have not been properly educated they haven’t quite got there yet? Or is it because so many people are incurably irrational and are incapable of scientific thinking? Or is something else going on?
  • Some philosophers have said that religion is so unlike science that it has its own “grammar” or “logic” and should not be held accountable to the same standards as scientific or ordinary empirical belief. When Christians express their belief that “Christ has risen,” for example, they should not be taken as making a factual claim, but as expressing their commitment to what Wittgenstein called a certain “form of life,” a way of seeing significance in the world, a moral and practical outlook which is worlds away from scientific explanation.
  • This view has some merits, as we shall see, but it grossly misrepresents some central phenomena of religion. It is absolutely essential to religions that they make certain factual or historical claims. When Saint Paul says “if Christ is not risen, then our preaching is in vain and our faith is in vain” he is saying that the point of his faith depends on a certain historical occurrence.
  • Theologians will debate exactly what it means to claim that Christ has risen, what exactly the meaning and significance of this occurrence is, and will give more or less sophisticated accounts of it. But all I am saying is that whatever its specific nature, Christians must hold that there was such an occurrence. Christianity does make factual, historical claims. But this is not the same as being a kind of proto-science. This will become clear if we reflect a bit on what science involves.
  • The essence of science involves making hypotheses about the causes and natures of things, in order to explain the phenomena we observe around us, and to predict their future behavior. Some sciences — medical science, for example — make hypotheses about the causes of diseases and test them by intervening. Others — cosmology, for example — make hypotheses that are more remote from everyday causes, and involve a high level of mathematical abstraction and idealization. Scientific reasoning involves an obligation to hold a hypothesis only to the extent that the evidence requires it. Scientists should not accept hypotheses which are “ad hoc” — that is, just tailored for one specific situation but cannot be generalized to others. Most scientific theories involve some kind of generalization: they don’t just make claims about one thing, but about things of a general kind. And their hypotheses are designed, on the whole, to make predictions; and if these predictions don’t come out true, then this is something for the scientists to worry about.
  • Religions do not construct hypotheses in this sense. I said above that Christianity rests upon certain historical claims, like the claim of the resurrection. But this is not enough to make scientific hypotheses central to Christianity, any more than it makes such hypotheses central to history. It is true, as I have just said, that Christianity does place certain historical events at the heart of its conception of the world, and to that extent, one cannot be a Christian unless one believes that these events happened. Speaking for myself, it is because I reject the factual basis of the central Christian doctrines that I consider myself an atheist. But I do not reject these claims because I think they are bad hypotheses in the scientific sense. Not all factual claims are scientific hypotheses. So I disagree with Richard Dawkins when he says “religions make existence claims, and this means scientific claims.”
  • Taken as hypotheses, religious claims do very badly: they are ad hoc, they are arbitrary, they rarely make predictions and when they do they almost never come true. Yet the striking fact is that it does not worry Christians when this happens. In the gospels Jesus predicts the end of the world and the coming of the kingdom of God. It does not worry believers that Jesus was wrong (even if it causes theologians to reinterpret what is meant by ‘the kingdom of God’). If Jesus was framing something like a scientific hypothesis, then it should worry them. Critics of religion might say that this just shows the manifest irrationality of religion. But what it suggests to me is that something else is going on, other than hypothesis formation.
  • Religious belief tolerates a high degree of mystery and ignorance in its understanding of the world. When the devout pray, and their prayers are not answered, they do not take this as evidence which has to be weighed alongside all the other evidence that prayer is effective. They feel no obligation whatsoever to weigh the evidence. If God does not answer their prayers, well, there must be some explanation of this, even though we may never know it. Why do people suffer if an omnipotent God loves them? Many complex answers have been offered, but in the end they come down to this: it’s a mystery.
  • Science too has its share of mysteries (or rather: things that must simply be accepted without further explanation). But one aim of science is to minimize such things, to reduce the number of primitive concepts or primitive explanations. The religious attitude is very different. It does not seek to minimize mystery. Mysteries are accepted as a consequence of what, for the religious, makes the world meaningful.
  • Religion is an attempt to make sense of the world, but it does not try and do this in the way science does. Science makes sense of the world by showing how things conform to its hypotheses. The characteristic mode of scientific explanation is showing how events fit into a general pattern.
  • Religion, on the other hand, attempts to make sense of the world by seeing a kind of meaning or significance in things. This kind of significance does not need laws or generalizations, but just the sense that the everyday world we experience is not all there is, and that behind it all is the mystery of God’s presence. The believer is already convinced that God is present in everything, even if they cannot explain this or support it with evidence. But it makes sense of their life by suffusing it with meaning. This is the attitude (seeing God in everything) expressed in George Herbert’s poem, “The Elixir.” Equipped with this attitude, even the most miserable tasks can come to have value: Who sweeps a room as for Thy laws/ Makes that and th’ action fine.
  • None of these remarks are intended as being for or against religion. Rather, they are part of an attempt (by an atheist, from the outside) to understand what it is. Those who criticize religion should have an accurate understanding of what it is they are criticizing. But to understand a world view, or a philosophy or system of thought, it is not enough just to understand the propositions it contains. You also have to understand what is central and what is peripheral to the view. Religions do make factual and historical claims, and if these claims are false, then the religions fail. But this dependence on fact does not make religious claims anything like hypotheses in the scientific sense. Hypotheses are not central. Rather, what is central is the commitment to the meaningfulness (and therefore the mystery) of the world.
  • while religious thinking is widespread in the world, scientific thinking is not. I don’t think that this can be accounted for merely in terms of the ignorance or irrationality of human beings. Rather, it is because of the kind of intellectual, emotional and practical appeal that religion has for people, which is a very different appeal from the kind of appeal that science has. Stephen Jay Gould once argued that religion and science are “non-overlapping magisteria.” If he meant by this that religion makes no factual claims which can be refuted by empirical investigations, then he was wrong. But if he meant that religion and science are very different kinds of attempt to understand the world, then he was certainly right.
  •  
    Mystery and Evidence By TIM CRANE
Weiye Loh

Rationally Speaking: On Utilitarianism and Consequentialism - 0 views

  • Utilitarianism and consequentialism are different, yet closely related philosophical positions. Utilitarians are usually consequentialists, and the two views mesh in many areas, but each rests on a different claim
  • Utilitarianism's starting point is that we all attempt to seek happiness and avoid pain, and therefore our moral focus ought to center on maximizing happiness (or, human flourishing generally) and minimizing pain for the greatest number of people. This is both about what our goals should be and how to achieve them.
  • Consequentialism asserts that determining the greatest good for the greatest number of people (the utilitarian goal) is a matter of measuring outcome, and so decisions about what is moral should depend on the potential or realized costs and benefits of a moral belief or action.
  • ...17 more annotations...
  • first question we can reasonably ask is whether all moral systems are indeed focused on benefiting human happiness and decreasing pain.
  • Jeremy Bentham, the founder of utilitarianism, wrote the following in his Introduction to the Principles of Morals and Legislation: “When a man attempts to combat the principle of utility, it is with reasons drawn, without his being aware of it, from that very principle itself.”
  • Michael Sandel discusses this line of thought in his excellent book, Justice: What’s the Right Thing to Do?, and sums up Bentham’s argument as such: “All moral quarrels, properly understood, are [for Bentham] disagreements about how to apply the utilitarian principle of maximizing pleasure and minimizing pain, not about the principle itself.”
  • But Bentham’s definition of utilitarianism is perhaps too broad: are fundamentalist Christians or Muslims really utilitarians, just with different ideas about how to facilitate human flourishing?
  • one wonders whether this makes the word so all-encompassing in meaning as to render it useless.
  • Yet, even if pain and happiness are the objects of moral concern, so what? As philosopher Simon Blackburn recently pointed out, “Every moral philosopher knows that moral philosophy is functionally about reducing suffering and increasing human flourishing.” But is that the central and sole focus of all moral philosophies? Don’t moral systems vary in their core focuses?
  • Consider the observation that religious belief makes humans happier, on average
  • Secularists would rightly resist the idea that religious belief is moral if it makes people happier. They would reject the very idea because deep down, they value truth – a value that is non-negotiable. Utilitarians would assert that truth is just another utility, for people can only value truth if they take it to be beneficial to human happiness and flourishing.
  • We might all agree that morality is “functionally about reducing suffering and increasing human flourishing,” as Blackburn says, but how do we achieve that? Consequentialism posits that we can get there by weighing the consequences of beliefs and actions as they relate to human happiness and pain. Sam Harris recently wrote: “It is true that many people believe that ‘there are non-consequentialist ways of approaching morality,’ but I think that they are wrong. In my experience, when you scratch the surface on any deontologist, you find a consequentialist just waiting to get out. For instance, I think that Kant's Categorical Imperative only qualifies as a rational standard of morality given the assumption that it will be generally beneficial (as J.S. Mill pointed out at the beginning of Utilitarianism). Ditto for religious morality.”
  • we might wonder about the elasticity of words, in this case consequentialism. Do fundamentalist Christians and Muslims count as consequentialists? Is consequentialism so empty of content that to be a consequentialist one need only think he or she is benefiting humanity in some way?
  • Harris’ argument is that one cannot adhere to a certain conception of morality without believing it is beneficial to society
  • This still seems somewhat obvious to me as a general statement about morality, but is it really the point of consequentialism? Not really. Consequentialism is much more focused than that. Consider the issue of corporal punishment in schools. Harris has stated that we would be forced to admit that corporal punishment is moral if studies showed that “subjecting children to ‘pain, violence, and public humiliation’ leads to ‘healthy emotional development and good behavior’ (i.e., it conduces to their general well-being and to the well-being of society). If it did, well then yes, I would admit that it was moral. In fact, it would appear moral to more or less everyone.” Harris is being rhetorical – he does not believe corporal punishment is moral – but the point stands.
  • An immediate pitfall of this approach is that it does not qualify corporal punishment as the best way to raise emotionally healthy children who behave well.
  • The virtue ethicists inside us would argue that we ought not to foster a society in which people beat and humiliate children, never mind the consequences. There is also a reasonable and powerful argument based on personal freedom. Don’t children have the right to be free from violence in the public classroom? Don’t children have the right not to suffer intentional harm without consent? Isn’t that part of their “moral well-being”?
  • If consequences were really at the heart of all our moral deliberations, we might live in a very different society.
  • what if economies based on slavery lead to an increase in general happiness and flourishing for their respective societies? Would we admit slavery was moral? I hope not, because we value certain ideas about human rights and freedom. Or, what if the death penalty truly deterred crime? And what if we knew everyone we killed was guilty as charged, meaning no need for The Innocence Project? I would still object, on the grounds that it is morally wrong for us to kill people, even if they have committed the crime of which they are accused. Certain things hold, no matter the consequences.
  • We all do care about increasing human happiness and flourishing, and decreasing pain and suffering, and we all do care about the consequences of our beliefs and actions. But we focus on those criteria to differing degrees, and we have differing conceptions of how to achieve the respective goals – making us perhaps utilitarians and consequentialists in part, but not in whole.
  •  
    Is everyone a utilitarian and/or consequentialist, whether or not they know it? That is what some people - from Jeremy Bentham and John Stuart Mill to Sam Harris - would have you believe. But there are good reasons to be skeptical of such claims.
Weiye Loh

Balderdash: Liberalism and Tolerance - 0 views

  •  
    "Politics can be a sensitive subject and a number of SNS users have decided to block, unfriend, or hide someone because of their politics or posting activities. In all, 18% of social networking site users have taken one of those steps... Liberals are the most likely to have taken each of these steps to block, unfriend, or hide. In all, 28% of liberals have blocked, unfriended, or hidden someone on SNS because of one of these reasons, compared with 16% of conservatives and 14% of moderates" Tom Lehrer sums up the intolerance of the philosophy of tolerance best: "I know that there are people who do not love their fellow man, and I hate people like that!"
Weiye Loh

Fatalism (Stanford Encyclopedia of Philosophy) - 0 views

  • This view may be argued for in various ways: by appeal to logical laws and metaphysical necessities; by appeal to the existence and nature of God; by appeal to causal determinism. When argued for in the first way, it is commonly called “Logical fatalism” (or, in some cases, “Metaphysical fatalism”); when argued for in the second way, it is commonly called “Theological fatalism”. When argued for in the third way it is not now commonly referred to as “fatalism” at all, and such arguments will not be discussed here.
  •  
    "fatalism" is commonly used to refer to an attitude of resignation in the face of some future event or events which are thought to be inevitable
Weiye Loh

The hidden philosophy of David Foster Wallace - Salon.com Mobile - 0 views

  • Taylor's argument, which he himself found distasteful, was that certain logical and seemingly unarguable premises lead to the conclusion that even in matters of human choice, the future is as set in stone as the past. We may think we can affect it, but we can't.
  • human responsibility — that, with advances in neuroscience, is of increasing urgency in jurisprudence, social codes and personal conduct. And it also shows a brilliant young man struggling against fatalism, performing exquisite exercises to convince others, and maybe himself, that what we choose to do is what determines the future, rather than the future more or less determining what we choose to do. This intellectual struggle on Wallace's part seems now a kind of emotional foreshadowing of his suicide. He was a victim of depression from an early age — even during his undergraduate years — and the future never looks more intractable than it does to someone who is depressed.
  • "Fate, Time, and Language" reminded me of how fond philosophers are of extreme situations in creating their thought experiments. In this book alone we find a naval battle, the gallows, a shotgun, poison, an accident that leads to paraplegia, somebody stabbed and killed, and so on. Why not say "I have a pretzel in my hand today. Tomorrow I will have eaten it or not eaten it" instead of "I have a gun in my hand and I will either shoot you through the heart and feast on your flesh or I won't"? Well, OK — the answer is easy: The extreme and violent scenarios catch our attention more forcefully than pretzels do. Also, philosophers, sequestered and meditative as they must be, may long for real action — beyond beekeeping.
  • ...1 more annotation...
  • Wallace, in his essay, at the very center of trying to show that we can indeed make meaningful choices, places a terrorist in the middle of Amherst's campus with his finger on the trigger mechanism of a nuclear weapon. It is by far the most narratively arresting moment in all of this material, and it says far more about the author's approaching antiestablishment explosions of prose and his extreme emotional makeup than it does about tweedy profs fantasizing about ordering their ships into battle. For, after all, who, besides everyone around him, would the terrorist have killed?
  •  
    In 1962, a philosopher (and world-famous beekeeper) named Richard Taylor published a soon-to-be-notorious essay called "Fatalism" in the Philosophical Review.
Weiye Loh

Haidt Requests Apology from Pigliucci « YourMorals.Org Moral Psychology Blog - 0 views

  • Here is my response to Pigliucci, which I posted as a comment on his blog. (Well, I submitted it as a comment on Feb 13 at 4pm EST, but he has not approved it yet, so it doesn’t show yet over there.)
  • Massimo Pigliucci, the chair of the philosophy department at CUNY-Lehman, wrote a critique of me on his blog, Rationally Speaking, in which he accused me of professional misconduct.
  • Dear Prof. Pigliucci: Let me be certain that I have understood you. You did not watch my talk, even though a link to it was embedded in the Tierney article. Instead, you picked out one piece of my argument (that the near-total absence of conservatives in social psychology is evidence of discrimination) and you made the standard response, the one that most bloggers have made: underrepresentation of any group is not, by itself, evidence of discrimination. That’s a good point; I made it myself quite explicitly in my talk: Of course there are many reasons why conservatives would be underrepresented in social psychology, and most of them have nothing to do with discrimination or hostile climate. Research on personality consistently shows that liberals are higher on openness to experience. They’re more interested in novel ideas, and in trying to use science to improve society. So of course our field is and always will be mostly liberal. I don’t think we should ever strive for exact proportional representation.
  • ...6 more annotations...
  • I made it clear that I’m not concerned about simple underrepresentation. I did not even make the moral argument that we need ideological diversity to right an injustice. Rather, I focused on what happens when a scientific community shares sacred values. A tribal moral community arises, one that actively suppresses ideas that are sacrilegious, and that discourages non-believers from entering. I argued that my field has become a tribal moral community, and the absence of conservatives (not just their underrepresentation) has serious consequences for the quality of our science. We rely on our peers to find flaws in our arguments, but when there is essentially nobody out there to challenge liberal assumptions and interpretations of experimental findings, the peer review process breaks down, at least for work that is related to those sacred values.
  • The fact that you criticized me without making an effort to understand me is not surprising.
  • Rather, what sets you apart from all other bloggers who are members of the academy is what you did next. You accused me of professional misconduct—lying, essentially–and you speculated as to my true motive: I suspect that Haidt is either an incompetent psychologist (not likely) or is disingenuously saying the sort of things controversial enough to get him in the New York Times (more likely).
  • As far as I can tell your evidence for these accusations is that my argument was so bad that I couldn’t have believed it myself. Here is how you justified your accusations: A serious social scientist doesn’t go around crying out discrimination just on the basis of unequal numbers. If that were the case, the NBA would be sued for discriminating against short people, dance companies against people without spatial coordination, and newspapers against dyslexics
  • Accusations of professional misconduct are sensibly made only if one has a reasonable and detailed understanding of the facts of the case, and can bring forth evidence of misconduct. Pigliucci has made no effort to acquire such an understanding, nor has he presented any evidence to support his accusation. He simply took one claim from the Tierney article and then ran wild with speculation about Haidt’s motives. It was pretty silly of him, and downright irresponsible of Pigliucci to publish that garbage without even knowing what Haidt said.
  • I challenge you to watch the video of my talk (click here) and then either 1) Retract your blog post and apologize publicly for calling me a liar or 2) State on your blog that you stand by your original post. If you do stand by your post, even after hearing my argument, then the world can decide for itself which of us is right, and which of us best models the ideals of science, philosophy, and the Enlightenment which you claim for yourself in the header of your blog, “Rationally Speaking.” Jonathan Haidt
Weiye Loh

The Ashtray: The Ultimatum (Part 1) - NYTimes.com - 0 views

  • “Under no circumstances are you to go to those lectures. Do you hear me?” Kuhn, the head of the Program in the History and Philosophy of Science at Princeton where I was a graduate student, had issued an ultimatum. It concerned the philosopher Saul Kripke’s lectures — later to be called “Naming and Necessity” — which he had originally given at Princeton in 1970 and planned to give again in the Fall, 1972.
  • Whiggishness — in history of science, the tendency to evaluate and interpret past scientific theories not on their own terms, but in the context of current knowledge. The term comes from Herbert Butterfield’s “The Whig Interpretation of History,” written when Butterfield, a future Regius professor of history at Cambridge, was only 31 years old. Butterfield had complained about Whiggishness, describing it as “…the study of the past with direct and perpetual reference to the present” – the tendency to see all history as progressive, and in an extreme form, as an inexorable march to greater liberty and enlightenment. [3] For Butterfield, on the other hand, “…real historical understanding” can be achieved only by “attempting to see life with the eyes of another century than our own.” [4][5].
  • Kuhn had attacked my Whiggish use of the term “displacement current.” [6] I had failed, in his view, to put myself in the mindset of Maxwell’s first attempts at creating a theory of electricity and magnetism. I felt that Kuhn had misinterpreted my paper, and that he — not me — had provided a Whiggish interpretation of Maxwell. I said, “You refuse to look through my telescope.” And he said, “It’s not a telescope, Errol. It’s a kaleidoscope.” (In this respect, he was probably right.) [7].
  • ...9 more annotations...
  • I asked him, “If paradigms are really incommensurable, how is history of science possible? Wouldn’t we be merely interpreting the past in the light of the present? Wouldn’t the past be inaccessible to us? Wouldn’t it be ‘incommensurable?’ ” [8] He started moaning. He put his head in his hands and was muttering, “He’s trying to kill me. He’s trying to kill me.” And then I added, “…except for someone who imagines himself to be God.” It was at this point that Kuhn threw the ashtray at me.
  • I call Kuhn’s reply “The Ashtray Argument.” If someone says something you don’t like, you throw something at him. Preferably something large, heavy, and with sharp edges. Perhaps we were engaged in a debate on the nature of language, meaning and truth. But maybe we just wanted to kill each other.
  • That's the problem with relativism: Who's to say who's right and who's wrong? Somehow I'm not surprised to hear Kuhn was an ashtray-hurler. In the end, what other argument could he make?
  • For us to have a conversation and come to an agreement about the meaning of some word without having to refer to some outside authority like a dictionary, we would of necessity have to be satisfied that our agreement was genuine and not just a polite acknowledgement of each other's right to their opinion, can you agree with that? If so, then let's see if we can agree on the meaning of the word 'know' because that may be the crux of the matter. When I use the word 'know' I mean more than the capacity to apprehend some aspect of the world through language or some other representational symbolism. Included in the word 'know' is the direct sensorial perception of some aspect of the world. For example, I sense the floor that my feet are now resting upon. I 'know' the floor is really there, I can sense it. Perhaps I don't 'know' what the floor is made of, who put it there, and other incidental facts one could know through the usual symbolism such as language, as in a story someone tells me. Nevertheless, the reality I need to 'know' is that the floor, or whatever you may wish to call the solid - relative to my body - flat and level surface supported by more structure than the earth, is really there and reliably capable of supporting me. This is true and useful knowledge that goes directly from the floor itself to my knowing about it - via sensation - and has nothing to do with my interpretive system.
  • Now I am interested in 'knowing' my feet in the same way that my feet and the whole body they are connected to 'know' the floor. I sense my feet sensing the floor. My feet are as real as the floor and I know they are there, sensing the floor because I can sense them. Furthermore, now I 'know' that it is 'I' sensing my feet, sensing the floor. Do you see where I am going with this line of thought? I am including in the word 'know' more meaning than it is commonly given by everyday language. Perhaps it sounds as if I want to expand on the Cartesian formula of cogito ergo sum, and in truth I prefer to say I sense therefore I am. It is my sensations of the world first and foremost that my awareness, such as it is, is actively engaged with reality. Now, any healthy normal animal senses the world but we can't 'know' if they experience reality as we do since we can't have a conversation with them to arrive at agreement. But we humans can have this conversation and possibly agree that we can 'know' the world through sensation. We can even know what is 'I' through sensation. In fact, there is no other way to know 'I' except through sensation. Thought is symbolic representation, not direct sensing, so even though the thoughtful modality of regarding the world may be a far more reliable modality than sensation in predicting what might happen next, its very capacity for such accurate prediction is its biggest weakness, which is its capacity for error
  • Sensation cannot be 'wrong' unless it is used to predict outcomes. Thought can be wrong for both predicting outcomes and for 'knowing' reality. Sensation alone can 'know' reality even though it is relatively unreliable, useless even, for making predictions.
  • If we prioritize our interests by placing predictability over pure knowing through sensation, then of course we will not value the 'knowledge' to be gained through sensation. But if we can switch the priorities - out of sheer curiosity perhaps - then we can enter a realm of knowledge through sensation that is unbelievably spectacular. Our bodies are 'made of' reality, and by methodically exercising our nascent capacity for self sensing, we can connect our knowing 'I' to reality directly. We will not be able to 'know' what it is that we are experiencing in the way we might wish, which is to be able to predict what will happen next or to represent to ourselves symbolically what we might experience when we turn our attention to that sensation. But we can arrive at a depth and breadth of 'knowing' that is utterly unprecedented in our lives by operating that modality.
  • One of the impressions that comes from a sustained practice of self sensing is a clearer feeling for what "I" is and why we have a word for that self referential phenomenon, seemingly located somewhere behind our eyes and between our ears. The thing we call "I" or "me", depending on the context, turns out to be a moving point, a convergence vector for a variety of images, feelings and sensations. It is a reference point into which certain impressions flow and out of which certain impulses to act diverge and which may or may not animate certain muscle groups into action. Following this tricky exercise in attention and sensation, we can quickly see for ourselves that attention is more like a focused beam and awareness is more like a diffuse cloud, but both are composed of energy, and like all energy they vibrate, they oscillate with a certain frequency. That's it for now.
  • I loved the writer's efforts to find a fixed definition of “Incommensurability;” there was of course never a concrete meaning behind the word. Smoke and mirrors.
Weiye Loh

Designing Minds: Uncovered Video Profiles of Prominent Designers | Brain Pickings - 0 views

  • “My favorite quote about what is art and what is design and what might be the difference comes from Donald Judd: ‘Design has to work, art doesn’t.’ And these things all have to work. They have a function outside my desire for self-expression.” ~ Stefan Sagmeister

  • “When designers are given the opportunity to have a bigger role, real change, real transformation actually happens.” ~ Yves Behar

  •  
    In 2008, a now-defunct podcast program by Adobe called Designing Minds - not to be confused with frogdesign's excellent design mind magazine - did a series of video profiles of prominent artists and designers, including Stefan Sagmeister (whose Things I have learned in my life so far isn't merely one of the best-produced, most beautiful design books of the past decade, it's also a poignant piece of modern existential philosophy), Yves Behar (of One Laptop Per Child fame), Marian Bantjes (whose I Wonder remains my favorite typographic treasure) and many more, offering a rare glimpse of these remarkable creators' life stories, worldviews and the precious peculiarities that make them be who they are and create what they create
Weiye Loh

Rationally Speaking: Evolution as pseudoscience? - 0 views

  • I have been intrigued by an essay by my colleague Michael Ruse, entitled “Evolution and the idea of social progress,” published in a collection that I am reviewing, Biology and Ideology from Descartes to Dawkins (gotta love the title!), edited by Denis Alexander and Ronald Numbers.
  • Ruse's essay in the Alexander-Numbers collection questions the received story about the early evolution of evolutionary theory, which sees the stuff that immediately preceded Darwin — from Lamarck to Erasmus Darwin — as protoscience, the immature version of the full fledged science that biology became after Chuck's publication of the Origin of Species. Instead, Ruse thinks that pre-Darwinian evolutionists really engaged in pseudoscience, and that it took a very conscious and precise effort on Darwin’s part to sweep away all the garbage and establish a discipline with empirical and theoretical content analogous to that of the chemistry and physics of the time.
  • Ruse asserts that many serious intellectuals of the late 18th and early 19th century actually thought of evolution as pseudoscience, and he is careful to point out that the term “pseudoscience” had been used at least since 1843 (by the physiologist Francois Magendie)
  • ...17 more annotations...
  • Ruse’s somewhat surprising yet intriguing claim is that “before Charles Darwin, evolution was an epiphenomenon of the ideology of [social] progress, a pseudoscience and seen as such. Liked by some for that very reason, despised by others for that very reason.”
  • Indeed, the link between evolution and the idea of human social-cultural progress was very strong before Darwin, and was one of the main things Darwin got rid of.
  • The encyclopedist Denis Diderot was typical in this respect: “The Tahitian is at a primary stage in the development of the world, the European is at its old age. The interval separating us is greater than that between the new-born child and the decrepit old man.” Similar nonsensical views can be found in Lamarck, Erasmus, and Chambers, the anonymous author of The Vestiges of the Natural History of Creation, usually considered the last protoscientific book on evolution to precede the Origin.
  • On the other side of the divide were social conservatives like the great anatomist George Cuvier, who rejected the idea of evolution — according to Ruse — not as much on scientific grounds as on political and ideological ones. Indeed, books like Erasmus’ Zoonomia and Chambers’ Vestiges were simply not much better than pseudoscientific treatises on, say, alchemy before the advent of modern chemistry.
  • people were well aware of this sorry situation, so much so that astronomer John Herschel referred to the question of the history of life as “the mystery of mysteries,” a phrase consciously adopted by Darwin in the Origin. Darwin set out to solve that mystery under the influence of three great thinkers: Newton, the above mentioned Herschel, and the philosopher William Whewell (whom Darwin knew and assiduously frequented in his youth)
  • Darwin was a graduate of the University of Cambridge, which had also been Newton’s home. Chuck got drilled early on during his Cambridge education with the idea that good science is about finding mechanisms (vera causa), something like the idea of gravitational attraction underpinning Newtonian mechanics. He reflected that all the talk of evolution up to then — including his grandfather’s — was empty, without a mechanism that could turn the idea into a scientific research program.
  • The second important influence was Herschel’s Preliminary Discourse on the Study of Natural Philosophy, published in 1831 and read by Darwin shortly thereafter, in which Herschel sets out to give his own take on what today we would call the demarcation problem, i.e. what methodology is distinctive of good science. One of Herschel’s points was to stress the usefulness of analogical reasoning
  • Finally, and perhaps most crucially, Darwin also read (twice!) Whewell’s History of the Inductive Sciences, which appeared in 1837. In it, Whewell sets out his notion that good scientific inductive reasoning proceeds by a consilience of ideas, a situation in which multiple independent lines of evidence point to the same conclusion.
  • the first part of the Origin, where Darwin introduces the concept of natural selection by way of analogy with artificial selection can be read as the result of Herschel’s influence (natural selection is the vera causa of evolution)
  • the second part of the book, constituting Darwin's famous “long argument,” applies Whewell’s method of consilience by bringing in evidence from a number of disparate fields, from embryology to paleontology to biogeography.
  • What, then, happened to the strict coupling of the ideas of social and biological progress that had preceded Darwin? While he still believed in the former, the latter was no longer an integral part of evolution, because natural selection makes things “better” only in a relative fashion. There is no meaningful sense in which, say, a large brain is better than very fast legs or sharp claws, as long as you still manage to have dinner and avoid being dinner by the end of the day (or, more precisely, by the time you reproduce).
  • Ruse’s claim that evolution transitioned not from protoscience to science, but from pseudoscience, makes sense to me given the historical and philosophical developments. It wasn’t the first time either. Just think about the already mentioned shift from alchemy to chemistry
  • Of course, the distinction between pseudoscience and protoscience is itself fuzzy, but we do have what I think are clear examples of the latter that cannot reasonably be confused with the former, SETI for one, and arguably Ptolemaic astronomy. We also have pretty obvious instances of pseudoscience (the usual suspects: astrology, ufology, etc.), so the distinction — as long as it is not stretched beyond usefulness — is interesting and defensible.
  • It is amusing to speculate which, if any, of the modern pseudosciences (cryonics, singularitarianism) might turn out to be able to transition in one form or another to actual sciences. To do so, they may need to find their philosophically and scientifically savvy Darwin, and a likely bet — if history teaches us anything — is that, should they succeed in this transition, their mature form will look as different from the original as chemistry and alchemy. Or as Darwinism and pre-Darwinian evolutionism.
  • Darwin called the Origin "one long argument," but I really do think that recognizing that the book contains (at least) two arguments could help to dispel that whole "just a theory" canard. The first half of the book is devoted to demonstrating that natural selection is the true cause of evolution; vera causa arguments require proof that the cause's effect be demonstrated as fact, so the second half of the book is devoted to a demonstration that evolution has really happened. In other words, evolution is a demonstrable fact and natural selection is the theory that explains that fact, just as the motion of the planets is a fact and gravity is a theory that explains it.
  • Cryogenics is the study of the production of low temperatures and the behavior of materials at those temperatures. It is a legitimate branch of physics and has been for a long time. I think you meant 'cryonics'.
  • The Singularity means different things to different people. It is uncharitable to dismiss all "singularitarians" by debunking Kurzweil. He is low hanging fruit. Reach for something higher.
  •  
    "before Charles Darwin, evolution was an epiphenomenon of the ideology of [social] progress, a pseudoscience and seen as such. Liked by some for that very reason, despised by others for that very reason."