
New Media Ethics 2009 course: Group items tagged Liberal


Weiye Loh

MacIntyre on money « Prospect Magazine - 0 views

  • MacIntyre has often given the impression of a robe-ripping Savonarola. He has lambasted the heirs to the principal western ethical schools: John Locke’s social contract, Immanuel Kant’s categorical imperative, Jeremy Bentham’s utilitarian “the greatest happiness for the greatest number.” Yet his is not a lone voice in the wilderness. He can claim connections with a trio of 20th-century intellectual heavyweights: the late Elizabeth Anscombe, her surviving husband, Peter Geach, and the Canadian philosopher Charles Taylor, winner in 2007 of the Templeton prize. What all four have in common is their Catholic faith, enthusiasm for Aristotle’s telos (life goals), and promotion of Thomism, the philosophy of St Thomas Aquinas who married Christianity and Aristotle. Leo XIII (pope from 1878 to 1903), who revived Thomism while condemning communism and unfettered capitalism, is also an influence.
  • MacIntyre’s key moral and political idea is that to be human is to be an Aristotelian goal-driven, social animal. Being good, according to Aristotle, consists in a creature (whether plant, animal, or human) acting according to its nature—its telos, or purpose. The telos for human beings is to generate a communal life with others; and the good society is composed of many independent, self-reliant groups.
  • MacIntyre differs from all these influences and alliances, from Leo XIII onwards, in his residual respect for Marx’s critique of capitalism.
  • MacIntyre begins his Cambridge talk by asserting that the 2008 economic crisis was not due to a failure of business ethics.
  • he has argued that moral behaviour begins with the good practice of a profession, trade, or art: playing the violin, cutting hair, brick-laying, teaching philosophy.
  • In other words, the virtues necessary for human flourishing are not a result of the top-down application of abstract ethical principles, but the development of good character in everyday life.
  • After Virtue, which is in essence an attack on the failings of the Enlightenment, has in its sights a catalogue of modern assumptions of beneficence: liberalism, humanism, individualism, capitalism. MacIntyre yearns for a single, shared view of the good life as opposed to modern pluralism’s assumption that there can be many competing views of how to live well.
  • In philosophy he attacks consequentialism, the view that what matters about an action is its consequences, which is usually coupled with utilitarianism’s “greatest happiness” principle. He also rejects Kantianism—the identification of universal ethical maxims based on reason and applied to circumstances top down. MacIntyre’s critique routinely cites the contradictory moral principles adopted by the allies in the second world war. Britain invoked a Kantian reason for declaring war on Germany: that Hitler could not be allowed to invade his neighbours. But the bombing of Dresden (which for a Kantian involved the treatment of people as a means to an end, something that should never be countenanced) was justified under consequentialist or utilitarian arguments: to bring the war to a swift end.
  • MacIntyre seeks to oppose utilitarianism on the grounds that people are called on by their very nature to be good, not merely to perform acts that can be interpreted as good. The most damaging consequence of the Enlightenment, for MacIntyre, is the decline of the idea of a tradition within which an individual’s desires are disciplined by virtue. And that means being guided by internal rather than external “goods.” So the point of being a good footballer is the internal good of playing beautifully and scoring lots of goals, not the external good of earning a lot of money. The trend away from an Aristotelian perspective has been inexorable: from the empiricism of David Hume, to Darwin’s account of nature driven forward without a purpose, to the sterile analytical philosophy of AJ Ayer and the “demolition of metaphysics” in his 1936 book Language, Truth and Logic.
  • The influential moral philosopher Alasdair MacIntyre has long stood outside the mainstream. Has the financial crisis finally vindicated his critique of global capitalism?
Weiye Loh

Rationally Speaking: The sorry state of higher education - 0 views

  • two disconcerting articles crossed my computer screen, both highlighting the increasingly sorry state of higher education, though from very different perspectives. The first is “Ed Dante’s” (actually a pseudonym) piece in the Chronicle of Higher Education, entitled The Shadow Scholar. The second is Gregory Petsko’s A Faustian Bargain, published of all places in Genome Biology.
  • There is much to be learned by educators in the Shadow Scholar piece, except the moral that “Dante” would like us to take from it. The anonymous author writes: “Pointing the finger at me is too easy. Why does my business thrive? Why do so many students prefer to cheat rather than do their own work? Say what you want about me, but I am not the reason your students cheat.”
  • The point is that plagiarism and cheating happen for a variety of reasons, one of which is the existence of people like Mr. Dante and his company, who set up a business that is clearly unethical and should be illegal. So, pointing fingers at him and his ilk is perfectly reasonable. Yes, there obviously is a “market” for cheating in higher education, and there are complex reasons for it, but he is in a position similar to that of the drug dealer who insists that he is simply providing the commodity to satisfy society’s demand. Much too easy of a way out, and one that doesn’t fly in the case of drug dealers, and shouldn’t fly in the case of ghost cheaters.
  • As a teacher at the City University of New York, I am constantly aware of the possibility that my students might cheat on their tests. I do take some elementary precautionary steps
  • Still, my job is not that of the policeman. My students are adults who theoretically are there to learn. If they don’t value that learning and prefer to pay someone else to fake it, so be it, ultimately it is they who lose in the most fundamental sense of the term. Just like drug addicts, to return to my earlier metaphor. And just as in that other case, it is enablers like Mr. Dante who simply can’t duck the moral blame.
  • An open letter to the president of SUNY-Albany, penned by molecular biologist Gregory Petsko. The SUNY-Albany president has recently announced the closing — for budgetary reasons — of the departments of French, Italian, Classics, Russian and Theater Arts at his university.
  • Petsko begins by taking on one of the alleged reasons why SUNY-Albany is slashing the humanities: low enrollment. He correctly points out that the problem can be solved overnight at the stroke of a pen: stop abdicating your responsibilities as educators and actually put constraints on what your students have to take in order to graduate. Make courses in English literature, foreign languages, philosophy and critical thinking, the arts and so on, mandatory or one of a small number of options that the students must consider in order to graduate.
  • But, you might say, that’s cheating the market! Students clearly don’t want to take those courses, and a business should cater to its customers. That type of reasoning is among the most pernicious and idiotic I’ve ever heard. Students are not clients (if anything, their parents, who usually pay the tuition, are), they are not shopping for a new bag or pair of shoes. They do not know what is best for them educationally, that’s why they go to college to begin with. If you are not convinced about how absurd the students-as-clients argument is, consider an analogy: does anyone with functioning brain cells argue that since patients in a hospital pay a bill, they should be dictating how the brain surgeon operates? I didn’t think so.
  • Petsko then tackles the second lame excuse given by the president of SUNY-Albany (and common among the upper administration of plenty of public universities): I can’t do otherwise because of the legislature’s draconian cuts. Except that university budgets are simply too complicated for there not to be any other option. I know this first hand, I’m on a special committee at my own college looking at how to creatively deal with budget cuts handed down to us from the very same (admittedly small minded and dysfunctional) New York state legislature that has prompted SUNY-Albany’s action. As Petsko points out, the president there didn’t even think of involving the faculty and staff in a broad discussion of how to deal with the crisis, he simply announced the cuts on a Friday afternoon and then ran for cover. An example of very poor leadership to say the least, and downright hypocrisy considering all the talk that the same administrator has been dishing out about the university “community.”
  • Finally, there is the argument that the humanities don’t pay for their own way, unlike (some of) the sciences (some of the time). That is indubitably true, but irrelevant. Universities are not businesses, they are places of higher learning. Yes, of course they need to deal with budgets, fund raising and all the rest. But the financial and administrative side has one goal and one goal only: to provide the best education to the students who attend that university.
  • That education simply must include the sciences, philosophy, literature, and the arts, as well as more technical or pragmatic offerings such as medicine, business and law. Why? Because that’s the kind of liberal education that makes for an informed and intelligent citizenry, without which our democracy is but empty talk, and our lives nothing but slavery to the marketplace.
  • Maybe this is not how education works in the US. I thought that general (or compulsory) education (i.e., up to high school) is designed to make sure that citizens in a democratic country can perform their civic duties. A balanced and well-rounded education, which includes a healthy mixture of science and humanities, is indeed very important for this purpose. However, college-level education is for personal growth and therefore the person must have a large say about what kind of classes he or she chooses to take. I am disturbed by Massimo's hospital analogy. Students are not ill. They don't go to college to be cured, or to be good citizens. They go to college to learn things that *they* want to learn. Patients are passive. Students are not. I agree that students typically do not know what kind of education is good for them. But who does?
  • students do have a say in their education. They pick their major, and there are electives. But I object to the idea that they can customize their major any way they want. That assumes they know what the best education for them is; they don't. That's the point of education.
  • The students are in your class to get a good grade; any learning that takes place is purely incidental. Those good grades will look good on their transcript and might convince a future employer that they are smart and thus are worth paying more.
  • I don't know what the dollar to GPA exchange rate is these days, but I don't doubt that there is one.
  • Just how many of your students do you think will remember the extensive complex jargon of philosophy more than a couple of months after they leave your classroom?
  • “...and our lives nothing but slavery to the marketplace.” We are there. Welcome. Where have you been all this time? In a capitalistic/plutocratic society money is power (and free speech too, according to the Supreme Court). Money means a larger/better house/car/clothing/vacation than your neighbor and consequently better mating opportunities. You can mostly blame the women for that one, I think, just like the peacock's tail.
  • If a student of surgery fails to learn, they might maim, kill or cripple someone. If an engineer of airplanes fails to learn, they might design a faulty aircraft that fails and kills people. If a student of chemistry fails to learn, they might design a faulty drug with unintended and unfortunate side effects. But what exactly would be the harm if a student of philosophy fails to learn what Aristotle had to say about elements or what Plato had to say about perfect forms? These things are so divorced from people's everyday activities as to be rendered all but meaningless.
  • human knowledge grows by leaps and bounds every day, but human brain capacity does not, so the portion of human knowledge you can personally hold gets smaller by the minute. Learn (and remember) as much as you can as fast as you can and you will still lose ground. You certainly have your work cut out for you emphasizing the importance of Thales in the Age of Twitter and whatever follows it next year.
Weiye Loh

In Europe, sharp criticism of US reaction to WikiLeaks - The Boston Globe - 0 views

  • Washington’s fierce reaction to the flood of secret diplomatic cables released by WikiLeaks displays imperial arrogance and hypocrisy, indicating a post-9/11 obsession with secrecy that contradicts American principles.
  • John Naughton, writing in the same British paper, deplored the attack on the openness of the Internet and the pressure on companies such as Amazon and eBay to evict the WikiLeaks site. “The response has been vicious, coordinated and potentially comprehensive,’’ he said, and presents a “delicious irony’’ that “it is now the so-called liberal democracies that are clamoring to shut WikiLeaks down.’’
  • A year ago, he noted, Clinton made a major speech about Internet freedom, interpreted as a rebuke to China’s cyberattack on Google. “Even in authoritarian countries,’’ she said, “information networks are helping people to discover new facts and making governments more accountable.’’ To Naughton now, “that Clinton speech reads like a satirical masterpiece.’’
  • The Russians seemed to take a special delight in tweaking Washington over its reaction to the leaks, suggesting the Americans are being hypocritical. “If it is a full-fledged democracy, then why have they put Assange away in jail? You call that democracy?’’ Prime Minister Vladimir V. Putin said during a news briefing with the French prime minister, Francois Fillon.
  • Even The Financial Times Deutschland (independent of the English-language Financial Times), said that “the already damaged reputation of the United States will only be further tattered with Assange’s new martyr status.’’ It added that “the openly embraced hope of the US government that along with Assange, WikiLeaks will disappear from the scene, is questionable.’’
  • Assange is being hounded, the paper said, “even though no one can explain what crimes Assange allegedly committed with the publication of the secret documents, or why publication by WikiLeaks was an offense, and in The New York Times, it was not.’’
  • But Renaud Girard, a respected reporter for the center-right Le Figaro, said he was impressed by the generally high quality of the American diplomatic corps. “What is most fascinating is that we see no cynicism in US diplomacy,’’ he said. “They really believe in human rights in Africa and China and Russia and Asia. They really believe in democracy and human rights. People accuse the Americans of double standards all the time. But it’s not true here. If anything, the diplomats are almost naive.’’
Weiye Loh

Johann Hari: The Pope, the Prophet, and the religious support for evil - Johann Hari, C... - 0 views

  • What can make tens of millions of people – who are in their daily lives peaceful and compassionate and caring – suddenly want to physically dismember a man for drawing a cartoon, or make excuses for an international criminal conspiracy to protect child-rapists? Not reason. Not evidence. No. But it can happen when people choose their polar opposite – religion.
  • people can begin to behave in bizarre ways when they decide it is a good thing to abandon any commitment to fact and instead act on faith. It has led some to regard people accused of the attempted murders of the Mohamed cartoonists as victims, and to demand "respect" for the Pope, when he should be in a police station being quizzed about his role in covering up and thereby enabling the rape of children.
  • One otherwise liberal newspaper ran an article saying that the cartoonists had engaged in an "aggressive act" and shown "prejudice... against religion per se", and stated menacingly that no doubt "someone else is out there waiting for an opportunity to strike again".
  • if religion wasn't involved – would be so obvious it would seem ludicrous to have to say them out loud. Drawing a cartoon is not an act of aggression. Trying to kill somebody with an axe is. There is no moral equivalence between peacefully expressing your disagreement with an idea – any idea – and trying to kill somebody for it. Yet we have to say this because we have allowed religious people to claim their ideas belong to a different, exalted category, and it is abusive or violent merely to verbally question them. Nobody says I should "respect" conservatism or communism and keep my opposition to them to myself – but that's exactly what is routinely said about Islam or Christianity or Buddhism. What's the difference?
  • By 1962, it was becoming clear to the Vatican that a significant number of its priests were raping children. Rather than root it out, they issued a secret order called "Crimen Sollicitationis" ordering bishops to swear the victims to secrecy and move the offending priest on to another parish. This of course meant they raped more children there, and on and on, in parish after parish.
  • when Ratzinger was Archbishop of Munich in the 1980s, one of his paedophile priests was "reassigned" in this way. He claims he didn't know. Yet a few years later he was put in charge of the Vatican's response to this kind of abuse and demanded every case had to be referred directly to him for 20 years. What happened on his watch, with every case going to his desk? Precisely this pattern, again and again. The BBC's Panorama studied one of many such cases. Father Tarcisio Spricigo was first accused of child abuse in 1991, in Brazil. He was moved by the Vatican four times, wrecking the lives of children at every stop. He was only caught in 2005 by the police, before he could be moved on once more.
  • This enforced 'respect' is a creeping vine: it soon extends from ideas to institutions
Weiye Loh

Science Warriors' Ego Trips - The Chronicle Review - The Chronicle of Higher Education - 0 views

  • By Carlin Romano. Standing up for science excites some intellectuals the way beautiful actresses arouse Warren Beatty, or career liberals boil the blood of Glenn Beck and Rush Limbaugh. It's visceral.
  • A brave champion of beleaguered science in the modern age of pseudoscience, this Ayn Rand protagonist sarcastically derides the benighted irrationalists and glows with a self-anointed superiority. Who wouldn't want to feel that sense of power and rightness?
  • You hear the voice regularly—along with far more sensible stuff—in the latest of a now common genre of science patriotism, Nonsense on Stilts: How to Tell Science From Bunk (University of Chicago Press), by Massimo Pigliucci, a philosophy professor at the City University of New York.
  • it mixes eminent common sense and frequent good reporting with a cocksure hubris utterly inappropriate to the practice it apotheosizes.
  • According to Pigliucci, both Freudian psychoanalysis and Marxist theory of history "are too broad, too flexible with regard to observations, to actually tell us anything interesting." (That's right—not one "interesting" thing.) The idea of intelligent design in biology "has made no progress since its last serious articulation by natural theologian William Paley in 1802," and the empirical evidence for evolution is like that for "an open-and-shut murder case."
  • Pigliucci offers more hero sandwiches spiced with derision and certainty. Media coverage of science is "characterized by allegedly serious journalists who behave like comedians." Commenting on the highly publicized Dover, Pa., court case in which U.S. District Judge John E. Jones III ruled that intelligent-design theory is not science, Pigliucci labels the need for that judgment a "bizarre" consequence of the local school board's "inane" resolution. Noting the complaint of intelligent-design advocate William Buckingham that an approved science textbook didn't give creationism a fair shake, Pigliucci writes, "This is like complaining that a textbook in astronomy is too focused on the Copernican theory of the structure of the solar system and unfairly neglects the possibility that the Flying Spaghetti Monster is really pulling each planet's strings, unseen by the deluded scientists."
  • Or is it possible that the alternate view unfairly neglected could be more like that of Harvard scientist Owen Gingerich, who contends in God's Universe (Harvard University Press, 2006) that it is partly statistical arguments—the extraordinary unlikelihood eons ago of the physical conditions necessary for self-conscious life—that support his belief in a universe "congenially designed for the existence of intelligent, self-reflective life"?
  • Even if we agree that capital "I" and "D" intelligent-design of the scriptural sort—what Gingerich himself calls "primitive scriptural literalism"—is not scientifically credible, does that make Gingerich's assertion, "I believe in intelligent design, lowercase i and lowercase d," equivalent to Flying-Spaghetti-Monsterism? Tone matters. And sarcasm is not science.
  • The problem with polemicists like Pigliucci is that a chasm has opened up between two groups that might loosely be distinguished as "philosophers of science" and "science warriors."
  • Philosophers of science, often operating under the aegis of Thomas Kuhn, recognize that science is a diverse, social enterprise that has changed over time, developed different methodologies in different subsciences, and often advanced by taking putative pseudoscience seriously, as in debunking cold fusion
  • The science warriors, by contrast, often write as if our science of the moment is isomorphic with knowledge of an objective world-in-itself—Kant be damned!—and any form of inquiry that doesn't fit the writer's criteria of proper science must be banished as "bunk." Pigliucci, typically, hasn't much sympathy for radical philosophies of science. He calls the work of Paul Feyerabend "lunacy," deems Bruno Latour "a fool," and observes that "the great pronouncements of feminist science have fallen as flat as the similarly empty utterances of supporters of intelligent design."
  • It doesn't have to be this way. The noble enterprise of submitting nonscientific knowledge claims to critical scrutiny—an activity continuous with both philosophy and science—took off in an admirable way in the late 20th century when Paul Kurtz, of the University at Buffalo, established the Committee for the Scientific Investigation of Claims of the Paranormal (Csicop) in May 1976. Csicop soon after launched the marvelous journal Skeptical Inquirer
  • Although Pigliucci himself publishes in Skeptical Inquirer, his contributions there exhibit his signature smugness. For an antidote to Pigliucci's overweening scientism 'tude, it's refreshing to consult Kurtz's curtain-raising essay, "Science and the Public," in Science Under Siege (Prometheus Books, 2009, edited by Frazier)
  • Kurtz's commandment might be stated, "Don't mock or ridicule—investigate and explain." He writes: "We attempted to make it clear that we were interested in fair and impartial inquiry, that we were not dogmatic or closed-minded, and that skepticism did not imply a priori rejection of any reasonable claim. Indeed, I insisted that our skepticism was not totalistic or nihilistic about paranormal claims."
  • Kurtz combines the ethos of both critical investigator and philosopher of science. Describing modern science as a practice in which "hypotheses and theories are based upon rigorous methods of empirical investigation, experimental confirmation, and replication," he notes: "One must be prepared to overthrow an entire theoretical framework—and this has happened often in the history of science ... skeptical doubt is an integral part of the method of science, and scientists should be prepared to question received scientific doctrines and reject them in the light of new evidence."
  • Pigliucci, alas, allows his animus against the nonscientific to pull him away from sensitive distinctions among various sciences to sloppy arguments one didn't see in such earlier works of science patriotism as Carl Sagan's The Demon-Haunted World: Science as a Candle in the Dark (Random House, 1995). Indeed, he probably sets a world record for misuse of the word "fallacy."
  • To his credit, Pigliucci at times acknowledges the nondogmatic spine of science. He concedes that "science is characterized by a fuzzy borderline with other types of inquiry that may or may not one day become sciences." Science, he admits, "actually refers to a rather heterogeneous family of activities, not to a single and universal method." He rightly warns that some pseudoscience—for example, denial of HIV-AIDS causation—is dangerous and terrible.
  • But at other points, Pigliucci ferociously attacks opponents like the most unreflective science fanatic
  • He dismisses Feyerabend's view that "science is a religion" as simply "preposterous," even though he elsewhere admits that "methodological naturalism"—the commitment of all scientists to reject "supernatural" explanations—is itself not an empirically verifiable principle or fact, but rather an almost Kantian precondition of scientific knowledge. An article of faith, some cold-eyed Feyerabend fans might say.
  • He writes, "ID is not a scientific theory at all because there is no empirical observation that can possibly contradict it. Anything we observe in nature could, in principle, be attributed to an unspecified intelligent designer who works in mysterious ways." But earlier in the book, he correctly argues against Karl Popper that susceptibility to falsification cannot be the sole criterion of science, because science also confirms. It is, in principle, possible that an empirical observation could confirm intelligent design—i.e., that magic moment when the ultimate UFO lands with representatives of the intergalactic society that planted early life here, and we accept their evidence that they did it.
  • "As long as we do not venture to make hypotheses about who the designer is and why and how she operates," he writes, "there are no empirical constraints on the 'theory' at all. Anything goes, and therefore nothing holds, because a theory that 'explains' everything really explains nothing."
  • Here, Pigliucci again mixes up what's likely or provable with what's logically possible or rational. The creation stories of traditional religions and scriptures do, in effect, offer hypotheses, or claims, about who the designer is—e.g., see the Bible.
  • Far from explaining nothing because it explains everything, such an explanation explains a lot by explaining everything. It just doesn't explain it convincingly to a scientist with other evidentiary standards.
  • A sensible person can side with scientists on what's true, but not with Pigliucci on what's rational and possible. Pigliucci occasionally recognizes that. Late in his book, he concedes that "nonscientific claims may be true and still not qualify as science." But if that's so, and we care about truth, why exalt science to the degree he does? If there's really a heaven, and science can't (yet?) detect it, so much the worse for science.
  • Pigliucci quotes a line from Aristotle: "It is the mark of an educated mind to be able to entertain a thought without accepting it." Science warriors such as Pigliucci, or Michael Ruse in his recent clash with other philosophers in these pages, should reflect on a related modern sense of "entertain." One does not entertain a guest by mocking, deriding, and abusing the guest. Similarly, one does not entertain a thought or approach to knowledge by ridiculing it.
  • Long live Skeptical Inquirer! But can we deep-six the egomania and unearned arrogance of the science patriots? As Descartes, that immortal hero of scientists and skeptics everywhere, pointed out, true skepticism, like true charity, begins at home.
  • Carlin Romano, critic at large for The Chronicle Review, teaches philosophy and media theory at the University of Pennsylvania.
  • April 25, 2010: Science Warriors' Ego Trips
Weiye Loh

Review: What Rawls Hath Wrought | The National Interest - 0 views

  • The primacy of this ideal is very recent. In the late 1970s, a full thirty years after World War II, it all came about quite abruptly. And the ascendancy of rights as we now understand them came as a response, in part, to developments in the academy.
  • There were versions of utilitarianism, some scornful of rights (with Jeremy Bentham describing them as “nonsense upon stilts”), others that accepted that rights have important social functions (as in John Stuart Mill), but none of them asserted that rights were fundamental in ethical and political thinking.
  • There were various kinds of historicism—the English thinker Michael Oakeshott’s conservative traditionalism and the American scholar Richard Rorty’s postmodern liberalism, for example—that viewed human values as cultural creations, whose contents varied significantly from society to society. There was British theorist Isaiah Berlin’s value pluralism, which held that while some values are universally human, they conflict with one another in ways that do not always have a single rational solution. There were also varieties of Marxism which understood rights in explicitly historical terms.
  • human rights were discussed—when they were mentioned at all—as demands made in particular times and places. Some of these demands might be universal in scope—that torture be prohibited everywhere was frequently (though not always) formulated in terms of an all-encompassing necessity, but no one imagined that human rights comprised the only possible universal morality.
  • the notion that rights are the foundation of society came only with the rise of the Harvard philosopher John Rawls’s vastly influential A Theory of Justice (1971). In the years following, it slowly came to be accepted that human rights were the bottom line in political morality.
Weiye Loh

Review: What Rawls Hath Wrought | The National Interest - 0 views

  • Almost never used in English before the 1940s, “human rights” were mentioned in the New York Times five times as often in 1977 as in any prior year of the newspaper’s history. By the nineties, human rights had become central to the thinking not only of liberals but also of neoconservatives, who urged military intervention and regime change in the faith that these freedoms would blossom once tyranny was toppled. From being almost peripheral, the human-rights agenda found itself at the heart of politics and international relations.
  • In fact, it has become entrenched in extremis: nowadays, anyone who is skeptical about human rights is angrily challenged
  • The contemporary human-rights movement is demonstrably not the product of a revulsion against the worst crimes of Nazism. For one thing, the Holocaust did not figure in the deliberations that led up to the Universal Declaration of Human Rights adopted by the UN in 1948.
  • Contrary to received history, the rise of human rights had very little to do with the worst crime against humanity ever committed.
Weiye Loh

Book Review: Future Babble by Dan Gardner « Critical Thinking « Skeptic North - 0 views

  • I predict that you will find this review informative. If you do, you will congratulate my foresight. If you don’t, you’ll forget I was wrong.
  • My playful intro summarizes the main thesis of Gardner’s excellent book, Future Babble: Why Expert Predictions Fail – and Why We Believe Them Anyway.
  • In Future Babble, the research area explored is the validity of expert predictions, and the primary researcher examined is Philip Tetlock. In the early 1980s, Tetlock set out to better understand the accuracy of predictions made by experts by conducting a methodologically sound large-scale experiment.
  • Gardner presents Tetlock's experimental design in an excellent way, making it accessible to the lay person. Concisely, Tetlock examined 27,450 judgments in which 284 experts were presented with clear questions whose answers could later be shown to be true or false (e.g., “Will the official unemployment rate be higher, lower or the same a year from now?”). For each prediction, the expert had to answer clearly and express their degree of certainty as a percentage (e.g., dead certain = 100%). The use of precise numbers adds statistical options and removes the complications of vague or ambiguous language.
  • Tetlock found the surprising and disturbing truth “that experts’ predictions were no more accurate than random guesses.” (p. 26) An important caveat is that there was a wide range of capability, with some experts being completely out of touch, and others able to make successful predictions.
  • “What distinguishes the impressive few from the borderline delusional is not whether they’re liberal or conservative. Tetlock’s data showed political beliefs made no difference to an expert’s accuracy. The same is true of optimists and pessimists. It also made no difference if experts had a doctorate, extensive experience, or access to classified information. Nor did it make a difference if experts were political scientists, historians, journalists, or economists.” (p. 26)
  • The experts who did poorly were not comfortable with complexity and uncertainty, and tended to reduce most problems to some core theoretical theme. It was as if they saw the world through one lens or had one big idea that everything else had to fit into. Alternatively, the experts who did decently were self-critical, used multiple sources of information and were more comfortable with uncertainty and correcting their errors. Their thinking style almost results in a paradox: “The experts who were more accurate than others tended to be less confident they were right.” (p.27)
  • Gardner then introduces the terms ‘Hedgehog’ and ‘Fox’ to refer to bad and good predictors respectively. Hedgehogs are the ones you see pushing the same idea, while Foxes are likely in the background questioning the ability of prediction itself while making cautious proposals. Foxes are more likely to be correct. Unfortunately, it is Hedgehogs that we see on the news.
  • one of Tetlock’s findings was that “the bigger the media profile of an expert, the less accurate his predictions.” (p.28)
  • Chapter 2 – The Unpredictable World: An exploration into how many events in the world are simply unpredictable. Gardner discusses chaos theory and necessary and sufficient conditions for events to occur. He supports the idea of actually saying “I don’t know,” which many experts are reluctant to do.
  • Chapter 3 – In the Minds of Experts: A more detailed examination of Hedgehogs and Foxes. Gardner discusses randomness and the illusion of control while using narratives to illustrate his points à la Gladwell. This chapter provides a lot of context and background information that should be very useful to those less initiated.
  • Chapter 6 – Everyone Loves a Hedgehog: More about predictions and how the media picks up hedgehog stories and talking points without much investigation into their underlying source or concern for accuracy. It is a good demolition of the absurdity of so many news “discussion shows.” Gardner demonstrates how the media prefer a show where Hedgehogs square off against each other, and it is important that these commentators not be challenged lest they become exposed and, by association, implicate the flawed structure of the program/network. Gardner really singles out certain people, like Paul Ehrlich, and shows how they have been wrong many times and yet can still get an audience.
  • “An assertion that cannot be falsified by any conceivable evidence is nothing more than dogma. It can’t be debated. It can’t be proven or disproven. It’s just something people choose to believe or not for reasons that have nothing to do with fact and logic. And dogma is what predictions become when experts and their followers go to ridiculous lengths to dismiss clear evidence that they failed.”
Weiye Loh

Roger Pielke Jr.'s Blog: Ideological Diversity in Academia - 0 views

  • Jonathan Haidt's talk (above) at the annual meeting of the Society for Personality and Social Psychology was written up last week in a column by John Tierney in the NY Times. This was soon followed by a dismissal of the work by Paul Krugman. The entire sequence is interesting, but for me the best part, and the one that gets to the nub of the issue, is Haidt's response to Krugman: "My research, like so much research in social psychology, demonstrates that we humans are experts at using reasoning to find evidence for whatever conclusions we want to reach. We are terrible at searching for contradictory evidence. Science works because our peers are so darn good at finding that contradictory evidence for us. Social science — at least my corner of it — is broken because there is nobody to look for contradictory evidence regarding sacralized issues, particularly those related to race, gender, and class. I urged my colleagues to increase our ideological diversity not for any moral reason, but because it will make us better scientists. You do not have that problem in economics where the majority is liberal but there is a substantial and vocal minority of libertarians and conservatives. Your field is healthy, mine is not. Do you think I was wrong to call for my professional organization to seek out a modicum of ideological diversity?"
  • On a related note, the IMF review of why the institution failed to warn of the global financial crisis identified a lack of intellectual diversity as being among the factors responsible (PDF): "Several cognitive biases seem to have played an important role. Groupthink refers to the tendency among homogeneous, cohesive groups to consider issues only within a certain paradigm and not challenge its basic premises (Janis, 1982). The prevailing view among IMF staff—a cohesive group of macroeconomists—was that market discipline and self-regulation would be sufficient to stave off serious problems in financial institutions. They also believed that crises were unlikely to happen in advanced economies, where “sophisticated” financial markets could thrive safely with minimal regulation of a large and growing portion of the financial system." Everyone in academia has seen similar dynamics at work.
Weiye Loh

takchek (读书 ): When Scientific Research and Higher Education become just Poli... - 0 views

  • A mere two years after the passage of the economic stimulus package, the now Republican-controlled House of Representatives has started swinging its budget-cutting axe at scientific research and higher education. One point stood out in the midst of all this "fiscal responsibility" talk: "The House bill does not specify cuts to five of the Office of Science's six programs, namely, basic energy sciences, high-energy physics, nuclear physics, fusion energy sciences, and advanced scientific computing. However, it explicitly whacks funding for the biological and environmental research program from $588 million to $302 million, a 49% reduction that would effectively zero out the program for the remainder of the year. The program supports much of DOE's climate and bioenergy research and in the past has funded much of the federal government's work on decoding the human genome." - Science, 25 February 2011: Vol. 331 no. 6020, pp. 997-998, DOI: 10.1126/science.331.6020.997. Do the terms Big Oil and Creationism/Intelligent Design come to your mind?
  • In other somewhat related news, tenure rights are being weakened in Louisiana and state legislatures are trying to have greater control over how colleges are run. It is hard not to see that there seems to be a coordinated assault on academia (presumably since many academics are seen by the Republican right as leftist liberals). Lawmakers are inserting themselves even more directly into the classroom in South Carolina, where a proposal would require professors to teach a minimum of nine credit hours per semester. "I think we need to have professors in the classroom and not on sabbatical and out researching and doing things to that effect," State Rep. Murrell G. Smith Jr., a Republican, told the Associated Press. I think they are attempting to turn research universities into trade/vocational schools.
Weiye Loh

How the Internet Gets Inside Us : The New Yorker - 0 views

  • N.Y.U. professor Clay Shirky—the author of “Cognitive Surplus” and many articles and blog posts proclaiming the coming of the digital millennium—is the breeziest and seemingly most self-confident
  • Shirky believes that we are on the crest of an ever-surging wave of democratized information: the Gutenberg printing press produced the Reformation, which produced the Scientific Revolution, which produced the Enlightenment, which produced the Internet, each move more liberating than the one before.
  • The idea, for instance, that the printing press rapidly gave birth to a new order of information, democratic and bottom-up, is a cruel cartoon of the truth. If the printing press did propel the Reformation, one of the biggest ideas it propelled was Luther’s newly invented absolutist anti-Semitism. And what followed the Reformation wasn’t the Enlightenment, a new era of openness and freely disseminated knowledge. What followed the Reformation was, actually, the Counter-Reformation, which used the same means—i.e., printed books—to spread ideas about what jerks the reformers were, and unleashed a hundred years of religious warfare.
  • If ideas of democracy and freedom emerged at the end of the printing-press era, it wasn’t by some technological logic but because of parallel inventions, like the ideas of limited government and religious tolerance, very hard won from history.
  • As Andrew Pettegree shows in his fine new study, “The Book in the Renaissance,” the mainstay of the printing revolution in seventeenth-century Europe was not dissident pamphlets but royal edicts, printed by the thousand: almost all the new media of that day were working, in essence, for kinglouis.gov.
  • Even later, full-fledged totalitarian societies didn’t burn books. They burned some books, while keeping the printing presses running off such quantities that by the mid-fifties Stalin was said to have more books in print than Agatha Christie.
  • Many of the more knowing Never-Betters turn for cheer not to messy history and mixed-up politics but to psychology—to the actual expansion of our minds.
  • The argument, advanced in Andy Clark’s “Supersizing the Mind” and in Robert K. Logan’s “The Sixth Language,” begins with the claim that cognition is not a little processing program that takes place inside your head, Robby the Robot style. It is a constant flow of information, memory, plans, and physical movements, in which as much thinking goes on out there as in here. If television produced the global village, the Internet produces the global psyche: everyone keyed in like a neuron, so that to the eyes of a watching Martian we are really part of a single planetary brain. Contraptions don’t change consciousness; contraptions are part of consciousness. We may not act better than we used to, but we sure think differently than we did.
  • Cognitive entanglement, after all, is the rule of life. My memories and my wife’s intermingle. When I can’t recall a name or a date, I don’t look it up; I just ask her. Our machines, in this way, become our substitute spouses and plug-in companions.
  • But, if cognitive entanglement exists, so does cognitive exasperation. Husbands and wives deny each other’s memories as much as they depend on them. That’s fine until it really counts (say, in divorce court). In a practical, immediate way, one sees the limits of the so-called “extended mind” clearly in the mob-made Wikipedia, the perfect product of that new vast, supersized cognition: when there’s easy agreement, it’s fine, and when there’s widespread disagreement on values or facts, as with, say, the origins of capitalism, it’s fine, too; you get both sides. The trouble comes when one side is right and the other side is wrong and doesn’t know it. The Shakespeare authorship page and the Shroud of Turin page are scenes of constant conflict and are packed with unreliable information. Creationists crowd cyberspace every bit as effectively as evolutionists, and extend their minds just as fully. Our trouble is not the over-all absence of smartness but the intractable power of pure stupidity, and no machine, or mind, seems extended enough to cure that.
  • Nicholas Carr, in “The Shallows,” William Powers, in “Hamlet’s BlackBerry,” and Sherry Turkle, in “Alone Together,” all bear intimate witness to a sense that the newfound land, the ever-present BlackBerry-and-instant-message world, is one whose price, paid in frayed nerves and lost reading hours and broken attention, is hardly worth the gains it gives us. “The medium does matter,” Carr has written. “As a technology, a book focuses our attention, isolates us from the myriad distractions that fill our everyday lives. A networked computer does precisely the opposite. It is designed to scatter our attention. . . . Knowing that the depth of our thought is tied directly to the intensity of our attentiveness, it’s hard not to conclude that as we adapt to the intellectual environment of the Net our thinking becomes shallower.”
  • Carr is most concerned about the way the Internet breaks down our capacity for reflective thought.
  • Powers’s reflections are more family-centered and practical. He recounts, very touchingly, stories of family life broken up by the eternal consultation of smartphones and computer monitors
  • He then surveys seven Wise Men—Plato, Thoreau, Seneca, the usual gang—who have something to tell us about solitude and the virtues of inner space, all of it sound enough, though he tends to overlook the significant point that these worthies were not entirely in favor of the kinds of liberties that we now take for granted and that made the new dispensation possible.
  • Similarly, Nicholas Carr cites Martin Heidegger for having seen, in the mid-fifties, that new technologies would break the meditational space on which Western wisdoms depend. Since Heidegger had not long before walked straight out of his own meditational space into the arms of the Nazis, it’s hard to have much nostalgia for this version of the past. One feels the same doubts when Sherry Turkle, in “Alone Together,” her touching plaint about the destruction of the old intimacy-reading culture by the new remote-connection-Internet culture, cites studies that show a dramatic decline in empathy among college students, who apparently are “far less likely to say that it is valuable to put oneself in the place of others or to try and understand their feelings.” What is to be done?
  • Among Ever-Wasers, the Harvard historian Ann Blair may be the most ambitious. In her book “Too Much to Know: Managing Scholarly Information Before the Modern Age,” she makes the case that what we’re going through is like what others went through a very long while ago. Against the cartoon history of Shirky or Tooby, Blair argues that the sense of “information overload” was not the consequence of Gutenberg but already in place before printing began. She wants us to resist “trying to reduce the complex causal nexus behind the transition from Renaissance to Enlightenment to the impact of a technology or any particular set of ideas.” Anyway, the crucial revolution was not of print but of paper: “During the later Middle Ages a staggering growth in the production of manuscripts, facilitated by the use of paper, accompanied a great expansion of readers outside the monastic and scholastic contexts.” For that matter, our minds were altered less by books than by index slips. Activities that seem quite twenty-first century, she shows, began when people cut and pasted from one manuscript to another; made aggregated news in compendiums; passed around précis. “Early modern finding devices” were forced into existence: lists of authorities, lists of headings.
  • Everyone complained about what the new information technologies were doing to our minds. Everyone said that the flood of books produced a restless, fractured attention. Everyone complained that pamphlets and poems were breaking kids’ ability to concentrate, that big good handmade books were ignored, swept aside by printed works that, as Erasmus said, “are foolish, ignorant, malignant, libelous, mad.” The reader consulting a card catalogue in a library was living a revolution as momentous, and as disorienting, as our own.
  • The book index was the search engine of its era, and needed to be explained at length to puzzled researchers
  • That uniquely evil and necessary thing the comprehensive review of many different books on a related subject, with the necessary oversimplification of their ideas that it demanded, was already around in 1500, and already being accused of missing all the points. In the period when many of the big, classic books that we no longer have time to read were being written, the general complaint was that there wasn’t enough time to read big, classic books.
  • at any given moment, our most complicated machine will be taken as a model of human intelligence, and whatever media kids favor will be identified as the cause of our stupidity. When there were automatic looms, the mind was like an automatic loom; and, since young people in the loom period liked novels, it was the cheap novel that was degrading our minds. When there were telephone exchanges, the mind was like a telephone exchange, and, in the same period, since the nickelodeon reigned, moving pictures were making us dumb. When mainframe computers arrived and television was what kids liked, the mind was like a mainframe and television was the engine of our idiocy. Some machine is always showing us Mind; some entertainment derived from the machine is always showing us Non-Mind.
Weiye Loh

Asia Times Online :: Southeast Asia news and business from Indonesia, Philippines, Thai... - 0 views

  • Internet-based news websites and the growing popularity of social media have broken the mainstream media's monopoly on news - though not completely. Singapore's PAP-led government was one of the first in the world to devise content regulations for the Internet, issuing restrictions on topics it deemed as sensitive as early as 1996.
  • While political parties are broadly allowed to use the Internet to campaign, they were previously prohibited from employing some of the medium's most powerful features, including live audio and video streaming and so-called "viral marketing". Websites not belonging to political parties or candidates but registered as political sites have been banned from activities that could be considered online electioneering.
  • George argued that despite the growing influence of online media, it would be naive to conclude that the PAP's days of domination are numbered. "While the government appears increasingly liberal towards individual self-expression, it continues to intervene strategically at points at which such expression may become politically threatening," he said. "It is safe to assume that the government's digital surveillance capabilities far outstrip even its most technologically competent opponent's evasive abilities."
  • consistent with George's analysis, authorities last week relaxed past regulations that limited the use of the Internet and social media for election campaigning. Political parties and candidates will be allowed to use a broader range of new media platforms, including blogs, micro-blogs, online photo-sharing platforms, social networking sites and electronic media applications used on mobile phones, for election advertising. The loosening, however, only applies to political party-run websites, chat rooms and online discussion forums. Candidates must declare the new media content they intend to use within 12 hours after the start of the election campaign period. George warned in a recent blog entry that the new declaration requirements could open the way for PAP-led defamation suits against opposition politicians using new media. PAP leaders have historically relied on expensive litigation to suppress opposition and media criticism. "The PAP won't subject everyone's postings to legal scrutiny. But if it decides that a particular opposition politician needs to be utterly demolished, you can bet that no tweet of his would be too tiny, no Facebook update too fleeting ... in order to build the case against the individual," George warned in a journalism blog.
  • While opposition politicians will rely more on new than mainstream media to communicate with voters, they already recognize that the use of social media will not necessarily translate into votes. "[Online support] can give a too rosy a picture and false degree of comfort," said the RP's Jeyaretnam. "People who [interact with] us online are those who are already convinced with our messages anyway."
Weiye Loh

Fridae | Simon Fujiwara: Censored at the Singapore Biennale 2011 - 0 views

  • Fujiwara mailed me with this news: his work’s been censored. All the erotica’s been removed, rendering it, in his words, “meaningless, almost a tribute to Franco in the end”. The curators and managers didn’t even consult him or seek his permission to alter the piece – they simply altered it without his consent. What’s even more disgusting is the fact that they waited two weeks, until all the Biennale’s international guests had left the country, before they leapt into action. This way, they could appear liberal to foreign journalists while ultimately preserving a conservative front for Singaporean audiences.
Weiye Loh

Hunch Blog | Blog Archive | You've got mail: What your email domain says about you - 0 views

  • AOL users are most likely to be overweight women ages 35-64 who have a high school diploma and are spiritual, but not religious. They tend to be politically middle of the road, in a relationship of 10+ years, and have children. AOL users live in the suburbs and haven’t traveled outside their own country. Family is their first priority. AOL users mostly read magazines, have a desktop computer, listen to the radio, and watch TV on 1-3 DVRs in their home. At home, they lounge around in sweats. AOL users are optimistic extroverts who prefer sweet snacks and like working on a team.
  • Gmail users are most likely to be thin young men ages 18-34 who are college-educated and not religious. Like other young Hunch users, they tend to be politically liberal, single (and ready to mingle), and childless. Gmail users live in cities and have traveled to five or more countries. They’re career-focused and plugged in — they mostly read blogs, have an iPhone and laptop, and listen to music via MP3s and computers (but they don’t have a DVR). At home, they lounge around in a t-shirt and jeans. Gmail users prefer salty snacks and are introverted and entrepreneurial. They are optimistic or pessimistic, depending on the situation.
  • Hotmail users are most likely to be young women of average build ages 18-34 (and younger) who have a high school diploma and are not religious. They tend to be politically middle of the road, single, and childless. Hotmail users live in the suburbs, perhaps still with their parents, and have traveled to up to five countries. They mostly read magazines and contemporary fiction, have a laptop, and listen to music via MP3s and computers (but they don’t have a DVR). At home, Hotmail users lounge around in a t-shirt and jeans. They’re introverts who prefer sweet snacks and like working on a team. They consider themselves more pessimistic, but sometimes it depends on the situation.
  • Yahoo! users are most likely to be overweight women ages 18-49 who have a high school diploma and are spiritual, but not religious. They tend to be politically middle of the road, in a relationship of 1-5 years, and have children. Yahoo! users live in the suburbs or in rural areas and haven’t traveled outside their own country. Family is their first priority. They mostly read magazines, are almost equally likely to have a laptop or desktop computer, listen to the radio and CDs, and watch TV on 1-2 DVRs in their home. At home, Yahoo! users lounge around in pajamas. They’re extroverts who prefer sweet snacks and like working on a team. Yahoo! users are optimistic or pessimistic, depending on the situation.
Weiye Loh

Rationally Speaking: Is modern moral philosophy still in thrall to religion? - 0 views

  • Recently I re-read Richard Taylor’s An Introduction to Virtue Ethics, a classic published by Prometheus
  • Taylor compares virtue ethics to the other two major approaches to moral philosophy: utilitarianism (a la John Stuart Mill) and deontology (a la Immanuel Kant). Utilitarianism, of course, is roughly the idea that ethics has to do with maximizing pleasure and minimizing pain; deontology is the idea that reason can tell us what we ought to do from first principles, as in Kant’s categorical imperative (e.g., something is right if you can agree that it could be elevated to a universally acceptable maxim).
  • Taylor argues that utilitarianism and deontology — despite being wildly different in a variety of respects — share one common feature: both philosophies assume that there is such a thing as moral right and wrong, and a duty to do right and avoid wrong. But, he says, on the face of it this is nonsensical. Duty isn’t something one can have in the abstract; duty is toward a law or a lawgiver, which raises the question of what could arguably provide us with a universal moral law, or who the lawgiver could possibly be.
  • His answer is that both utilitarianism and deontology inherited the ideas of right, wrong and duty from Christianity, but endeavored to do without Christianity’s own answers to those questions: the law is given by God and the duty is toward Him. Taylor says that Mill, Kant and the like simply absorbed the Christian concept of morality while rejecting its logical foundation (such as it was). As a result, utilitarians and deontologists alike keep talking about the right thing to do, or the good, as if those concepts still make sense once we move to a secular worldview. Utilitarians substituted pain and pleasure for wrong and right respectively, and Kant thought that pure reason can arrive at moral universals. But of course neither utilitarians nor deontologists ever give us a reason why it would be irrational to simply decline to pursue actions that increase global pleasure and diminish global pain, or why it would be irrational for someone not to find the categorical imperative particularly compelling.
  • The situation — again according to Taylor — is dramatically different for virtue ethics. Yes, there too we find concepts like right and wrong and duty. But for the ancient Greeks they had completely different meanings, which made perfect sense then and still make sense now, if we are not misled by the use of those words in a different context. For the Greeks, an action was right if it was approved by one’s society, wrong if it wasn’t, and duty was to one’s polis. And they understood perfectly well that what was right (or wrong) in Athens may or may not be right (or wrong) in Sparta. And that an Athenian had a duty to Athens, but not to Sparta, and vice versa for a Spartan.
  • But wait a minute. Does that mean that Taylor is saying that virtue ethics was founded on moral relativism? That would be an extraordinary claim indeed, and he does not, in fact, make it. His point is a bit more subtle. He suggests that for the ancient Greeks ethics was not (principally) about right, wrong and duty. It was about happiness, understood in the broad sense of eudaimonia, the good or fulfilling life. Aristotle in particular wrote in his Ethics about both aspects: the practical ethics of one’s duty to one’s polis, and the universal (for human beings) concept of ethics as the pursuit of the good life. And make no mistake about it: for Aristotle the first aspect was relatively trivial and understood by everyone; it was the second one that represented the real challenge for the philosopher.
  • For instance, the Ethics is famous for Aristotle’s list of the virtues (see the table below), and his idea that the right thing to do is to steer a middle course between extreme behaviors. But this part of his work, according to Taylor, refers only to the practical ways of being a good Athenian, not to the universal pursuit of eudaimonia.
    Vice of Deficiency / Virtuous Mean / Vice of Excess
    Cowardice / Courage / Rashness
    Insensibility / Temperance / Intemperance
    Illiberality / Liberality / Prodigality
    Pettiness / Munificence / Vulgarity
    Humble-mindedness / High-mindedness / Vaingloriness
    Want of Ambition / Right Ambition / Over-ambition
    Spiritlessness / Good Temper / Irascibility
    Surliness / Friendly Civility / Obsequiousness
    Ironical Depreciation / Sincerity / Boastfulness
    Boorishness / Wittiness / Buffoonery
  • How, then, is one to embark on the more difficult task of figuring out how to live a good life? For Aristotle eudaimonia meant the best kind of existence that a human being can achieve, which in turn means that we need to ask what it is that makes humans different from all other species, because it is the pursuit of excellence in that something that provides for a eudaimonic life.
  • Now, Plato - writing before Aristotle - ended up construing the good life somewhat narrowly and in a self-serving fashion. He reckoned that the thing that distinguishes humanity from the rest of the biological world is our ability to use reason, so that is what we should be pursuing as our highest goal in life. And of course nobody is better equipped than a philosopher for such an enterprise... Which reminds me of Bertrand Russell’s quip that “A process which led from the amoeba to man appeared to the philosophers to be obviously a progress, though whether the amoeba would agree with this opinion is not known.”
  • But Aristotle's conception of "reason" was significantly broader, and here is where Taylor’s own update of virtue ethics begins to shine, particularly in Chapter 16 of the book, aptly entitled “Happiness.” Taylor argues that the proper way to understand virtue ethics is as the quest for the use of intelligence in the broadest possible sense, in the sense of creativity applied to all walks of life. He says: “Creative intelligence is exhibited by a dancer, by athletes, by a chess player, and indeed in virtually any activity guided by intelligence [including — but certainly not limited to — philosophy].” He continues: “The exercise of skill in a profession, or in business, or even in such things as gardening and farming, or the rearing of a beautiful family, all such things are displays of creative intelligence.”
  • what we have now is a sharp distinction between utilitarianism and deontology on the one hand and virtue ethics on the other, where the first two are (mistakenly, in Taylor’s assessment) concerned with the impossible question of what is right or wrong, and what our duties are — questions inherited from religion but that in fact make no sense outside of a religious framework. Virtue ethics, instead, focuses on the two things that really matter and to which we can find answers: the practical pursuit of a life within our polis, and the lifelong quest of eudaimonia understood as the best exercise of our creative faculties
  • > So if one's profession is that of assassin or torturer, would being the best that you can be still be your duty and eudaimonic? And what about those poor blighters who end up with an ugly family? < Aristotle's philosophy is very much concerned with virtue, and being an assassin or a torturer is not a virtue, so the concept of a eudaimonic life for those characters is oxymoronic. As for ending up in an "ugly" family, Aristotle did write that eudaimonia is in part the result of luck, because it is affected by circumstances.
  • > So to the title question of this post: "Is modern moral philosophy still in thrall to religion?" one should say: Yes, for some residual forms of philosophy and for some philosophers < That misses Taylor's contention - which I find intriguing, though I have to give it more thought - that *all* modern moral philosophy, except virtue ethics, is in thrall to religion, without realizing it.
Weiye Loh

Rationally Speaking: A different kind of moral relativism - 0 views

  • Prinz’s basic stance is that moral values stem from our cognitive hardware, upbringing, and social environment. These equip us with deep-seated moral emotions, but those emotions express themselves in contingent ways depending on cultural circumstances. And while reason can help, its influence is limited: it can reshape our ethics only up to a point, and it cannot settle major differences between different value systems. Therefore, it is difficult, if not impossible, to construct an objective morality that transcends emotions and circumstance.
  • As Prinz writes, in part:“No amount of reasoning can engender a moral value, because all values are, at bottom, emotional attitudes. … Reason cannot tell us which facts are morally good. Reason is evaluatively neutral. At best, reason can tell us which of our values are inconsistent, and which actions will lead to fulfillment of our goals. But, given an inconsistency, reason cannot tell us which of our conflicting values to drop or which goals to follow. If my goals come into conflict with your goals, reason tells me that I must either thwart your goals, or give up caring about mine; but reason cannot tell me to favor one choice over the other. … Moral judgments are based on emotions, and reasoning normally contributes only by helping us extrapolate from our basic values to novel cases. Reasoning can also lead us to discover that our basic values are culturally inculcated, and that might impel us to search for alternative values, but reason alone cannot tell us which values to adopt, nor can it instill new values.”
  • This moral relativism is not the absolute moral relativism of, supposedly, bands of liberal intellectuals, or of postmodernist philosophers. It presents a more serious challenge to those who argue there can be objective morality. To be sure, there is much Prinz and I agree on. At the least, we agree that morality is largely constructed by our cognition, upbringing, and social environment; and that reason has the power to synthesize and clarify our worldviews, and to help us plan for and react to life’s situations.
  • Suppose I concede to Prinz that reason cannot settle differences in moral values and sentiments. Difference of opinion doesn’t mean that there isn’t a true or rational answer. In fact, there are many reasons why our cognition, emotional reactions or previous values could be wrong or irrational — and why people would not pick up on their deficiencies. In his article, Prinz uses the case of sociopaths, who simply lack certain cognitive abilities. There are many reasons other than sociopathy why human beings can get things wrong, morally speaking, often and badly. It could be that people are unable to adopt a more objective morality because of their circumstances — from brain deficiencies to lack of access to relevant information. But, again, none of this amounts to an argument against the existence of objective morality.
  • As it turns out, Prinz’s conception of objective morality does not quite reflect the thinking of most people who believe in objective morality. He writes that: “Objectivism holds that there is one true morality binding upon all of us.” This is a particular strand of moral realism, but there are many. For instance, one can judge some moral precepts as better than others, yet remain open to the fact that there are probably many different ways to establish a good society. This is a pluralistic conception of objective morality which doesn’t assume one absolute moral truth. For all that has been said, Sam Harris’ idea of a moral landscape does help illustrate this concept. Thinking in terms of better and worse morality gets us out of relativism and into an objectivist approach. The important thing to note is that one need not go all the way to absolute objectivity to work toward a rational, non-arbitrary morality.
  • even Prinz admits that “Relativism does not entail that we should tolerate murderous tyranny. When someone threatens us or our way of life, we are strongly motivated to protect ourselves.” That is, there are such things as better and worse values: the worse ones kill us, the better ones don’t. This is a very broad criterion, but it is an objective standard. Prinz is arguing for a tighter moral relativism – a sort of stripped down objective morality that is constricted by nature, experience, and our (modest) reasoning abilities.
  • I proposed at the discussion that a more objective morality could be had with the help of a robust public discourse on the issues at hand. Prinz does not necessarily disagree. He wrote that “Many people have overlapping moral values, and one can settle debates by appeal to moral common ground.” But Prinz pointed out a couple of limitations on public discourse. For example, the agreements we reach on “moral common ground” are often exclusive of some, and abstract in content. Consider the United Nations Declaration of Human Rights, a seemingly good example of global moral agreement. Yet it was ratified by a small sample of 48 countries, and it is based on suspiciously Western-sounding language. Everyone has a right to education and health care, but — Prinz pointed out during the discussion — what level of education and health care? Still, the U.N. declaration was passed 48-0 with just 8 abstentions (Belarus, Czechoslovakia, Poland, Ukraine, USSR, Yugoslavia, South Africa and Saudi Arabia). It includes 30 articles of ethical standards agreed upon by 48 countries around the world. Such a document does give us more reason to think that public discourse can lead to significant agreement upon values.
  • Reason might not be able to arrive at moral truths, but it can push us to test and question the rationality of our values — a crucial part in the process that leads to the adoption of new, or modified values. The only way to reduce disputes about morality is to try to get people on the same page about their moral goals. Given the above, this will not be easy, and perhaps we shouldn’t be too optimistic in our ability to employ reason to figure things out. But reason is still the best, and even only, tool we can wield, and while it might not provide us with a truly objective morality, it’s enough to save us from complete moral relativism.
Weiye Loh

Why YouTube Adopting Creative Commons Is a Big Deal | Online Video News - 0 views

  • Creative Commons-licensed videos can be found from within YouTube’s video editor through a special CC tab. These videos can then be trimmed, combined with other clips and synchronized to music, just like users have been able to do with their own uploads ever since YouTube launched its video editor a year ago. “It’s as if all the Creative Commons videos were part of your personal library,” explained Product Manager Jason Toff when I talked to him on the phone yesterday.
  • YouTube’s catalog of Creative Commons clips is being seeded with more than 10,000 videos from partners like C-SPAN, Voice of America and Al-Jazeera. Users also now have the ability to publish any of their own videos under CC-BY simply by selecting the license as an option during the upload process.
  • CC-BY only requires that users credit the original videographer, and YouTube is automating this process by adding links to the original work next to every mashup video. Toff said that the site might add additional Creative Commons licenses in the future if there was strong demand for it.
  • Creative Commons has in the past been struggling with the fact that the majority of users tends to adopt more restrictive licenses. The organization estimated that two out of three Creative Commons-licensed works can’t be reused commercially, and one out of four can’t be reincorporated into a new work at all.
  • CC-BY, on the other hand, allows commercial reuse as well. This doesn’t just open up new revenue opportunities for YouTube and its producers; it also makes it possible to reuse these videos in a much wider variety of contexts. Wikipedia, for example, demands that any videos posted to its site can be reused commercially. Combine that with the fact that YouTube has been converting its entire catalog into the open source WebM format, and there’s little reason why tens of thousands of Creative Commons-licensed YouTube videos shouldn’t show up on Wikipedia any day now.
Weiye Loh

First principles of justice: Rights and wrongs | The Economist - 0 views

  • Mr Sandel illustrates the old classroom chestnut—is it ever right to kill one innocent person to save the lives of several others?—with a horrifying dilemma from Afghanistan in 2005. A four-man American unit on reconnaissance behind lines stumbled on a shepherd likely, if let go, to betray them to the Taliban. They could not hold him prisoner. Nor, on moral grounds, would the serviceman in charge kill him. Released, the shepherd alerted the Taliban, who surrounded the unit. Three were killed along with 16 Americans in a rescue helicopter. The soldier in command, who lived, called his decision “stupid, lamebrained and southern-fried”. Which was right, his earlier refusal or his later regret?
  • He returns also to an old charge against the late John Rawls. In “Liberalism and the Limits of Justice” (1982) Mr Sandel argued that Rawls’s celebrated account of social justice downplayed the moral weight of family feeling, group loyalties and community attachments. He repeats those “communitarian” charges here.
Weiye Loh

Rationally Speaking: Don't blame free speech for the murders in Afghanistan - 0 views

  • The most disturbing example of this response came from the head of the U.N. Assistance Mission in Afghanistan, Staffan de Mistura, who said, “I don't think we should be blaming any Afghan. We should be blaming the person who produced the news — the one who burned the Koran. Freedom of speech does not mean freedom of offending culture, religion, traditions.” I was not going to comment on this monumentally inane line of thought, especially since Susan Jacoby, Michael Tomasky, and Mike Labossiere have already done such a marvelous job of it. But then I discovered, to my shock, that several of my liberal, progressive American friends actually agreed that Jones has some sort of legal and moral responsibility for what happened in Afghanistan
  • I believe he has neither. Here is why. Unlike many countries in the Middle East and Europe that punish blasphemy by fine, jail or death, the U.S., via the First Amendment and a history of court decisions, strongly protects freedom of speech and expression as basic and fundamental human rights. These include critiquing and offending other citizens’ culture, religion, and traditions. Such rights are not supposed to be swayed by peoples' subjective feelings, which form an incoherent and arbitrary basis for lawmaking. In a free society, if and when a person is offended by an argument or act, he or she has every right to argue and act back. If a person commits murder, the answer is not to limit the right; the answer is to condemn and punish the murderer for overreacting.
  • Of course, there are exceptions to this rule. Governments have an interest in condemning certain speech that provokes immediate hatred of or violence against people. The canonical example is yelling “fire!” in a packed room when there in fact is no fire, since this creates a clear and imminent danger for those inside the room. But Jones did not create such an environment, nor did he intend to. Jones (more precisely, Wayne Sapp) merely burned a book in a private ceremony in protest of its contents. Indeed, the connection between Jones and the murders requires many links in-between. The mob didn’t kill those accountable, or even Americans.
  • But even if there is no law prohibiting Jones’ action, isn’t he morally to blame for creating the environment that led to the murders? Didn’t he know Muslims would riot, and people might die? It seems ridiculous to assume that Jones could know such a thing, even if parts of the Muslim world have a poor track record in this area. But imagine for a moment that Jones did know Muslims would riot, and people would die. This does not make the act of burning a book and the act of murder morally equivalent, nor does it make the book burner responsible for reactions to his act. In and of itself, burning a book is a morally neutral act. Why would this change because some misguided individuals think book burning is worth the death penalty? And why is it that so many have automatically assumed the reaction to be respectable? To use an example nearer to some of us, recall when PZ Myers desecrated a communion wafer. If some Christian was offended, and went on to murder the closest atheist, would we really blame Myers? Is Myers' offense any different than Jones’?
  • the deep-seated belief among many that blasphemy is wrong. This means any reaction to blasphemy is less wrong, and perhaps even excused, compared to the blasphemous offense. Even President Obama said that, "The desecration of any holy text, including the Koran, is an act of extreme intolerance and bigotry.” To be sure, Obama went on to denounce the murders, and to state that burning a holy book is no excuse for murder. But Obama apparently couldn’t condemn the murders without also condemning Jones’ act of religious defiance.
  • As it turns out, this attitude is exactly what created the environment that led to murders in the first place. The members of the mob believed that religious belief should be free from public critical inquiry, and that a person who offends religious believers should face punishment. In the absence of official prosecution, they took matters into their own hands and sought anyone on the side of the offender. It didn’t help that Afghan leaders stoked the flames of hatred — but they only did so because they agreed with the mob’s sentiment to begin with. Afghan President Hamid Karzai said the U.S. should punish those responsible, and three well-known Afghan mullahs urged their followers to take to the streets and protest to call for the arrest of Jones
Weiye Loh

BioCentre - 0 views

  • Humanity’s End. The main premise of the book is that proposals which supposedly promise to make us smarter than ever before, or to add thousands of years to our lives, seem rather far-fetched and the domain of mere fantasy. However, it is these very proposals which form the basis of many of the ideas and thoughts presented by advocates of radical enhancement, and which are beginning to move from the sidelines to the centre of mainstream discussion. A variety of technologies and therapies are being presented to us as options to expand our capabilities and capacities in order for us to become something other than human.
  • Agar takes issue with this and argues against radical human enhancement. He structures his analysis and discussion around four key figures whose proposals form the core of the case for radical enhancement. First to be examined by Agar is Ray Kurzweil, who argues that Man and Machine will become one as technology allows us to transcend our biology. Second is Aubrey de Grey, a passionate advocate and pioneer of anti-ageing therapies that would allow us to achieve “longevity escape velocity”. Next is Nick Bostrom, a leading transhumanist who defends the morality and rationality of enhancement, and finally James Hughes, a keen advocate of a harmonious democracy of the enhanced and un-enhanced.
  • He avoids falling into any of the pitfalls of basing his argument solely upon the “playing God” question, and instead seeks to posit a well-founded argument in favour of the precautionary principle.
  • Agar directly tackles Hughes’ idea of a “democratic transhumanism”, in which post-humans and humans live shoulder to shoulder in wonderful harmony and all persons have access to the technologies they want in order to promote their own flourishing. Undergirding all of this is the belief that no human should feel pressured to become enhanced. Agar finds no comfort in this and instead foresees a situation in which it would be very difficult for humans to ‘choose’ to remain human. The pressure to radically enhance would be considerable, given that the radically enhanced would no doubt occupy the positions of power in society and would regard the full use of enhancement techniques as a moral imperative for the good of society. For those able to withstand that pressure, a new underclass would no doubt emerge, dividing the enhanced from the un-enhanced. This is precisely the kind of society which Hughes is overly optimistic will not emerge, but which is more akin to Lee Silver’s prediction of a future split between the "GenRich" and the “naturals”. This being the case, the author proposes that we have two options: radical enhancement is either enforced across the board or banned outright. It is the latter option which Agar favours, but crucially he does not elaborate further, so it is unclear how he would attempt such a ban given the complexity of the issue. This is disappointing, as any general initial reflections the author felt able to offer would have added to the discussion and strengthened his line of argument.
  • A Transhuman Manifesto. The final focus for Agar is James Hughes, who published his transhumanist manifesto Citizen Cyborg in 2004. Given the direct connection with politics and public policy, this was for me a particularly interesting read. The basic premise of Hughes’ argument is that once humans and post-humans recognise each other as citizens, this will mark the point at which they will be able to get along with each other.
  • Agar takes to task the argument Bostrom made with Toby Ord concerning claims against enhancement. Bostrom and Ord argue that such claims boil down to a preference for the status quo; current human intellects and life spans are preferred and deemed best because they are what we have now and what we are familiar with (p. 134). Agar argues that, in his view, Bostrom falls into focalism – focusing on and magnifying the positives whilst ignoring the negative implications. Moreover, Agar goes on to develop and reiterate his earlier point that the sort of radical enhancements Bostrom et al. enthusiastically support and promote take us beyond what is human, so that the resulting beings are no longer human. It therefore cannot be said to be human enhancement, given that the traits or capacities such enhancement affords us would be in many respects superior to ours, but they would not be ours.
  • With his law of accelerating returns and talk of the Singularity, Ray Kurzweil proposes that we are speeding towards a time when our outdated systems of neurons and synapses will be traded for far more efficient electronic circuits, allowing us to become artificially super-intelligent and to transfer our minds from brains into machines.
  • Having laid out the main ideas and thinking behind Kurzweil’s proposals, Agar makes the perceptive comment that despite the apparent appeal of greater processing power, the result would no longer be human. Introducing chips to the human body and linking the human nervous system to computers, as per Ray Kurzweil’s proposals, will prove interesting, but it goes beyond merely creating a copy of us so that future replication and uploading can take place. Rather, it will constitute something more akin to an upgrade. The electrochemical signals that the brain uses to achieve thought travel at 100 metres per second. This is impressive, but contrast it with the electrical signals in a computer, which travel at 300 million metres per second, and the distinction is clear. If the predictions are true, how will such radically enhanced and empowered beings live alongside the unenhanced, and what will their quality of life really be? In response, Agar favours what he calls “rational biological conservatism” (pg. 57), where we set limits on how intelligent we can become, in light of the fact that it will never be rational for human beings to completely upload their minds onto computers.
  • Agar then proceeds to argue that in the pursuit of Kurzweil’s enhanced capacities and capabilities we might accidentally undermine capacities of equal value. This line of argument would find much sympathy from those who consider human organisms in “ecological” terms, representing a profound interconnectedness which, when interfered with, presents a series of unknown and unexpected consequences. In other words, our species-specific form of intelligence may well be linked to a species-specific form of desire. Thus, if we start building upon and enhancing our capacity to protect and promote deeply held convictions and beliefs, then, due to this interconnectedness, we may well affect and remove our desire to perform such activities (page 70). Agar’s subsequent discussion and reference to the work of Jerry Fodor, philosopher and cognitive scientist, is particularly helpful on the functioning of the mind by modules and the implications of human-friendly AI versus human-unfriendly AI.
  • In terms of the author’s discussion of Aubrey de Grey, what is refreshing to read from the outset is the author’s clear grasp of de Grey’s ideas and motivation. Some make the mistake of thinking he is the man who wants to live forever, when in actual fact this is not the case. De Grey wants to reverse the ageing process through Strategies for Engineered Negligible Senescence (SENS), so that people live longer and healthier lives. Establishing this clear distinction affords the author the opportunity to offer more grounded critiques of de Grey’s ideas than some of his other critics manage. The author makes plain that de Grey’s immediate goal is to achieve longevity escape velocity (LEV), where anti-ageing therapies add years to life expectancy faster than age consumes them.
  • In weighing up the benefits of living significantly longer lives, Agar posits a compelling argument that I had not fully seen before. In terms of risk, those radically enhanced to live longer may actually be the most risk-averse and fearful people alive. Take the example of driving a car: a forty-year-old senescing human being who gets into their car to drive to work and is involved in a fatal accident “stands to lose, at most, a few healthy, youthful years and a slightly larger number of years with reduced quality” (p. 116). In stark contrast, a negligibly senescent being who drives a car and is involved in an accident resulting in their death stands to lose on average one thousand healthy, youthful years (p. 116).
  • De Grey’s response to this seems a little flippant: with the end of ageing comes an increased sense of risk-aversion, so the desire for risky activities such as driving will no longer be prevalent. Moreover, because we are living for longer, we will not be in such a hurry to get to places! Virtual reality comes into its own at this point as a means by which the negligibly senescent ‘adrenaline junkie’ can engage in such activities without the associated risks. But surely the risk is part of the reason why they would want to engage in snowboarding, bungee jumping et al. in the first place. De Grey’s strategy seemingly fails to appreciate the extent to which human beings want “direct” contact with the “real” world.
  • Continuing this idea further, Agar’s subsequent discussion of the role of fire-fighters is an interesting one. A negligibly senescent fire-fighter may stand to lose more when trapped in a burning inferno, but being negligibly senescent also makes them a better fire-fighter by virtue of increased vitality. Having recently heard de Grey speak and had the privilege of discussing his ideas further with him, I found Agar’s discussion of de Grey a particular highlight of the book; it made for an engaging read. Whilst expressing concern and doubt about de Grey’s ideas, Agar is nevertheless quick and gracious enough to acknowledge that if such therapies could be achieved, then de Grey is probably the best person to comment on and pursue them, given the depth of knowledge and understanding he has built up in this area.