New Media Ethics 2009 course: Group items matching "Rationality" in title, tags, annotations or url
Weiye Loh

Rationally Speaking: Response to Jonathan Haidt's response, on the academy's liberal bias - 0 views

  • Dear Prof. Haidt, you understandably got upset by my harsh criticism of your recent claims about the mechanisms behind the alleged anti-conservative bias that apparently so permeates the modern academy. I find it amusing that you simply assumed I had not looked at your talk and was therefore speaking without reason. Yet, I have indeed looked at it (it is currently published at Edge, a non-peer reviewed webzine), and found that it simply doesn’t add much to the substance (such as it is) of Tierney’s summary.
  • Yes, you do acknowledge that there may be multiple reasons for the imbalance between the number of conservative- and liberal-leaning academics, but then you go on to characterize the academy, at least in your field, as a tribe having a serious identity issue, with no data whatsoever to back up your preferred subset of causal explanations for the purported problem.
  • your talk is simply an extended op-ed piece, which starts out with a summary of your findings about the different moral outlooks of conservatives and liberals (which I have criticized elsewhere on this blog), and then proceeds to build a flimsy case based on a couple of anecdotes and some badly flawed data.
  • ...4 more annotations...
  • For instance, slide 23 shows a Google search for “liberal social psychologist,” highlighting the fact that one gets a whopping 2,740 results (which, actually, by Google standards is puny; a search under my own name yields 145,000, and I ain’t no Lady Gaga). You then compared this search to one for “conservative social psychologist” and get only three entries.
  • First of all, if Google searches are the main tool of social psychology these days, I fear for the entire field. Second, I actually re-did your searches — at the prompting of one of my readers — and came up with quite different results. As the photo here shows, if you actually bother to scroll through the initial Google search for “liberal social psychologist” you will find that there are in fact only 24 results, to be compared to 10 (not 3) if you search for “conservative social psychologist.” Oops. From this scant data I would simply conclude that political orientation isn’t a big deal in social psychology.
  • Your talk continues with some pretty vigorous hand-waving: “We rely on our peers to find flaws in our arguments, but when there is essentially nobody out there to challenge liberal assumptions and interpretations of experimental findings, the peer review process breaks down, at least for work that is related to those sacred values.” Right, except that I would like to see a systematic survey of exactly how the lack of conservative peer review has affected the quality of academic publications. Oh, wait, it hasn’t, at least according to what you yourself say in the next sentence: “The great majority of work in social psychology is excellent, and is unaffected by these problems.” I wonder how you know this, and why — if true — you then think that there is a problem. Philosophers call this an inherent contradiction; it’s a common example of a bad argument.
  • Finally, let me get to your outrage at the fact that I have allegedly accused you of academic misconduct and lying. I have done no such thing, and you really ought (in the ethical sense) to be careful when throwing those words around. I have simply raised the logical possibility that you (and Tierney) have an agenda, a possibility based on reading several of the things both you and Tierney have written of late. As a psychologist, I’m sure you are aware that biases can be unconscious, and therefore need not imply that the person in question is lying or engaging in any form of purposeful misconduct. Or were you implying in your own talk that your colleagues’ bias was conscious? Because if so, you have just accused an entire profession of misconduct.
Weiye Loh

Rationally Speaking: Double podcast teaser! Vegetarianism and the relationship between science and values - 0 views

  • Vegetarianism: is it a good idea? Vegetarianism is a complex set of beliefs and practices, spanning from the extreme “fruitarianism,” where people only eat fruits and other plant parts that can be gathered without “harming” the plant (though I’m sure the plant would rather keep its fruits and use them for the evolutionary purpose of dispersing its own offspring) to various forms of “flexitarianism,” like pollotarianism (poultry is okay to eat) and pescetarianism (fish is okay).
  • Is it true that a vegetarian diet improves one’s health? Yes, but only in certain respects, partly because vegetarians also tend to be health conscious in general (they exercise, don’t smoke, drink less, etc.). And it does not hold for the more extreme versions (including veganism), where one needs to be extremely careful to achieve a balanced diet, which may need to be supplemented artificially, especially for growing children.
  • What is the ethical case for vegetarianism? Again, the answer is complex. It seems hard to logically defend fruitarianism, and borderline to make a moral argument for veganism, but broader forms of vegetarianism certainly get at important issues of suffering and mistreatment of both animals and industry workers, not to mention that the environmental impact of meat eating is much more damaging than that of vegetarianism. And so the debate rages on.
  • ...6 more annotations...
  • Value-free science? Many scientists think that science is about objectivity and “just the facts, ma’am.” Not so fast, philosophers, historians and sociologists of science have argued now for a number of decades. While I certainly have no sympathy for the extreme postmodernist position exemplified by the so-called “strong programme” in sociology of science — that science is entirely the result of social construction — there are several interesting and delicate facets of the problem to explore.
  • there are values embedded in the practice of science itself: testability, accuracy, generality, simplicity, and the like. Needless to say, few if any of these can be justified within science itself — there is no experiment confirming Occam’s razor, for instance.
  • Then there are the many moral dimensions of science practice, both in terms of ethical issues internal to science (fraud) and of the much broader ones affecting society at large (societal consequences of research and technological advances).
  • There is also the issue of diversity in science. Until very recently, and in many fields still today, science has largely been an affair conducted by white males. And this has historically resulted in a large amount of nonsense — say about gender differences, or ethnic differences — put forth as objective knowledge and accepted by the public because it has the imprimatur of science. But, you might say, that was the past, now we have corrected the errors and moved on. Except that such an argument ignores the fact that there is little reason to think that only we have gotten it just right, that the current generation is somehow immune from an otherwise uninterrupted history of science-based blunders.
  • Regarding Occam's Razor, there is a justification for it based on probability theory (a short sketch of the Bayesian argument follows these annotations); see:
    http://www.johndcook.com/blog/2011/01/12/occams-razor-bayes-theorem/
    http://telescoper.wordpress.com/2011/02/19/bayes-razor/
    http://www.stat.duke.edu/~berger/papers/ockham.html
  • another interesting dimension of the relationship between values and science concerns which scientific questions we should pursue (and, often, fund with public money). Scientists often act as if they ought to be the only arbiters here, and talk as if some questions were “obviously” intrinsically important. But when your research is costly and paid for by the public, perhaps society deserves a bit more of an explanation concerning why millions of dollars ought to be spent on obscure problems that apparently interest only a handful of university professors concentrated in one or a few countries.
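As a gloss on the Occam's Razor pointer above (a sketch only, not a substitute for the linked posts): in Bayesian model comparison the marginal likelihood acts as an automatic razor. For data D and competing models M_1 (simple) and M_2 (complex), in LaTeX notation:

\frac{P(M_1 \mid D)}{P(M_2 \mid D)} = \frac{P(D \mid M_1)}{P(D \mid M_2)} \cdot \frac{P(M_1)}{P(M_2)}, \qquad P(D \mid M_i) = \int P(D \mid \theta_i, M_i)\, P(\theta_i \mid M_i)\, d\theta_i .

Unless the extra parameters of the more complex model are genuinely needed to fit D, its prior P(\theta_i \mid M_i) is diluted over a larger parameter space, so its evidence P(D \mid M_i), and with it its posterior probability, comes out lower. This is the "Bayes razor" the second link develops.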
Weiye Loh

Random Thoughts Of A Free Thinker: The TCM vs. Western medicine debate -- a philosophical and political debate? - 0 views

  • there is a sub-field within the study of philosophy that looks at what should qualify as valid or certain knowledge. And one main divide in this sub-field would perhaps be the divide between empiricism and rationalism. Proponents of the former generally argue that only what can be observed by the senses should qualify as valid knowledge while proponents of the latter are more sceptical about sensory data since such data can be "false" (for example, optical illusions) and instead argue that valid knowledge should be knowledge that is congruent with reason.
  • Another significant divide in this sub-field would be the divide between positivism/scientism and non-positivism/scientism. Essentially, proponents of the former argue that only knowledge that is congruent with scientific reasoning or that can be scientifically proven should qualify as valid knowledge. In contrast, proponents of non-positivism/scientism are of the stance that although scientific knowledge may indeed be a form of valid knowledge, it is not the only form of valid knowledge; knowledge derived from other sources or methods may be just as valid.
  • Evidently, the latter divide is relevant to this debate over the validity of TCM, or alternative medicine in general, as a form of medical treatment vis-a-vis Western medicine, in that the general impression is perhaps that while Western medicine is scientifically proven, TCM is not as scientifically proven. And thus, to those who abide by the stance of positivism/scientism, this will imply that TCM, or alternative medicine in general, is not as valid or reliable a form of medical treatment as Western medicine. On the other hand, as can be seen from the letters written in to the ST Forum to defend TCM, there are those who will argue that although TCM may not be as scientifically proven, this does not imply that it is not a valid or reliable form of medical treatment.
  • ...6 more annotations...
  • Of course, while there are similarities between the positions adopted in the "positivism/scientism versus non-positivism/scientism" and "Western medicine versus alternative medicine" debates, I suppose that one main difference is that the latter is not just a theoretical debate but involves people's health and lives.
  • As was mentioned earlier, the general impression is perhaps that while Western medicine, which generally has its roots in Western societies, is scientifically proven, TCM, or alternative medicine, is however not as scientifically proven. The former is thus regarded as the dominant mainstream model of medical treatment while non-Western medical knowledge or treatment is regarded as "alternative medicine".
  • The process by which the above impression was created was, according to the postcolonial theorists, a highly political one. Essentially, it may be argued that along with their political colonisation of non-European territories in the past, the European/Western colonialists also colonised the minds of those living in those territories. This means that along with colonisation, traditional forms of knowledge, including medical knowledge, and cultures in the colonised territories were relegated to a non-dominant, if not inferior, position vis-a-vis Western knowledge and culture. And as postcolonial theorists may argue, the legacy and aftermath of this process is still felt today and efforts should be made to reverse it.
  • In light of the above, the increased push to have non-Western forms of medical treatment be recognised as an equally valid model of medical treatment besides that of Western medicine may be seen as part of the effort to reverse the dominance of Western knowledge and culture set in place during the colonial period. Of course, this push to reverse Western dominance is especially relevant in recent times, in light of the economic and political rise of non-Western powers such as China and India (interestingly enough, to the best of my knowledge, when talking about "alternative medicine", people are usually referring to traditional Indian or Chinese medical treatments and not really traditional African medical treatment).
  • Here, it is worthwhile to pause and think for a while: if it is recognised that Western and non-Western medicine are different but equally valid models of medical treatment, would they be complementary or competing models? Or would they be just different models?
  • Moving on, so far it would seem that, at least for the foreseeable future, Western medicine will retain its dominant "mainstream" position, but who knows what the future may hold?
Weiye Loh

Rationally Speaking: Is modern moral philosophy still in thrall to religion? - 0 views

  • Recently I re-read Richard Taylor’s An Introduction to Virtue Ethics, a classic published by Prometheus
  • Taylor compares virtue ethics to the other two major approaches to moral philosophy: utilitarianism (a la John Stuart Mill) and deontology (a la Immanuel Kant). Utilitarianism, of course, is roughly the idea that ethics has to do with maximizing pleasure and minimizing pain; deontology is the idea that reason can tell us what we ought to do from first principles, as in Kant’s categorical imperative (e.g., something is right if you can agree that it could be elevated to a universally acceptable maxim).
  • Taylor argues that utilitarianism and deontology — despite being wildly different in a variety of respects — share one common feature: both philosophies assume that there is such a thing as moral right and wrong, and a duty to do right and avoid wrong. But, he says, on the face of it this is nonsensical. Duty isn’t something one can have in the abstract, duty is toward a law or a lawgiver, which begs the question of what could arguably provide us with a universal moral law, or who the lawgiver could possibly be.
  • ...11 more annotations...
  • His answer is that both utilitarianism and deontology inherited the ideas of right, wrong and duty from Christianity, but endeavored to do without Christianity’s own answers to those questions: the law is given by God and the duty is toward Him. Taylor says that Mill, Kant and the like simply absorbed the Christian concept of morality while rejecting its logical foundation (such as it was). As a result, utilitarians and deontologists alike keep talking about the right thing to do, or the good, as if those concepts still make sense once we move to a secular worldview. Utilitarians substituted pain and pleasure for wrong and right respectively, and Kant thought that pure reason can arrive at moral universals. But of course neither utilitarians nor deontologists ever give us a reason why it would be irrational to simply decline to pursue actions that increase global pleasure and diminish global pain, or why it would be irrational for someone not to find the categorical imperative particularly compelling.
  • The situation — again according to Taylor — is dramatically different for virtue ethics. Yes, there too we find concepts like right and wrong and duty. But, for the ancient Greeks they had completely different meanings, which made perfect sense then and now, if we are not misled by the use of those words in a different context. For the Greeks, an action was right if it was approved by one’s society, wrong if it wasn’t, and duty was to one’s polis. And they understood perfectly well that what was right (or wrong) in Athens may or may not be right (or wrong) in Sparta. And that an Athenian had a duty to Athens, but not to Sparta, and vice versa for a Spartan.
  • But wait a minute. Does that mean that Taylor is saying that virtue ethics was founded on moral relativism? That would be an extraordinary claim indeed, and he does not, in fact, make it. His point is a bit more subtle. He suggests that for the ancient Greeks ethics was not (principally) about right, wrong and duty. It was about happiness, understood in the broad sense of eudaimonia, the good or fulfilling life. Aristotle in particular wrote in his Ethics about both aspects: the practical ethics of one’s duty to one’s polis, and the universal (for human beings) concept of ethics as the pursuit of the good life. And make no mistake about it: for Aristotle the first aspect was relatively trivial and understood by everyone, it was the second one that represented the real challenge for the philosopher.
  • For instance, the Ethics is famous for Aristotle’s list of the virtues (see table), and his idea that the right thing to do is to steer a middle course between extreme behaviors. But this part of his work, according to Taylor, refers only to the practical ways of being a good Athenian, not to the universal pursuit of eudaimonia.

    Vice of Deficiency    | Virtuous Mean     | Vice of Excess
    ----------------------|-------------------|---------------
    Cowardice             | Courage           | Rashness
    Insensibility         | Temperance        | Intemperance
    Illiberality          | Liberality        | Prodigality
    Pettiness             | Munificence       | Vulgarity
    Humble-mindedness     | High-mindedness   | Vaingloriness
    Want of Ambition      | Right Ambition    | Over-ambition
    Spiritlessness        | Good Temper       | Irascibility
    Surliness             | Friendly Civility | Obsequiousness
    Ironical Depreciation | Sincerity         | Boastfulness
    Boorishness           | Wittiness         | Buffoonery
  • How, then, is one to embark on the more difficult task of figuring out how to live a good life? For Aristotle eudaimonia meant the best kind of existence that a human being can achieve, which in turn means that we need to ask what it is that makes humans different from all other species, because it is the pursuit of excellence in that something that provides for a eudaimonic life.
  • Now, Plato - writing before Aristotle - ended up construing the good life somewhat narrowly and in a self-serving fashion. He reckoned that the thing that distinguishes humanity from the rest of the biological world is our ability to use reason, so that is what we should be pursuing as our highest goal in life. And of course nobody is better equipped than a philosopher for such an enterprise... Which reminds me of Bertrand Russell’s quip that “A process which led from the amoeba to man appeared to the philosophers to be obviously a progress, though whether the amoeba would agree with this opinion is not known.”
  • But Aristotle's conception of "reason" was significantly broader, and here is where Taylor’s own update of virtue ethics begins to shine, particularly in Chapter 16 of the book, aptly entitled “Happiness.” Taylor argues that the proper way to understand virtue ethics is as the quest for the use of intelligence in the broadest possible sense, in the sense of creativity applied to all walks of life. He says: “Creative intelligence is exhibited by a dancer, by athletes, by a chess player, and indeed in virtually any activity guided by intelligence [including — but certainly not limited to — philosophy].” He continues: “The exercise of skill in a profession, or in business, or even in such things as gardening and farming, or the rearing of a beautiful family, all such things are displays of creative intelligence.”
  • what we have now is a sharp distinction between utilitarianism and deontology on the one hand and virtue ethics on the other, where the first two are (mistakenly, in Taylor’s assessment) concerned with the impossible question of what is right or wrong, and what our duties are — questions inherited from religion but that in fact make no sense outside of a religious framework. Virtue ethics, instead, focuses on the two things that really matter and to which we can find answers: the practical pursuit of a life within our polis, and the lifelong quest of eudaimonia understood as the best exercise of our creative faculties
  • > So if one's profession is that of assassin or torturer would being the best that you can be still be your duty and eudaimonic? And what about those poor blighters who end up with an ugly family? <
    Aristotle's philosophy is very much concerned with virtue, and being an assassin or a torturer is not a virtue, so the concept of a eudaimonic life for those characters is oxymoronic. As for ending up in an "ugly" family, Aristotle did write that eudaimonia is in part the result of luck, because it is affected by circumstances.
  • > So to the title question of this post: "Is modern moral philosophy still in thrall to religion?" one should say: Yes, for some residual forms of philosophy and for some philosophers <
    That misses Taylor's contention - which I find intriguing, though I have to give it more thought - that *all* modern moral philosophy, except virtue ethics, is in thrall to religion, without realizing it.
  • “The exercise of skill in a profession, or in business, or even in such things as gardening and farming, or the rearing of a beautiful family, all such things are displays of creative intelligence.” So if one's profession is that of assassin or torturer would being the best that you can be still be your duty and eudaimonic? And what about those poor blighters who end up with an ugly family?
Weiye Loh

New Statesman - What we can learn from Harold Camping - 0 views

  • In all areas of life, people will often go to extraordinary lengths to maintain prior beliefs in the face of evidence to the contrary.
  • Apocalypse Now is a much more interesting prospect than Apocalypse Some Time in the Distant Future.
  • If the Bible is both true and complete, it follows that it ought to be possible to decode it and so work out when the End will come.
  • ...1 more annotation...
  • Each new prophet can explain why his prediction is going to come true where all previous predictions (sometimes including his own) have not.
Weiye Loh

Evaluating The Evidence for Cell Phones and WiFi « Critical Thinking « Skeptic North - 0 views

  • The “weight of evidence” approach to evaluation of causality is often vilified by cell phone and WiFi scare mongers as being an inadequate way to judge the evidence – often because it disagrees with their own sentiments about the science. If you can’t disqualify the evidence, then you can go after the method of evaluation and disqualify that, right? Of course, the weight of evidence approach is often portrayed as a dumbshow of putting all the “positive” trials on one side of the scale and all of the “negative” trials on the other and taking the difference in mass as the evidence. This is how Dr. Phillips characterised it in his paper on electromagnetic fields and DNA damage, as well as his appearance on CBC Radio. Of course, the procedure is much more like a systematic review, where all of the papers, regardless of their outcomes, are weighed for their quality. (The higher quality studies will have good internal and external validity, proper blinding and randomisation, large enough sample size, proper controls and good statistical analysis, as well as being reproduced by independent investigators.) Then they are tallied and a rational conclusion is offered as to the most likely state of the evidence (of course, it is much more involved than I am stating, but suffice it to say, it does not involve a scale). This is standard operating procedure and, in fact, is what we all do when we are evaluating evidence: we decide which studies are good and we pool the evidence before we make a decision. (A toy illustration of this kind of quality-weighted pooling appears after these annotations.)
  •  
    In many discussions of the "dangers" of WiFi and cell phones, the precautionary principle is evoked. It is the idea that we have "an obligation, if the level of harm may be high, for action to prevent or minimise such harm even when the absence of scientific certainty makes it difficult to predict the likelihood of harm occurring, or the level of harm should it occur." It is important to note that the precautionary principle or approach is required when we do not have a scientific consensus or if we have a lack of scientific certainty. It is used often in European regulation of potential health and environmental hazards. "Scientific certainty" is an important clause here, because it does not mean 100% certainty. Science can never give that absolute a result, and if we required 100% certainty of no risk, we would not walk out our front doors or even get out of bed, lest we have a mishap.
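To make the "weighing, not counting" point above concrete, here is a minimal sketch of fixed-effect, inverse-variance pooling in Python. The study labels, effect sizes, and standard errors are invented for illustration; real systematic reviews also grade blinding, controls, and reproducibility, which no toy script captures.

import math

# Toy fixed-effect meta-analysis: pool study estimates with inverse-variance
# weights, so more precise (typically higher-quality) studies count for more.
# All numbers below are invented purely for illustration.
studies = [
    # (label, effect estimate, standard error)
    ("Study A (large, well blinded)", 0.02, 0.05),
    ("Study B (medium)",              0.10, 0.12),
    ("Study C (small, no controls)",  0.45, 0.30),
]

weights = [1.0 / se ** 2 for _, _, se in studies]
pooled = sum(w * eff for (_, eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled effect: {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
# The noisy outlier (Study C) barely moves the pooled estimate: weighing
# evidence by precision is not the same as counting positive vs negative trials.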
Weiye Loh

Rationally Speaking: What do I think of Wikipedia? - 0 views

  • Scholarpedia. I know, this is probably the first time you've heard of it, and I must admit that I never use it myself, but it is an open access peer reviewed encyclopedia, curated by Dr. Eugene M. Izhikevich, associated with an outlet called the Brain Corporation, out in San Diego, CA. Don’t know anything more about it (even Wikipedia doesn’t have an article on that!).
  • I go to Wikipedia at least some of the time to use it as a starting point, a convenient trampoline to be used — together with Google (and, increasingly, Google Scholar) — to get an initial foothold into areas with which I am a bit less familiar. However, I don’t use that information for my writings (professional or for the general public) unless I actually check the sources and/or have independent confirmation of whatever it is that I found potentially interesting in the relevant Wikipedia article. This isn’t a matter of academic snobbism; it’s rather a question of sensibly covering your ass — which is the same advice I give to my undergraduate students (my graduate students better not be using Wiki for anything substantial at all, the peer reviewed Stanford Encyclopedia of Philosophy being a manyfold better source across the board).
Weiye Loh

Does Anything Matter? by Peter Singer - Project Syndicate - 0 views

  • Although this view of ethics has often been challenged, many of the objections have come from religious thinkers who appealed to God’s commands. Such arguments have limited appeal in the largely secular world of Western philosophy. Other defenses of objective truth in ethics made no appeal to religion, but could make little headway against the prevailing philosophical mood.
  • Many people assume that rationality is always instrumental: reason can tell us only how to get what we want, but our basic wants and desires are beyond the scope of reasoning. Not so, Parfit argues. Just as we can grasp the truth that 1 + 1 = 2, so we can see that I have a reason to avoid suffering agony at some future time, regardless of whether I now care about, or have desires about, whether I will suffer agony at that time. We can also have reasons (though not always conclusive reasons) to prevent others from suffering agony. Such self-evident normative truths provide the basis for Parfit’s defense of objectivity in ethics.
  • One major argument against objectivism in ethics is that people disagree deeply about right and wrong, and this disagreement extends to philosophers who cannot be accused of being ignorant or confused. If great thinkers like Immanuel Kant and Jeremy Bentham disagree about what we ought to do, can there really be an objectively true answer to that question? Parfit’s response to this line of argument leads him to make a claim that is perhaps even bolder than his defense of objectivism in ethics. He considers three leading theories about what we ought to do – one deriving from Kant, one from the social-contract tradition of Hobbes, Locke, Rousseau, and the contemporary philosophers John Rawls and T.M. Scanlon, and one from Bentham’s utilitarianism – and argues that the Kantian and social-contract theories must be revised in order to be defensible.
  • ...3 more annotations...
  • he argues that these revised theories coincide with a particular form of consequentialism, which is a theory in the same broad family as utilitarianism. If Parfit is right, there is much less disagreement between apparently conflicting moral theories than we all thought. The defenders of each of these theories are, in Parfit’s vivid phrase, “climbing the same mountain on different sides.”
  • Parfit’s real interest is in combating subjectivism and nihilism. Unless he can show that objectivism is true, he believes, nothing matters.
  • When Parfit does come to the question of “what matters,” his answer might seem surprisingly obvious. He tells us, for example, that what matters most now is that “we rich people give up some of our luxuries, ceasing to overheat the Earth’s atmosphere, and taking care of this planet in other ways, so that it continues to support intelligent life.” Many of us had already reached that conclusion. What we gain from Parfit’s work is the possibility of defending these and other moral claims as objective truths.
  •  
    Can moral judgments be true or false? Or is ethics, at bottom, a purely subjective matter, for individuals to choose, or perhaps relative to the culture of the society in which one lives? We might have just found out the answer. Among philosophers, the view that moral judgments state objective truths has been out of fashion since the 1930's, when logical positivists asserted that, because there seems to be no way of verifying the truth of moral judgments, they cannot be anything other than expressions of our feelings or attitudes. So, for example, when we say, "You ought not to hit that child," all we are really doing is expressing our disapproval of your hitting the child, or encouraging you to stop hitting the child. There is no truth to the matter of whether or not it is wrong for you to hit the child.
Weiye Loh

Rationally Speaking: On ethics, part III: Deontology - 0 views

  • Plato showed convincingly in his Euthyphro dialogue that even if gods existed they would not help at all in settling the question of morality.
  • Broadly speaking, deontological approaches fall into the same category as consequentialism — they are concerned with what we ought to do, as opposed to what sort of persons we ought to be (the latter is, most famously, the concern of virtue ethics). That said, deontology is the chief rival of consequentialism, and the two have distinct advantages and disadvantages that seem so irreducible
  • Here is one way to understand the difference between consequentialism and deontology: for the former the consequences of an action are moral if they increase the Good (which, as we have seen, can be specified in different ways, including increasing happiness and/or decreasing pain). For the latter, the fundamental criterion is conformity to moral duties. You could say that for the deontologist the Right (sometimes) trumps the Good. Of course, as a result consequentialists have to go through the trouble of defining and justifying the Good, while deontologists have to tackle the task of defining and justifying the Right.
  • ...10 more annotations...
  • two major “modes” of deontology: agent-centered and victim-centered. Agent-centered deontology is concerned with permissions and obligations to act toward other agents, the typical example being parents’ duty to protect and nurture their children. Notice the immediate departure from consequentialism, here, since the latter is an agent-neutral type of ethics (we have seen that it has trouble justifying the idea of special treatment of relatives or friends). Where do such agent-relative obligations come from? From the fact that we make explicit or implicit promises to some agents but not others. By bringing my child into the world, for instance, I make a special promise to that particular individual, a promise that I do not make to anyone else’s children. While this certainly doesn’t mean that I don’t have duties toward other children (like inflicting no intentional harm), it does mean that I have additional duties toward my own children as a result of the simple fact that they are mine.
  • Agent-centered deontology gets into trouble because of its close philosophical association to some doctrines that originated within Catholic theology, like the idea of double effect. (I should immediately clarify that the trouble is not due to the fact that these doctrines are rooted in a religious framework, it’s their intrinsic moral logic that is at issue here.) For instance, for agent-centered deontologists we are morally forbidden from killing innocent others (reasonably enough), but this prohibition extends even to cases when so doing would actually save even more innocents.
  • Those familiar with trolleology will recognize one of the classic forms of the trolley dilemma here: is it right to throw an innocent person in front of the out of control trolley in order to save five others? For consequentialists the answer is a no-brainer: of course yes, you are saving a net of four lives! But for the deontologist you are now using another person (the innocent you are throwing to stop the trolley) as a means to an end, thus violating one of the forms of Kant’s imperative:“Act in such a way that you treat humanity, whether in your own person or in the person of any other, always at the same time as an end and never merely as a means to an end.”
  • The other form, in case you are wondering, is: “Act only according to that maxim whereby you can at the same time will that it should become a universal law without contradiction.”
  • Victim-centered deontologies are right- rather than duty-based, which of course does raise the question of why we think of them as deontological to begin with.
  • The fundamental idea about victim-centered deontology is the right that people have not to be used by others without their consent. This is where we find Robert Nozick-style libertarianism, which I have already criticized on this blog. One of the major implications of this version of deontology is that there is no strong moral duty to help others.
  • contractarian deontological theories. These deal with social contracts of the type, for instance, discussed by John Rawls in his theory of justice. However, I will devote a separate post to contractarianism, in part because it is so important in ethics, and in part because one can argue that contractarianism is really a meta-ethical theory, and therefore does not strictly fall under deontology per se.
  • deontological theories have the advantage over consequentialism in that they account for special concerns for one’s relatives and friends, as we have seen above. Consequentialism, by comparison, comes across as alienating and unreasonably demanding. Another advantage of deontology over consequentialism is that it accounts for the intuition that even if an act is not morally demanded it may still be praiseworthy. For a consequentialist, on the contrary, if something is not morally demanded it is then morally forbidden. (Another way to put this is that consequentialism is a more minimalist approach to ethics than deontology.) Moreover, deontology also deals much better than consequentialism with the idea of rights.
  • deontological theories run into the problem that they seem to give us permission, and sometimes even require us, to make things actually morally worse in the world. Indeed, a strict deontologist could actually cause human catastrophes by adhering to Kant’s imperative and still think he acted morally (Kant at one point remarked that it is “better the whole people should perish” than that injustice be done — one wonders injustice to whom, since nobody would be left standing). Deontologists also have trouble dealing with the seemingly contradictory ideas that our duties are categorical (i.e., they do not admit of exceptions), and yet that some duties are more important than others. (Again, Kant famously stated that “a conflict of duties is inconceivable” while forgetting to provide any argument in defense of such a bold statement.)
  • One famous attempt at this reconciliation was proposed by Thomas Nagel (he of “what is it like to be a bat?” fame). Nagel suggested that perhaps we should be consequentialists when it comes to agent-neutral reasoning, and deontologists when we engage in agent-relative reasoning. He neglected to specify, however, any non-mysterious way to decide what to do in those situations in which the same moral dilemma can be seen from both perspectives.
yongernn teo

Ethics and Values Case Study- Mercy Killing, Euthanasia - 8 views

  •  
    THE ETHICAL PROBLEM: Allowing someone to die, mercy death, and mercy killing, Euthanasia: A 24-year-old man named Robert who has a wife and child is paralyzed from the neck down in a motorcycle accident. He has always been very active and hates the idea of being paralyzed. He also is in a great deal of pain, and he has asked his doctors and other members of his family to "put him out of his misery." After several days of such pleading, his brother comes into Robert's hospital ward and asks him if he is sure he still wants to be put out of his misery. Robert says yes and pleads with his brother to kill him. The brother kisses and blesses Robert, then takes out a gun and shoots him, killing him instantly. The brother later is tried for murder and acquitted by reason of temporary insanity. Was what Robert's brother did moral? Do you think he should have been brought to trial at all? Do you think he should have been acquitted? Would you do the same for a loved one if you were asked? THE DISCUSSION: In my opinion, the most dubious part about the case would be the part on Robert pleading with his brother, asking his brother to kill him. This could be his brother's own account of the incident and could/could not have been a plea by Robert. 1) With the assumption that Robert indeed pleaded with his brother to kill him, an ethical analysis as such could be derived: that Robert's brother was only respecting Robert's choice and killed him because he wanted to relieve him from his misery. This could be argued to be ethical using a teleological framework where the focus is on the end-result and the consequences that the action entails. Here, although the act of killing per se may be wrong and illegal, Robert was able to be relieved of his pain and suffering. 2) With an assumption that Robert did not plead with his brother to kill him and that it was his brother's own decision to relieve Robert of all suffering: In this case, the b
  • ...2 more comments...
  •  
    I find euthanasia to be a very interesting ethical dilemma. Even I myself am caught in the middle. Euthanasia has been termed 'mercy killing' and even 'happy death'. Others may simply term it 'evil'. Is it right to end someone's life even when he or she pleads with you to do so? In the first place, is it even right to commit suicide? Once someone pulls off the main support that's keeping the person alive, such as the feeding tube, there is no turning back. Hmm.. Come to think of it, technology is kind of unethical by being made available, for in the past, when someone was dying, they had the right to die naturally. Now, scientific technology is 'forcing' us to stay alive and cling on to a life that may be deemed worthless if we were standing outside our bodies looking at our comatose selves. Then again, this may just be MY personal standpoint. But I have to argue, who gave technology the right to make me a worthless vegetable! (and here I am, attaching a value/judgement onto an immobile human being..) Hence, being incompetent in making decisions for my unconscious self (or perhaps even brain dead), who should take responsibility for my life, for my existence? And on what basis are they allowed to help me out? Taking the other side of the argument, against euthanasia, we can say that the act of ending someone else's life is the act of destroying societal respect for life. Based on the utilitarian perspective, we are not thinking of the overall beneficence for society and are disregarding the moral considerations encompassed within the state's interest to preserve the sanctity of all life. It has been said that life in itself takes priority over all other values. We should let the person live so as to give him/her a chance to wake up or hope for recovery (think comatose patients). But then again we can also argue that life is not the top of the hierarchy! A life without rights is as if not living a life at all? By removing the patient
  •  
    As a human being, you supposedly have a right to live, whether you are mobile or immobile. However, I think that, in the case of euthanasia, you 'give up' your rights when you "show" that you are no longer able to serve the pre-requisites of having the right. For example, if "living" rights equate to being able to talk, walk, etc., then obviously the opposite means you are no longer able to perform up to the expectations of that right. Then again, it is very subjective as to who gets to set those criteria!
  •  
    Hmm, interesting. However, a question I have is: who decides, and when, that this "right" can be "given up"? Say I am a victim in a car accident, and I lose the ability to breathe and walk and may need months to recover. I am unconscious and the doctor is unable to determine when I am going to regain consciousness. When should my parents decide that I can no longer have any living rights? And taking Elaine's point into consideration, is committing suicide even 'right'? If it is legally not right, then when I ask someone to take my life and write a letter saying it was because I wanted to die, does that make it suicide merely carried out by the hands of others?
  •  
    Similarly, I question the 'rights' that you have to 'give up' when you no longer 'serve the pre-requisites of having the right'. If living rights mean being able to talk and walk, then where does it leave infants? Where does it leave people who may be handicapped? Have they lost their rights to living?
Weiye Loh

BrainGate gives paralysed the power of mind control | Science | The Observer - 0 views

  • brain-computer interface, or BCI
  • is a branch of science exploring how computers and the human brain can be meshed together. It sounds like science fiction (and can look like it too), but it is motivated by a desire to help chronically injured people. They include those who have lost limbs, people with Lou Gehrig's disease, or those who have been paralysed by severe spinal-cord injuries. But the group of people it might help the most are those whom medicine assumed were beyond all hope: sufferers of "locked-in syndrome".
  • These are often stroke victims whose perfectly healthy minds end up trapped inside bodies that can no longer move. The most famous example was French magazine editor Jean-Dominique Bauby who managed to dictate a memoir, The Diving Bell and the Butterfly, by blinking one eye. In the book, Bauby, who died in 1997 shortly after the book was published, described the prison his body had become for a mind that still worked normally.
  • ...9 more annotations...
  • Now the project is involved with a second set of human trials, pushing the technology to see how far it goes and trying to miniaturise it and make it wireless for a better fit in the brain. BrainGate's concept is simple. It posits that the problem for most patients does not lie in the parts of the brain that control movement, but with the fact that the pathways connecting the brain to the rest of the body, such as the spinal cord, have been broken. BrainGate plugs into the brain, picks up the right neural signals and beams them into a computer where they are translated into moving a cursor or controlling a computer keyboard. By this means, paralysed people can move a robot arm or drive their own wheelchair, just by thinking about it.
  • he and his team are decoding the language of the human brain. This language is made up of electronic signals fired by billions of neurons and it controls everything from our ability to move, to think, to remember and even our consciousness itself. Donoghue's genius was to develop a deceptively small device that can tap directly into the brain and pick up those signals for a computer to translate them. Gold wires are implanted into the brain's tissue at the motor cortex, which controls movement. Those wires feed back to a tiny array – an information storage device – attached to a "pedestal" in the skull. Another wire feeds from the array into a computer. A test subject with BrainGate looks like they have a large plug coming out the top of their heads. Or, as Donoghue's son once described it, they resemble the "human batteries" in The Matrix.
  • BrainGate's highly advanced computer programs are able to decode the neuron signals picked up by the wires and translate them into the subject's desired movement. In crude terms, it is a form of mind-reading based on the idea that thinking about moving a cursor to the right will generate detectably different brain signals than thinking about moving it to the left. (A toy sketch of a decoder of this general kind appears after these annotations.)
  • The technology has developed rapidly, and last month BrainGate passed a vital milestone when one paralysed patient went past 1,000 days with the implant still in her brain and allowing her to move a computer cursor with her thoughts. The achievement, reported in the prestigious Journal of Neural Engineering, showed that the technology can continue to work inside the human body for unprecedented amounts of time.
  • Donoghue talks enthusiastically of one day hooking up BrainGate to a system of electronic stimulators plugged into the muscles of the arm or legs. That would open up the prospect of patients moving not just a cursor or their wheelchair, but their own bodies.
  • If Nagle's motor cortex was no longer working healthily, the entire BrainGate project could have been rendered pointless. But when Nagle was plugged in and asked to imagine moving his limbs, the signals beamed out with a healthy crackle. "We asked him to imagine moving his arm to the left and to the right and we could hear the activity," Donoghue says. When Nagle first moved a cursor on a screen using only his thoughts, he exclaimed: "Holy shit!"
  • BrainGate and other BCI projects have also piqued the interest of the government and the military. BCI is melding man and machine like no other sector of medicine or science and there are concerns about some of the implications. First, beyond detecting and translating simple movement commands, BrainGate may one day pave the way for mind-reading. A device to probe the innermost thoughts of captured prisoners or dissidents would prove very attractive to some future military or intelligence service. Second, there is the idea that BrainGate or other BCI technologies could pave the way for robot warriors controlled by distant humans using only their minds. At a conference in 2002, a senior American defence official, Anthony Tether, enthused over BCI. "Imagine a warrior with the intellect of a human and the immortality of a machine." Anyone who has seen Terminator might worry about that.
  • Donoghue acknowledges the concerns but has little time for them. When it comes to mind-reading, current BrainGate technology has enough trouble with translating commands for making a fist, let alone probing anyone's mental secrets
  • As for robot warriors, Donoghue was slightly more circumspect. At the moment most BCI research, including BrainGate projects, that touch on the military is focused on working with prosthetic limbs for veterans who have lost arms and legs. But Donoghue thinks it is healthy for scientists to be aware of future issues. "As long as there is a rational dialogue and scientists think about where this is going and what is the reasonable use of the technology, then we are on a good path," he says.
  •  
    The robotic arm clutched a glass and swung it over a series of coloured dots that resembled a Twister gameboard. Behind it, a woman sat entirely immobile in a wheelchair. Slowly, the arm put the glass down, narrowly missing one of the dots. "She's doing that!" exclaims Professor John Donoghue, watching a video of the scene on his office computer - though the woman onscreen had not moved at all. "She actually has the arm under her control," he says, beaming with pride. "We told her to put the glass down on that dot." The woman, who is almost completely paralysed, was using Donoghue's groundbreaking technology to control the robot arm using only her thoughts. Called BrainGate, the device is implanted into her brain and hooked up to a computer to which she sends mental commands. The video played on, giving Donoghue, a silver-haired and neatly bearded man of 62, even more reason to feel pleased. The patient was not satisfied with her near miss and the robot arm lifted the glass again. After a brief hover, the arm positioned the glass on the dot.
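As promised in the annotation above, here is a toy sketch of a linear neural decoder in Python. This is emphatically not BrainGate's actual algorithm: the cosine-tuning model, channel count, and noise levels are all simulated assumptions, meant only to illustrate how statistically different firing patterns for different intended movements can be mapped to a cursor velocity.

import numpy as np

# Simulate neurons with cosine tuning: each fires more when the intended
# movement direction aligns with its (hidden) preferred direction.
rng = np.random.default_rng(0)
n_neurons, n_trials = 96, 500    # ~100 channels, roughly the scale of such arrays

preferred = rng.uniform(0.0, 2.0 * np.pi, n_neurons)
intended = rng.uniform(0.0, 2.0 * np.pi, n_trials)
velocity = np.column_stack([np.cos(intended), np.sin(intended)])  # (vx, vy) targets

base, gain, noise = 10.0, 5.0, 2.0
rates = base + gain * np.cos(intended[:, None] - preferred[None, :])
rates += rng.normal(0.0, noise, rates.shape)

# "Training": least-squares fit of a linear map W from firing rates to velocity.
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# "Use": decode a new imagined movement from its noisy firing rates alone.
true_angle = np.pi / 3
test_rates = base + gain * np.cos(true_angle - preferred) + rng.normal(0.0, noise, n_neurons)
vx, vy = test_rates @ W
print(f"decoded direction: {np.degrees(np.arctan2(vy, vx)):.1f} deg (true: 60.0 deg)")

Real systems must also handle spike sorting, non-stationary signals, and closed-loop adaptation by the user, none of which this sketch models.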
Weiye Loh

Rationally Speaking: Don't blame free speech for the murders in Afghanistan - 0 views

  • The most disturbing example of this response came from the head of the U.N. Assistance Mission in Afghanistan, Staffan de Mistura, who said, “I don't think we should be blaming any Afghan. We should be blaming the person who produced the news — the one who burned the Koran. Freedom of speech does not mean freedom of offending culture, religion, traditions.” I was not going to comment on this monumentally inane line of thought, especially since Susan Jacoby, Michael Tomasky, and Mike Labossiere have already done such a marvelous job of it. But then I discovered, to my shock, that several of my liberal, progressive American friends actually agreed that Jones has some sort of legal and moral responsibility for what happened in Afghanistan
  • I believe he has neither. Here is why. Unlike many countries in the Middle East and Europe that punish blasphemy by fine, jail or death, the U.S., via the First Amendment and a history of court decisions, strongly protects freedom of speech and expression as basic and fundamental human rights. These include critiquing and offending other citizens’ culture, religion, and traditions. Such rights are not supposed to be swayed by peoples' subjective feelings, which form an incoherent and arbitrary basis for lawmaking. In a free society, if and when a person is offended by an argument or act, he or she has every right to argue and act back. If a person commits murder, the answer is not to limit the right; the answer is to condemn and punish the murderer for overreacting.
  • Of course, there are exceptions to this rule. Governments have an interest in condemning certain speech that provokes immediate hatred of or violence against people. The canonical example is yelling “fire!” in a packed room when there in fact is no fire, since this creates a clear and imminent danger for those inside the room. But Jones did not create such an environment, nor did he intend to. Jones (more precisely, Wayne Sapp) merely burned a book in a private ceremony in protest of its contents. Indeed, the connection between Jones and the murders requires many links in-between. The mob didn’t kill those accountable, or even Americans.
  • ...3 more annotations...
  • But even if there is no law prohibiting Jones’ action, isn’t he morally to blame for creating the environment that led to the murders? Didn’t he know Muslims would riot, and people might die? It seems ridiculous to assume that Jones could know such a thing, even if parts of the Muslim world have a poor track record in this area. But imagine for a moment that Jones did know Muslims would riot, and people would die. This does not make the act of burning a book and the act of murder morally equivalent, nor does it make the book burner responsible for reactions to his act. In and of itself, burning a book is a morally neutral act. Why would this change because some misguided individuals think book burning is worth the death penalty? And why is it that so many have automatically assumed the reaction to be respectable? To use an example nearer to some of us, recall when PZ Myers desecrated a communion wafer. If some Christian was offended, and went on to murder the closest atheist, would we really blame Myers? Is Myers' offense any different than Jones’?
  • the deep-seated belief among many that blasphemy is wrong. This means any reaction to blasphemy is less wrong, and perhaps even excused, compared to the blasphemous offense. Even President Obama said that, "The desecration of any holy text, including the Koran, is an act of extreme intolerance and bigotry.” To be sure, Obama went on to denounce the murders, and to state that burning a holy book is no excuse for murder. But Obama apparently couldn’t condemn the murders without also condemning Jones’ act of religious defiance.
  • As it turns out, this attitude is exactly what created the environment that led to murders in the first place. The members of the mob believed that religious belief should be free from public critical inquiry, and that a person who offends religious believers should face punishment. In the absence of official prosecution, they took matters into their own hands and sought anyone on the side of the offender. It didn’t help that Afghan leaders stoked the flames of hatred — but they only did so because they agreed with the mob’s sentiment to begin with. Afghan President Hamid Karzai said the U.S. should punish those responsible, and three well-known Afghan mullahs urged their followers to take to the streets and protest to call for the arrest of Jones
Weiye Loh

Rationally Speaking: Evolution as pseudoscience? - 0 views

  • I have been intrigued by an essay by my colleague Michael Ruse, entitled “Evolution and the idea of social progress,” published in a collection that I am reviewing, Biology and Ideology from Descartes to Dawkins (gotta love the title!), edited by Denis Alexander and Ronald Numbers.
  • Ruse's essay in the Alexander-Numbers collection questions the received story about the early evolution of evolutionary theory, which sees the stuff that immediately preceded Darwin — from Lamarck to Erasmus Darwin — as protoscience, the immature version of the full-fledged science that biology became after Chuck's publication of the Origin of Species. Instead, Ruse thinks that pre-Darwinian evolutionists really engaged in pseudoscience, and that it took a very conscious and precise effort on Darwin’s part to sweep away all the garbage and establish a discipline with empirical and theoretical content analogous to that of the chemistry and physics of the time.
  • Ruse asserts that many serious intellectuals of the late 18th and early 19th century actually thought of evolution as pseudoscience, and he is careful to point out that the term “pseudoscience” had been used at least since 1843 (by the physiologist Francois Magendie)
  • ...17 more annotations...
  • Ruse’s somewhat surprising yet intriguing claim is that “before Charles Darwin, evolution was an epiphenomenon of the ideology of [social] progress, a pseudoscience and seen as such. Liked by some for that very reason, despised by others for that very reason.”
  • Indeed, the link between evolution and the idea of human social-cultural progress was very strong before Darwin, and was one of the main things Darwin got rid of.
  • The encyclopedist Denis Diderot was typical in this respect: “The Tahitian is at a primary stage in the development of the world, the European is at its old age. The interval separating us is greater than that between the new-born child and the decrepit old man.” Similar nonsensical views can be found in Lamarck, Erasmus, and Chambers, the anonymous author of The Vestiges of the Natural History of Creation, usually considered the last protoscientific book on evolution to precede the Origin.
  • On the other side of the divide were social conservatives like the great anatomist George Cuvier, who rejected the idea of evolution — according to Ruse — not as much on scientific grounds as on political and ideological ones. Indeed, books like Erasmus’ Zoonomia and Chambers’ Vestiges were simply not much better than pseudoscientific treatises on, say, alchemy before the advent of modern chemistry.
  • people were well aware of this sorry situation, so much so that astronomer John Herschel referred to the question of the history of life as “the mystery of mysteries,” a phrase consciously adopted by Darwin in the Origin. Darwin set out to solve that mystery under the influence of three great thinkers: Newton, the above mentioned Herschel, and the philosopher William Whewell (whom Darwin knew and assiduously frequented in his youth)
  • Darwin was a graduate of the University of Cambridge, which had also been Newton’s home. Chuck got drilled early on during his Cambridge education with the idea that good science is about finding mechanisms (vera causa), something like the idea of gravitational attraction underpinning Newtonian mechanics. He reflected that all the talk of evolution up to then — including his grandfather’s — was empty, without a mechanism that could turn the idea into a scientific research program.
  • The second important influence was Herschel’s Preliminary Discourse on the Study of Natural Philosophy, published in 1831 and read by Darwin shortly thereafter, in which Herschel sets out to give his own take on what today we would call the demarcation problem, i.e. what methodology is distinctive of good science. One of Herschel’s points was to stress the usefulness of analogical reasoning.
  • Finally, and perhaps most crucially, Darwin also read (twice!) Whewell’s History of the Inductive Sciences, which appeared in 1837. In it, Whewell sets out his notion that good scientific inductive reasoning proceeds by a consilience of inductions, a situation in which multiple independent lines of evidence point to the same conclusion.
  • the first part of the Origin, where Darwin introduces the concept of natural selection by way of analogy with artificial selection, can be read as the result of Herschel’s influence (natural selection is the vera causa of evolution).
  • the second part of the book, constituting Darwin's famous “long argument,” applies Whewell’s method of consilience by bringing in evidence from a number of disparate fields, from embryology to paleontology to biogeography.
  • What, then, happened to the strict coupling of the ideas of social and biological progress that had preceded Darwin? While he still believed in the former, the latter was no longer an integral part of evolution, because natural selection makes things “better” only in a relative fashion. There is no meaningful sense in which, say, a large brain is better than very fast legs or sharp claws, as long as you still manage to have dinner and avoid being dinner by the end of the day (or, more precisely, by the time you reproduce).
  • Ruse’s claim that evolution transitioned not from protoscience to science, but from pseudoscience to science, makes sense to me given the historical and philosophical developments. It wasn’t the first time either. Just think about the already mentioned shift from alchemy to chemistry.
  • Of course, the distinction between pseudoscience and protoscience is itself fuzzy, but we do have what I think are clear examples of the latter that cannot reasonably be confused with the former, SETI for one, and arguably Ptolemaic astronomy. We also have pretty obvious instances of pseudoscience (the usual suspects: astrology, ufology, etc.), so the distinction — as long as it is not stretched beyond usefulness — is interesting and defensible.
  • It is amusing to speculate which, if any, of the modern pseudosciences (cryonics, singularitarianism) might turn out to be able to transition in one form or another to actual sciences. To do so, they may need to find their philosophically and scientifically savvy Darwin, and a likely bet — if history teaches us anything — is that, should they succeed in this transition, their mature form will look as different from the original as chemistry does from alchemy. Or as Darwinism does from pre-Darwinian evolutionism.
  • Darwin called the Origin "one long argument," but I really do think that recognizing that the book contains (at least) two arguments could help to dispel that whole "just a theory" canard. The first half of the book is devoted to demonstrating that natural selection is the true cause of evolution; vera causa arguments require that the cause's effect be demonstrated as fact, so the second half of the book is devoted to a demonstration that evolution has really happened. In other words, evolution is a demonstrable fact and natural selection is the theory that explains that fact, just as the motion of the planets is a fact and gravity is the theory that explains it.
  • Cryogenics is the study of the production of low temperatures and the behavior of materials at those temperatures. It is a legitimate branch of physics and has been for a long time. I think you meant 'cryonics'.
  • The Singularity means different things to different people. It is uncharitable to dismiss all "singularitarians" by debunking Kurzweil. He is low-hanging fruit. Reach for something higher.
  •  
    "before Charles Darwin, evolution was an epiphenomenon of the ideology of [social] progress, a pseudoscience and seen as such. Liked by some for that very reason, despised by others for that very reason."
Weiye Loh

Rationally Speaking: Ray Kurzweil and the Singularity: visionary genius or pseudoscientific crank? - 0 views

  • I will focus on a single detailed essay he wrote entitled “Superintelligence and Singularity,” which was originally published as chapter 1 of his The Singularity is Near (Viking 2005), and has been reprinted in an otherwise insightful collection edited by Susan Schneider, Science Fiction and Philosophy.
  • Kurzweil begins by telling us that he gradually became aware of the coming Singularity, in a process that, somewhat peculiarly, he describes as a “progressive awakening” — a phrase with decidedly religious overtones. He defines the Singularity as “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.” Well, by that definition, we have been through several “singularities” already, as technology has often rapidly and irreversibly transformed our lives.
  • The major piece of evidence for Singularitarianism is what “I [Kurzweil] have called the law of accelerating returns (the inherent acceleration of the rate of evolution, with technological evolution as a continuation of biological evolution).”
  • ...9 more annotations...
  • the first obvious serious objection is that technological “evolution” is in no logical way a continuation of biological evolution — the word “evolution” here being applied with completely different meanings. And besides, there is no scientifically sensible way in which biological evolution has been accelerating over the several billion years of its operation on our planet. So much for scientific accuracy and logical consistency.
  • here is a bit that will give you an idea of why some people think of Singularitarianism as a secular religion: “The Singularity will allow us to transcend [the] limitations of our biological bodies and brains. We will gain power over our fates. Our mortality will be in our own hands. We will be able to live as long as we want.”
  • Fig. 2 of that essay shows a progression through (again, entirely arbitrary) six “epochs,” with the next one (#5) occurring when there will be a merger between technological and human intelligence (somehow, a good thing), and the last one (#6) labeled as nothing less than “the universe wakes up” — a nonsensical outcome further described as “patterns of matter and energy in the universe becom[ing] saturated with intelligence processes and knowledge.” This isn’t just science fiction, it is bad science fiction.
  • “a serious assessment of the history of technology reveals that technological change is exponential. Exponential growth is a feature of any evolutionary process.” First, it is highly questionable that one can even measure “technological change” on a coherent uniform scale. Yes, we can plot the rate of, say, increase in microprocessor speed, but that is but one aspect of “technological change.” As for the idea that any evolutionary process features exponential growth, I don’t know where Kurzweil got it, but it is simply wrong, for one thing because biological evolution does not have any such feature — as any student of Biology 101 ought to know.
  • Kurzweil’s ignorance of evolution is manifested again a bit later, when he claims — without argument, as usual — that “Evolution is a process of creating patterns of increasing order. ... It’s the evolution of patterns that constitutes the ultimate story of the world. ... Each stage or epoch uses the information-processing methods of the previous epoch to create the next.” I swear, I was fully expecting a scholarly reference to Deepak Chopra at the end of that sentence. Again, “evolution” is a highly heterogeneous term that picks out completely different concepts, such as cosmic “evolution” (actually just change over time), biological evolution (which does have to do with the creation of order, but not in Kurzweil’s blatantly teleological sense), and technological “evolution” (which is certainly yet another type of beast altogether, since it requires intelligent design). And what on earth does it mean that each epoch uses the “methods” of the previous one to “create” the next one?
  • As we have seen, the whole idea is that human beings will merge with machines during the ongoing process of ever accelerating evolution, an event that will eventually lead to the universe awakening to itself, or something like that. Now here is the crucial question: how come this has not happened already?
  • To appreciate the power of this argument you may want to refresh your memory about the Fermi Paradox, a serious (though in that case, not a knockdown) argument against the possibility of extraterrestrial intelligent life. The story goes that physicist Enrico Fermi (the inventor of the first nuclear reactor) was having lunch with some colleagues, back in 1950. His companions were waxing poetic about the possibility, indeed the high likelihood, that the galaxy is teeming with intelligent life forms. To which Fermi asked something along the lines of: “Well, where are they, then?”
  • The idea is that even under very pessimistic (i.e., very un-Kurzweil-like) expectations about how quickly an intelligent civilization would spread across the galaxy (without even violating the speed of light limit!), and given the mind-boggling length of time the galaxy has already existed, it becomes difficult (though, again, not impossible) to explain why we haven’t seen the darn aliens yet (a back-of-envelope version of this timescale argument is sketched after this list).
  • Now, translate that to Kurzweil’s much more optimistic predictions about the Singularity (which allegedly will occur around 2045, conveniently just a bit after Kurzweil’s expected demise, given that he is 63 at the time of this writing). Considering that there is no particular reason to think that planet earth, or the human species, has to be the one destined to trigger the big event, why is it that the universe hasn’t already “awakened” as a result of a Singularity occurring somewhere else at some other time?
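  • To make the timescales concrete, here is a minimal back-of-envelope sketch in Python. All the numbers are my own illustrative assumptions, not figures from the post: a pessimistic colonization wave moving at 0.1% of light speed, a galaxy roughly 100,000 light years across, and a galactic disk on the order of ten billion years old.

    # Back-of-envelope arithmetic behind the Fermi paradox argument.
    # All inputs are illustrative assumptions, not figures from the post.
    galaxy_diameter_ly = 100_000       # light years, rough Milky Way diameter
    galaxy_age_yr = 10_000_000_000     # years, a conservative disk age
    wave_speed_c = 0.001               # colonization wave at 0.1% of light speed

    # Light covers one light year per year, so a wave moving at a fraction
    # f of light speed needs diameter / f years to cross the galaxy.
    crossing_time_yr = galaxy_diameter_ly / wave_speed_c

    print(f"Time to cross the galaxy: {crossing_time_yr:,.0f} years")
    print(f"Crossings possible in the galaxy's lifetime: "
          f"{galaxy_age_yr / crossing_time_yr:,.0f}")

    Even with these deliberately pessimistic inputs, one crossing takes about 100 million years, so the galaxy could have been swept roughly a hundred times over already; that is the gap that Fermi’s question — and its Singularity variant — exploits.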
Weiye Loh

Skepticblog » About the International Nuclear Event Scale - 0 views

  • The INES scale is an internationally agreed-upon standard. Signatory nations are themselves responsible for interpreting the scale and assigning numbers to their own incidents. There is not a single international body that does this. Indeed, from the INES web site, under “What the Scale is Not For”: “It is not appropriate to use INES to compare safety performance between facilities, organizations or countries. The statistically small numbers of events at Level 2 and above and the differences between countries for reporting more minor events to the public make it inappropriate to draw international comparisons.”
  • the INES number is not a “threat level”. It’s a rough assessment of the scale of a mess that has been created. It does not portend coming danger; it characterizes an incident.
  • Nuclear incident severity levels (chart).
  • ...3 more annotations...
  • Within Japan, it’s the NSC (Nuclear Safety Commission) that has responsibility for classifying its incidents. When they say Fukushima is a 7, it doesn’t necessarily mean the same thing as what the USSR considered to be a 7 in 1986. Why not? Because there are many different aspects to a nuclear incident. There are health effects, potential health effects, environmental effects, measurements of radiation released, and so on.
  • The scale boils all these factors down to a single number, which, to me, is a misguided effort:
    0 – No safety significance
    1 – Anomaly
    2 – Incident
    3 – Serious incident
    4 – Accident with local consequences
    5 – Accident with wider consequences
    6 – Serious accident
    7 – Major accident
    I certainly agree that Fukushima is a 7, a major accident, considering its type of reactor. Chernobyl was a Generation 0 atomic pile, not really what you’d call a nuclear reactor, and I’m surprised it didn’t blow up half the continent. For a proper nuclear reactor, I think Fukushima is about as bad as things can get. (The level labels are restated as a simple lookup table after this list.)
  • But notice, it does not fulfill some of the qualifications of a 7, or even of a 4. For example, people start dying from radiation as early as 4 on the scale. Nobody has died from radiation at Fukushima (three were killed by the tsunami), and nobody was hurt at all at Three Mile Island, which was a 5. The grimmest rational estimates of Chernobyl put its eventual death toll from cancer at 4,000. But it does fulfill the other qualifications of a 7, notably: “Major release of radioactive material with widespread health and environmental effects requiring implementation of planned and extended countermeasures.”
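  • As a small data-structure illustration, the eight level labels quoted above map onto a plain lookup table; the following is a minimal Python sketch. The labels are the IAEA’s as given in the post, while the describe helper is my own hypothetical convenience; as argued above, the number it labels carries no “threat level” information.

    # The eight INES level labels quoted above, as a simple lookup table.
    INES_LEVELS = {
        0: "No safety significance",
        1: "Anomaly",
        2: "Incident",
        3: "Serious incident",
        4: "Accident with local consequences",
        5: "Accident with wider consequences",
        6: "Serious accident",
        7: "Major accident",
    }

    def describe(level: int) -> str:
        # Hypothetical helper: look up the label for an INES level (0-7).
        return INES_LEVELS.get(level, "Unknown level")

    print(describe(7))   # -> Major accident
    print(describe(5))   # -> Accident with wider consequences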
Weiye Loh

Rationally Speaking: Talking to the media, a cautionary tale - 0 views

  • The Observer piece then continues by labeling New York City Skeptics as a cult. Now a cult is often defined as “a relatively small group of people having religious beliefs or practices regarded by others as strange or sinister.” Hmm, let’s see. Well, NYCS is indeed a small group, and it probably isn’t impossible to find someone somewhere who considers our activities “strange” (though “sinister” would be pushing it). At least as strange as New Yorkers might find a group of people getting together for dinner and talking about things they are interested in — that is, not at all. But “having religious beliefs”? By what sort of distorted conception of religious belief does what Mr. Liu observed that night qualify as such? We are not told, though inquiring minds (apparently not those of Liu’s editors) wish to know.
  • For Liu “Skepticism starts with the feeling of being under siege by the nonthinking. It becomes Skepticism with the faith that there must be people out there who think like you do — that is, who think.” Well, that’s actually close to the mark, except that we like to think that we go by evidence not faith. But just as my spirits (metaphorically speaking) were beginning to lift a bit, I learned from Mr. Liu that skepticism has recently turned “[in]to something like a distinct, aggressive and almost messianic mentality.” Distinct, yes. Aggressive, maybe, though nothing compared to the aggressiveness of fundamentalists and homeopaths. Messianic? Here we go again with the projected Jesus complex!
  • Had he done his homework, he would have found out the answer quite readily: until the very same week of the meetup, New Yorkers had been treated to an inane message of the anti-vaccination movement, displayed in full color on the CBS billboard in Times Square. But that’s a fact that was much less interesting to Mr. Liu than the type of earring I wear (a black diamond, if you need to know).
Weiye Loh

The Good Short Life With A.L.S. - NYTimes.com - 0 views

  • Lingering would be a colossal waste of love and money.
  • I’d rather die. I respect the wishes of people who want to live as long as they can. But I would like the same respect for those of us who decide — rationally — not to. I’ve done my homework. I have a plan. If I get pneumonia, I’ll let it snuff me out. If not, there are those other ways. I just have to act while my hands still work: the gun, narcotics, sharp blades, a plastic bag, a fast car, over-the-counter drugs, oleander tea (the polite Southern way), carbon monoxide, even helium. That would give me a really funny voice at the end. I have found the way. Not a gun. A way that’s quiet and calm. Knowing that comforts me. I don’t worry about fatty foods anymore. I don’t worry about having enough money to grow old. I’m not going to grow old. I’m having a wonderful time.
  •  
    We obsess in this country about how to eat and dress and drink, about finding a job and a mate. About having sex and children. About how to live. But we don't talk about how to die. We act as if facing death weren't one of life's greatest, most absorbing thrills and challenges. Believe me, it is. This is not dull. But we have to be able to see doctors and machines, medical and insurance systems, family and friends and religions as informative - not governing - in order to be free.
Weiye Loh

journalism.sg » Tony Tan engages the blogs: new era in relations with alternative media? - 0 views

  • TOC, Mr Brown, Leong Sze Hian and other bloggers received the information from Tan’s office yesterday and honoured the embargo on the news.
  • As the presumptive government-endorsed candidate, Tan's move can be seen as a landmark in relations between the state and Singapore’s intrepid and often unruly alternative online media. Until now, the government has refused to treat any of these sites as engaging in bona fide journalism. Bloggers have long complained that government departments do not respond to requests for information. When The Online Citizen organised a pre-election forum for all political parties to share their ideas last December, the People’s Action Party would have nothing to do with it. TOC highlighted the ruling party’s conspicuous absence by leaving an empty chair on stage. The election regulations’ ban on campaigning on the “cooling off” day and polling day also discriminates against citizen journalism: only licensed news organisations are exempted.
  • The sudden change of heart is undoubtedly one result of May’s groundbreaking general election. Online media were obviously influential, and the government may have decided that it has no choice but to do business with them.
  • ...3 more annotations...
  • While officials probably still can’t stand TOC’s guts, such sites represent the more rational and reasonable end of the ideological spectrum in cyberspace. TOC, together with Alex Au’s Yawning Bread and some other individual blogs, have been noticeably pushing for more credible online journalism within their extremely limited means. Most importantly, they have shown some commitment to accountability. They operate openly rather than behind cloaks of pseudonymity, they are not above correcting factual errors when these are pointed out to them, and they practice either pre- or post-moderation of comments to keep discussions within certain bounds.
  • Bloggers will have to understand that the huge and complex machinery of government is not going to transform itself overnight. Indeed, a blogger-friendly media engagement policy is probably easier to implement for a small and discrete Presidential Election campaign office than it would be for any government ministry.
  • On the government’s part, officials need to be clear that the success of the experiment cannot be measured by how quickly bloggers and their readers are led to the “right” answers or to a “consensus”, but by the inclusiveness and civility of the conversation: as long as more and more people are trying to persuade one another – rather than ignoring or shouting down one another – such engagement between government and alternative media would be strengthening Singapore’s governance and civic life.
Weiye Loh

A Clockwork Chemistry - Guy Kahane - Project Syndicate - 0 views

  •  
    Over the past decade, an army of psychologists, neuroscientists, and evolutionary biologists has been busy trying to uncover the neural "clockwork" that underlies human morality. They have started to trace the evolutionary origins of pro-social sentiments such as empathy, and have begun to uncover the genes that dispose some individuals to senseless violence and others to acts of altruism, and the pathways in our brain that shape our ethical decisions. And to understand how something works is also to begin to see ways to modify and even control it. Indeed, scientists have not only identified some of the brain pathways that shape our ethical decisions, but also chemical substances that modulate this neural activity.