
Home/ New Media Ethics 2009 course/ Group items tagged Life


Weiye Loh

Do avatars have digital rights? - 20 views

hi weiye, i agree with you that this brings in the topic of representation. maybe you should try taking media and representation by Dr. Ingrid to discuss more on this. Going back to your questio...

avatars

Weiye Loh

Happiness: Do we have a choice? » Scienceline - 0 views

  • “Objective choices make a difference to happiness over and above genetics and personality,” said Bruce Headey, a psychologist at Melbourne University in Australia. Headey and his colleagues analyzed annual self-reports of life satisfaction from over 20,000 Germans who have been interviewed every year since 1984. He compared five-year averages of people’s reported life satisfaction, and plotted their relative happiness on a percentile scale from 1 to 100. Headey found that as time went on, more and more people recorded substantial changes in their life satisfaction. By 2008, more than a third had moved up or down on the happiness scale by at least 25 percent, compared to where they had started in 1984.
  • Headey’s findings, published in the October 19th issue of Proceedings of the National Academy of Sciences, run contrary to what is known as the happiness set-point theory — the idea that even if you win the lottery or become a paraplegic, you’ll revert back to the same fixed level of happiness within a year or two. This psychological theory was widely accepted in the 1990s because it explained why happiness levels seemed to remain stable over the long term: They were mainly determined early in life by genetic factors including personality traits.
  • But even this dynamic choice-driven picture does not fully capture the nuance of what it means to be happy, said Jerome Kagan, a Harvard University developmental psychologist. He warns against conflating two distinct dimensions of happiness: everyday emotional experience (an assessment of how you feel at the moment) and life evaluation (a judgment of how satisfied you are with your life). It’s the difference between “how often did you smile yesterday?” and “how does your life compare to the best possible life you can imagine?”
  • Kagan suggests that we may have more choice over the latter, because life evaluation is not a function of how we currently feel — it is a comparison of our life to what we decide the good life should be.
  • Kagan has found that young children differ biologically in the ease with which they can feel happy, or tense, or distressed, or sad — what he calls temperament. People establish temperament early in life and have little capacity to change it. But they can change their life evaluation, which Kagan describes as an ethical concept synonymous with “how good of a life have I led?” The answer will depend on individual choices and the purpose they create for themselves. A painter who is constantly stressed and moody (unhappy in the moment) may still feel validation in creating good artwork and may be very satisfied with his life (happy as a judgment).
  • when it comes to happiness, our choices may matter — but it depends on what the choices are about, and how we define what we want to change.
  • Graham thinks that people may evaluate their happiness based on whichever dimension — happiness at the moment, or life evaluation — they have a choice over.
  •  
    Instead of existing as a stable equilibrium, Headey suggests that happiness is much more dynamic, and that individual choices - about one's partner, working hours, social participation and lifestyle - make substantial and permanent changes to reported happiness levels. For example, doing more or fewer paid hours of work than you want, or exercising regularly, can have just as much impact on life satisfaction as having an extroverted personality.
Weiye Loh

Catholic Bishop Castigates and Threatens Hospital that Saved Woman's Life | RHRealityCh... - 0 views

  • a young mother of four children was rushed to St. Joseph's Hospital in Phoenix, Arizona for an emergency abortion. The doctors who cared for her at the Catholic hospital determined that without the emergency abortion, she likely would have died.
  • The woman was eleven weeks pregnant and suffered from life-threatening pulmonary hypertension, which is high blood pressure in the arteries that supply blood to the lungs. As her condition worsened, the hospital diagnosed her with right-sided heart failure and cardiogenic shock, and determined that she would almost certainly die unless she terminated the pregnancy.
  • After the life-saving procedure was performed, Bishop Thomas Olmstead of the Diocese demoted Sister Mary McBride, who acted as the liaison between the hospital Ethics Committee and the physicians. The U.S. Conference of Catholic Bishops agreed with the decision.
  • Bishop Olmstead is not only castigating Catholic Healthcare West, the group that runs St. Joseph's Hospital, for saving her life but threatening them in order to force them to promise that doctors will never save a woman's life if it requires an emergency abortion ever again.
  • In a letter (PDF) to Lloyd H. Dean, President of Catholic Healthcare West, Bishop Olmstead calls the life-saving procedure "morally wrong" even though he doesn't deny that it almost certainly saved her life. The ACLU notes that he then "threatens to remove his endorsement of the hospital unless CHW 'acknowledge[s] in writing that the medical procedure that resulted in the abortion at St. Joseph's Hospital was a violation' of the policy that governs all Catholic hospitals and 'will never occur again at St. Joseph's Hospital.'"
  • it seems as if Dean and CHW have stuck to their position that not only were their actions moral and just, in this circumstance, but that they certainly would not promise not to save a woman's life or health if presented with a similar case in the future. In fact, they presented both religious and moral ethicists' opinions as support for the hospital's actions.
  • The ACLU claims that Olmstead's insistence that the hospital must never provide an emergency abortion procedure is actually a violation of federal law. Alexi Kolbi-Molinas, staff attorney for the ACLU, said in a statement this week: "Religiously affiliated hospitals are not exempt from federal laws that protect a patient's right to receive emergency care, and cannot invoke their religious status to jeopardize the health and lives of pregnant women. Women should never have to be afraid that they will be denied life-saving medical care when they enter a hospital."
  • The specific federal law to which Kolbi-Molinas refers is the Emergency Medical Treatment and Active Labor Act. The law protects patients' rights to receive emergency reproductive health care, and Catholic hospitals cannot opt out. The law is necessary given that Catholic hospitals operate 15 percent of all hospital beds, according to the ACLU, and may provide the only or closest emergency care in a region.
  • the ACLU requests an investigation into violations of the federal law - not only as a result of the incident at St. Joseph's but after numerous reports of horrendous scenarios: We know that what happened at St. Joseph's was not an isolated incident. Catholic-owned hospitals across the country have refused to provide emergency abortions, as documented in a recent article in the American Journal of Public Health. For example, a doctor in the Northeast decided to leave a Catholic-owned hospital after he was forced by the hospital's ethics committee to risk a pregnant patient's life. The woman was in the process of miscarrying at 19 weeks of pregnancy. She was dying: her temperature was 106 degrees, and she had disseminated intravascular coagulopathy, a life-threatening condition that prevents a person's blood from clotting normally and causes excessive bleeding. This patient was bleeding so badly that the sclera, the whites of her eyes, were red, filled with blood. Despite the fact that there was no chance the fetus could survive, the ethics committee told the doctor that he could not perform the abortion the woman needed to save her life until the fetus's heartbeat stopped. After the delay, the patient was in the Intensive Care Unit for 10 days, and developed pulmonary disease, resulting in lifetime oxygen dependency.
  • Still, Bishop Olmstead and the Roman Catholic Diocese are steadfast in their insistence that physicians and hospital administrators acted immorally when they saved the life of a pregnant mother of four children and are determined to ensure that pregnant women are not safe in the hands of Catholic hospitals across the country.
Weiye Loh

Cancer resembles life 1 billion years ago, say astrobiologists - microbiology, genomics... - 0 views

  • astrobiologists, working with oncologists in the US, have suggested that cancer resembles ancient forms of life that flourished between 600 million and 1 billion years ago.
  • Read more about what this discovery means for cancer research.
  • The genes that controlled the behaviour of these early multicellular organisms still reside within our own cells, managed by more recent genes that keep them in check. It's when these newer controlling genes fail that the older mechanisms take over, and the cell reverts to its earlier behaviours and grows out of control.
  • The new theory, published in the journal Physical Biology, has been put forward by two leading figures in the world of cosmology and astrobiology: Paul Davies, director of the Beyond Center for Fundamental Concepts in Science, Arizona State University; and Charles Lineweaver, from the Australian National University.
  • According to Lineweaver, this suggests that cancer is an atavism, or an evolutionary throwback.
  • In the paper, they suggest that a close look at cancer shows similarities with early forms of multicellular life.
  • “Unlike bacteria and viruses, cancer has not developed the capacity to evolve into new forms. In fact, cancer is better understood as the reversion of cells to the way they behaved a little over one billion years ago, when humans were nothing more than loose-knit colonies of only partially differentiated cells. “We think that the tumours that develop in cancer patients today take the same form as these simple cellular structures did more than a billion years ago,” he said.
  • One piece of evidence to support this theory is that cancers appear in virtually all metazoans, with the notable exception of the bizarre naked mole rat. "This quasi-ubiquity suggests that the mechanisms of cancer are deep-rooted in evolutionary history, a conjecture that receives support from both paleontology and genetics," they write.
  • the genes that controlled this early multi-cellular form of life are like a computer operating system's 'safe mode', and when there are failures or mutations in the more recent genes that manage the way cells specialise and interact to form the complex life of today, then the earlier level of programming takes over.
  • Their notion is in contrast to a prevailing theory that cancer cells are 'rogue' cells that evolve rapidly within the body, overcoming the normal slew of cellular defences.
  • However, Davies and Lineweaver point out that cancer cells are highly cooperative with each other, even while competing with the host's cells. This suggests a pre-existing complexity that is reminiscent of early multicellular life.
  • cancers' manifold survival mechanisms are predictable, and unlikely to emerge spontaneously through evolution within each individual in such a consistent way.
  • The good news is that this means combating cancer is not necessarily as complex as if the cancers were rogue cells evolving new and novel defence mechanisms within the body. Instead, because cancers fall back on the same evolved mechanisms that were used by early life, we can expect them to remain predictable; thus, if they're susceptible to treatment, it's unlikely they'll evolve new ways to get around it.
  • "If the atavism hypothesis is correct, there are new reasons for optimism," they write.
yongernn teo

Ethics and Values Case Study- Mercy Killing, Euthanasia - 8 views

  •  
    THE ETHICAL PROBLEM: Allowing someone to die, mercy death, and mercy killing, Euthanasia: A 24-year-old man named Robert who has a wife and child is paralyzed from the neck down in a motorcycle accident. He has always been very active and hates the idea of being paralyzed. He also is in a great deal of pain, and he has asked his doctors and other members of his family to "put him out of his misery." After several days of such pleading, his brother comes into Robert's hospital ward and asks him if he is sure he still wants to be put out of his misery. Robert says yes and pleads with his brother to kill him. The brother kisses and blesses Robert, then takes out a gun and shoots him, killing him instantly. The brother later is tried for murder and acquitted by reason of temporary insanity. Was what Robert's brother did moral? Do you think he should have been brought to trial at all? Do you think he should have been acquitted? Would you do the same for a loved one if you were asked? THE DISCUSSION: In my opinion, the most dubious part about the case would be the part on Robert pleading with his brother, asking his brother to kill him. This could be his brother's own account of the incident and may or may not have been an actual plea by Robert. 1) With the assumption that Robert indeed pleaded with his brother to kill him, an ethical analysis as such could be derived: That Robert's brother was only respecting Robert's choice and killed him because he wanted to relieve him from his misery. This could be argued to be ethical using a teleological framework where the focus is on the end result and the consequences that the action entails. Here, although the act of killing per se may be wrong and illegal, Robert was relieved of his pain and suffering. 2) With an assumption that Robert did not plead with his brother to kill him and that it was his brother's own decision to relieve Robert of all suffering: In this case, the b
  •  
    I find euthanasia to be a very interesting ethical dilemma. Even I myself am caught in the middle. Euthanasia has been termed 'mercy killing' and even 'happy death'. Others may simply term it 'evil'. Is it right to end someone's life even when he or she pleads you to do so? In the first place, is it even right to commit suicide? Once someone pulls off the main support that's keeping the person alive, such as the feeding tube, there is no turning back. Hmm... Come to think of it, technology is kind of unethical by being made available, for in the past, when someone was dying, they had the right to die naturally. Now, scientific technology is 'forcing' us to stay alive and cling on to a life that may be deemed worthless if we were standing outside our bodies looking at our comatose selves. Then again, this may just be MY personal standpoint. But I have to argue, who gave technology the right to make me a worthless vegetable! (and here I am, attaching a value/judgement onto an immobile human being..) Hence, being incompetent in making decisions for my unconscious self (or perhaps even brain dead), who should take responsibility for my life, for my existence? And on what basis are they allowed to help me out? Taking the other side of the argument, against euthanasia, we can say that the act of ending someone else's life is the act of destroying societal respect for life. Based on the utilitarian perspective, we are not thinking of the overall beneficence for society and are disregarding the moral considerations encompassed within the state's interest to preserve the sanctity of all life. It has been said that life in itself takes priority over all other values. We should let the person live so as to give him/her a chance to wake up or hope for recovery (think comatose patients). But then again we can also argue that life is not the top of the hierarchy! A life without rights is as if not living a life at all? By removing the patient
  •  
    as a human being, you supposedly have a right to live, whether you are mobile or immobile. however, i think that, in the case of euthanasia, you 'give up' your rights when you "show" that you are no longer able to serve the pre-requisites of having the right. for example, if "living" rights equate to you being able to talk, walk, etc., then obviously the opposite means you are no longer able to perform up to the expectations of that right. then again, it is very subjective as to who gets to make that criteria!
  •  
    hmm interesting.. however, a question i have is who and when can this "right" be "given up"? when i am a victim in a car accident, and i lost the ability to breathe, walk and may need months to recover. i am unconscious and the doctor is unable to determine when am i gonna regain consciousness. when should my parents decide i can no longer be able to have any living rights? and taking elaine's point into consideration, is committing suicide even 'right'? if it is legally not right, when i ask someone to take my life and wrote a letter that it was cus i wanted to die, does that make it committing suicide only in the hands of others?
  •  
    Similarly, I question the 'rights' that you have to 'give up' when you no longer 'serve the pre-requisites of having the right'. If living rights mean being able to talk and walk, then where does that leave infants? Where does it leave people who may be handicapped? Have they lost their rights to living?
Weiye Loh

Skepticblog » Flaws in Creationist Logic - 0 views

  • making a false analogy here by confusing the origin of life with the later evolution of life. The watch analogy was specifically offered to say that something which is complex and displays design must have been created and designed by a creator. Therefore, since we see complexity and design in life it too must have had a creator. But all the life that we know – that life which is being pointed to as complex and designed – is the result of a process (evolution) that has worked over billions of years. Life can grow, reproduce, and evolve. Watches cannot – so it is not a valid analogy.
  • Life did emerge from non-living matter, but that is irrelevant to the point. There was likely a process of chemical evolution – but still the non-living precursors to life were just chemicals, they did not display the design or complexity apparent in a watch. Ankur’s attempt to rescue this false analogy fails. And before someone has a chance to point it out – yes, I said that life displays design. It displays bottom-up evolutionary design, not top-down intelligent design. This refers to another fallacy of creationists – the assumption that all design is top down. But nature demonstrates that this is a false assumption.
  • An increase in variation is an increase in information – it takes more information to describe the greater variety. By any actual definition of information – variation increases information. Also, as I argued, when you have gene duplication you are physically increasing the number of information carrying units – that is an increase in information. There is simply no way to avoid the mountain of genetic evidence that genetic information has increased over evolutionary time through evolutionary processes.
  •  
    FLAWS IN CREATIONIST LOGIC
Weiye Loh

Rationally Speaking: Human, know thy place! - 0 views

  • I kicked off a recent episode of the Rationally Speaking podcast on the topic of transhumanism by defining it as “the idea that we should be pursuing science and technology to improve the human condition, modifying our bodies and our minds to make us smarter, healthier, happier, and potentially longer-lived.”
  • Massimo understandably expressed some skepticism about why there needs to be a transhumanist movement at all, given how incontestable their mission statement seems to be. As he rhetorically asked, “Is transhumanism more than just the idea that we should be using technologies to improve the human condition? Because that seems a pretty uncontroversial point.” Later in the episode, referring to things such as radical life extension and modifications of our minds and genomes, Massimo said, “I don't think these are things that one can necessarily have objections to in principle.”
  • There are a surprising number of people whose reaction, when they are presented with the possibility of making humanity much healthier, smarter and longer-lived, is not “That would be great,” nor “That would be great, but it's infeasible,” nor even “That would be great, but it's too risky.” Their reaction is, “That would be terrible.”
  • The people with this attitude aren't just fringe fundamentalists who are fearful of messing with God's Plan. Many of them are prestigious professors and authors whose arguments make no mention of religion. One of the most prominent examples is political theorist Francis Fukuyama, author of End of History, who published a book in 2003 called “Our Posthuman Future: Consequences of the Biotechnology Revolution.” In it he argues that we will lose our “essential” humanity by enhancing ourselves, and that the result will be a loss of respect for “human dignity” and a collapse of morality.
  • Fukuyama's reasoning represents a prominent strain of thought about human enhancement, and one that I find doubly fallacious. (Fukuyama is aware of the following criticisms, but neither I nor other reviewers were impressed by his attempt to defend himself against them.) The idea that the status quo represents some “essential” quality of humanity collapses when you zoom out and look at the steady change in the human condition over previous millennia. Our ancestors were less knowledgeable, more tribalistic, less healthy, shorter-lived; would Fukuyama have argued for the preservation of all those qualities on the grounds that, in their respective time, they constituted an “essential human nature”? And even if there were such a thing as a persistent “human nature,” why is it necessarily worth preserving? In other words, I would argue that Fukuyama is committing both the fallacy of essentialism (there exists a distinct thing that is “human nature”) and the appeal to nature (the way things naturally are is how they ought to be).
  • Writer Bill McKibben, who was called “probably the nation's leading environmentalist” by the Boston Globe this year, and “the world's best green journalist” by Time magazine, published a book in 2003 called “Enough: Staying Human in an Engineered Age.” In it he writes, “That is the choice... one that no human should have to make... To be launched into a future without bounds, where meaning may evaporate.” McKibben concludes that it is likely that “meaning and pain, meaning and transience are inextricably intertwined.” Or as one blogger tartly paraphrased: “If we all live long healthy happy lives, Bill’s favorite poetry will become obsolete.”
  • President George W. Bush's Council on Bioethics, which advised him from 2001-2009, was steeped in it. Harvard professor of political philosophy Michael J. Sandel served on the Council from 2002-2005 and penned an article in the Atlantic Monthly called “The Case Against Perfection,” in which he objected to genetic engineering on the grounds that, basically, it’s uppity. He argues that genetic engineering is “the ultimate expression of our resolve to see ourselves astride the world, the masters of our nature.” Better we should be bowing in submission than standing in mastery, Sandel feels. Mastery “threatens to banish our appreciation of life as a gift,” he warns, and submitting to forces outside our control “restrains our tendency toward hubris.”
  • If you like Sandel's “It's uppity” argument against human enhancement, you'll love his fellow Councilmember Dr. William Hurlbut's argument against life extension: “It's unmanly.” Hurlbut's exact words, delivered in a 2007 debate with Aubrey de Grey: “I actually find a preoccupation with anti-aging technologies to be, I think, somewhat spiritually immature and unmanly... I’m inclined to think that there’s something profound about aging and death.”
  • And Council chairman Dr. Leon Kass, a professor of bioethics from the University of Chicago who served from 2001-2005, was arguably the worst of all. Like McKibben, Kass has frequently argued against radical life extension on the grounds that life's transience is central to its meaningfulness. “Could the beauty of flowers depend on the fact that they will soon wither?” he once asked. “How deeply could one deathless ‘human’ being love another?”
  • Kass has also argued against human enhancements on the same grounds as Fukuyama, that we shouldn't deviate from our proper nature as human beings. “To turn a man into a cockroach— as we don’t need Kafka to show us —would be dehumanizing. To try to turn a man into more than a man might be so as well,” he said. And Kass completes the anti-transhumanist triad (it robs life of meaning; it's dehumanizing; it's hubris) by echoing Sandel's call for humility and gratitude, urging, “We need a particular regard and respect for the special gift that is our own given nature.”
  • By now you may have noticed a familiar ring to a lot of this language. The idea that it's virtuous to suffer, and to humbly surrender control of your own fate, is a cornerstone of Christian morality.
  • it's fairly representative of standard Christian tropes: surrendering to God, submitting to God, trusting that God has good reasons for your suffering.
  • I suppose I can understand that if you believe in an all-powerful entity who will become irate if he thinks you are ungrateful for anything, then this kind of groveling might seem like a smart strategic move. But what I can't understand is adopting these same attitudes in the absence of any religious context. When secular people chastise each other for the “hubris” of trying to improve the “gift” of life they've received, I want to ask them: just who, exactly, are you groveling to? Who, exactly, are you afraid of affronting if you dare to reach for better things?
  • This is why transhumanism is most needed, from my perspective – to counter the astoundingly widespread attitude that suffering and 80-year-lifespans are good things that are worth preserving. That attitude may make sense conditional on certain peculiarly masochistic theologies, but the rest of us have no need to defer to it. It also may have been a comforting thing to tell ourselves back when we had no hope of remedying our situation, but that's not necessarily the case anymore.
  • I think there is a separation between Transhumanism and what Massimo is referring to. Things like robotic arms and the like come from trying to deal with a specific defect and are thus separate from Transhumanism. I would define transhumanism the same way you would (the achievement of a better human), but I would exclude the inventions of many life-altering devices from transhumanism. If we could invent a device that just made you smarter, then indeed that would be transhumanism, but if we invented a device that could enable someone who was mentally challenged to be normal, I would define this as modern medicine. I just want to make sure we separate advances in modern medicine from transhumanism. Modern medicine being the one that advances to deal with specific medical issues to improve quality of life (usually to restore it to normal conditions) and transhumanism being the one that can advance every single human (perhaps equally?).
    • Weiye Loh
       
      Assumes that "normal conditions" exist. 
  • I agree with all your points about why the arguments against transhumanism and for suffering are ridiculous. That being said, when I first heard about the ideas of Transhumanism, after the initial excitement wore off (since I'm a big tech nerd), my reaction was more of less the same as Massimo's. I don't particularly see the need for a philosophical movement for this.
  • if people believe that suffering is something God ordained for us, you're not going to convince them otherwise with philosophical arguments any more than you'll convince them there's no God at all. If the technologies do develop, acceptance of them will come as their use becomes more prevalent, not with arguments.
  •  
    Human, know thy place!
Weiye Loh

Skepticblog » A Creationist Challenge - 0 views

  • The commenter starts with some ad hominems, asserting that my post is biased and emotional. They provide no evidence or argument to support this assertion. And of course they don’t even attempt to counter any of the arguments I laid out. They then follow up with an argument from authority – he can link to a PhD creationist – so there.
  • The article that the commenter links to is by Henry M. Morris, founder for the Institute for Creation Research (ICR) – a young-earth creationist organization. Morris was (he died in 2006 following a stroke) a PhD – in civil engineering. This point is irrelevant to his actual arguments. I bring it up only to put the commenter’s argument from authority into perspective. No disrespect to engineers – but they are not biologists. They have no expertise relevant to the question of evolution – no more than my MD. So let’s stick to the arguments themselves.
  • The article by Morris is an overview of so-called Creation Science, of which Morris was a major architect. The arguments he presents are all old creationist canards, long deconstructed by scientists. In fact I address many of them in my original refutation. Creationists generally are not very original – they recycle old arguments endlessly, regardless of how many times they have been destroyed.
  • Morris also makes heavy use of the “taking a quote out of context” strategy favored by creationists. His quotes are often from secondary sources and are incomplete.
  • A more scholarly (i.e. intellectually honest) approach would be to cite actual evidence to support a point. If you are going to cite an authority, then make sure the quote is relevant, in context, and complete.
  • And even better, cite a number of sources to show that the opinion is representative. Rather we get single, partial, and often outdated quotes without context.
  • (nature is not, it turns out, cleanly divided into “kinds”, which have no operational definition). He also repeats this canard: Such variation is often called microevolution, and these minor horizontal (or downward) changes occur fairly often, but such changes are not true “vertical” evolution. This is the microevolution/macroevolution false dichotomy. It is only “often called” this by creationists – not by actual evolutionary scientists. There is no theoretical or empirical division between macro and micro evolution. There is just evolution, which can result in the full spectrum of change from minor tweaks to major changes.
  • Morris wonders why there are no “dats” – dog-cat transitional species. He misses the hierarchical nature of evolution. As evolution proceeds, and creatures develop a greater and greater evolutionary history behind them, they increasingly are committed to their body plan. This results in a nested hierarchy of groups – which is reflected in taxonomy (the naming scheme of living things).
  • once our distant ancestors developed the basic body plan of chordates, they were committed to that body plan. Subsequent evolution resulted in variations on that plan, each of which then developed further variations, etc. But evolution cannot go backward, undo evolutionary changes and then proceed down a different path. Once an evolutionary line has developed into a dog, evolution can produce variations on the dog, but it cannot go backwards and produce a cat.
  • Stephen Jay Gould described this distinction as the difference between disparity and diversity. Disparity (the degree of morphological difference) actually decreases over evolutionary time, as lineages go extinct and the surviving lineages are committed to fewer and fewer basic body plans. Meanwhile, diversity (the number of variations on a body plan) within groups tends to increase over time.
  • the kind of evolutionary changes that were happening in the past, when species were relatively undifferentiated (compared to contemporary species) is indeed not happening today. Modern multi-cellular life has 600 million years of evolutionary history constraining its future evolution – which was not true of species at the base of the evolutionary tree. But modern species are indeed still evolving.
  • Here is a list of research documenting observed instances of speciation. The list is from 1995, and there are more recent examples to add to the list. Here are some more. And here is a good list with references of more recent cases.
  • Next Morris tries to convince the reader that there is no evidence for evolution in the past, focusing on the fossil record. He repeats the false claim (again, which I already dealt with) that there are no transitional fossils: Even those who believe in rapid evolution recognize that a considerable number of generations would be required for one distinct “kind” to evolve into another more complex kind. There ought, therefore, to be a considerable number of true transitional structures preserved in the fossils — after all, there are billions of non-transitional structures there! But (with the exception of a few very doubtful creatures such as the controversial feathered dinosaurs and the alleged walking whales), they are not there.
  • I deal with this question at length here, pointing out that there are numerous transitional fossils for the evolution of terrestrial vertebrates, mammals, whales, birds, turtles, and yes – humans from ape ancestors. There are many more examples, these are just some of my favorites.
  • Much of what follows (as you can see it takes far more space to correct the lies and distortions of Morris than it did to create them) is classic denialism – misinterpreting the state of the science, and confusing lack of information about the details of evolution with lack of confidence in the fact of evolution. Here are some examples – he quotes Niles Eldredge: “It is a simple ineluctable truth that virtually all members of a biota remain basically stable, with minor fluctuations, throughout their durations. . . .” So how do evolutionists arrive at their evolutionary trees from fossils of organisms which didn’t change during their durations? Beware the “….” – that means that meaningful parts of the quote are being omitted. I happen to have the book (The Pattern of Evolution) from which Morris mined that particular quote. Here’s the rest of it: “(Remember, by “biota” we mean the commonly preserved plants and animals of a particular geological interval, which occupy regions often as large as Roger Tory Peterson’s “eastern” region of North American birds.) And when these systems change – when the older species disappear, and new ones take their place – the change happens relatively abruptly and in lockstep fashion.”
  • Eldredge was one of the authors (with Gould) of punctuated equilibrium theory. This states that, if you look at the fossil record, what we see are species emerging, persisting with little change for a while, and then disappearing from the fossil record. They theorize that most species most of the time are at equilibrium with their environment, and so do not change much. But these periods of equilibrium are punctuated by disequilibrium – periods of change when species will have to migrate, evolve, or go extinct.
  • This does not mean that speciation does not take place. And if you look at the fossil record we see a pattern of descendant species emerging from ancestor species over time – in a nice evolutionary pattern. Morris gives a complete misrepresentation of Eldredge’s point – once again we see intellectual dishonesty in his methods of an astounding degree.
  • Regarding the atheism = religion comment, it reminds me of a great analogy that I first heard on Twitter from Evil Eye (paraphrasing): “saying atheism is a religion is like saying ‘not collecting stamps’ is a hobby.”
  • Morris next tackles the genetic evidence, writing: More often is the argument used that similar DNA structures in two different organisms proves common evolutionary ancestry. Neither argument is valid. There is no reason whatever why the Creator could not or would not use the same type of genetic code based on DNA for all His created life forms. This is evidence for intelligent design and creation, not evolution.
  • Here is an excellent summary of the multiple lines of molecular evidence for evolution. Basically, if we look at the sequence of DNA, the variations in trinucleotide codes for amino acids, and amino acids for proteins, and transposons within DNA we see a pattern that can only be explained by evolution (or a mischievous god who chose, for some reason, to make life look exactly as if it had evolved – a non-falsifiable notion).
  • The genetic code is essentially comprised of four letters (ACGT for DNA), and every triplet of three letters equates to a specific amino acid. There are 64 (4^3) possible three letter combinations, and 20 amino acids. A few combinations are used for housekeeping, like a code to indicate where a gene stops, but the rest code for amino acids. There are more combinations than amino acids, so most amino acids are coded for by multiple combinations. This means that a mutation that results in a one-letter change might alter from one code for a particular amino acid to another code for the same amino acid. This is called a silent mutation because it does not result in any change in the resulting protein.
  • It also means that there are very many possible codes for any individual protein. The question is – which codes out of the gazillions of possible codes do we find for each type of protein in different species. If each “kind” were created separately there would not need to be any relationship. Each kind could have its own variation, or they could all be identical if they were essentially copied (plus any mutations accruing since creation, which would be minimal). But if life evolved then we would expect that the exact sequence of DNA code would be similar in related species, but progressively different (through silent mutations) over evolutionary time.
  • This is precisely what we find – in every protein we have examined. This pattern is necessary if evolution were true. It cannot be explained by random chance (the probability is absurdly tiny – essentially zero). And it makes no sense from a creationist perspective. This same pattern (a branching hierarchy) emerges when we look at amino acid substitutions in proteins and other aspects of the genetic code.
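The codon arithmetic in the annotations above (four bases read in triplets, 64 possible codons for only 20 amino acids, hence silent mutations) can be sketched in a few lines of Python. The tiny codon table and the `is_silent` helper below are illustrative toys containing a handful of real entries from the standard genetic code, not a complete translation table.

```python
from itertools import product

# Four DNA bases, read in triplets (codons): 4**3 = 64 combinations,
# which map onto only 20 amino acids plus stop signals, so the code is
# redundant and many single-letter mutations are "silent".
bases = "ACGT"
codons = ["".join(p) for p in product(bases, repeat=3)]
print(len(codons))  # 64

# Toy excerpt of the standard genetic code (DNA letters): all four
# third-position variants of GG* encode glycine.
toy_table = {"GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
             "GAT": "Asp", "GAC": "Asp"}

def is_silent(before, after, table=toy_table):
    """A point mutation is silent if both codons encode the same amino acid."""
    return table[before] == table[after]

print(is_silent("GGT", "GGC"))  # True: different DNA, same protein
print(is_silent("GAT", "GGT"))  # False: Asp -> Gly changes the protein
```

Comparing which of the many synonymous codings different species actually use for the same protein is the branching-hierarchy test described in the annotation above.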
  • Morris goes for the second law of thermodynamics again – in the exact way that I already addressed. He responds to scientists correctly pointing out that the Earth is an open system, by writing: This naive response to the entropy law is typical of evolutionary dissimulation. While it is true that local order can increase in an open system if certain conditions are met, the fact is that evolution does not meet those conditions. Simply saying that the earth is open to the energy from the sun says nothing about how that raw solar heat is converted into increased complexity in any system, open or closed. The fact is that the best known and most fundamental equation of thermodynamics says that the influx of heat into an open system will increase the entropy of that system, not decrease it. All known cases of decreased entropy (or increased organization) in open systems involve a guiding program of some sort and one or more energy conversion mechanisms.
  • Energy has to be transformed into a usable form in order to do the work necessary to decrease entropy. That’s right. That work is done by life. Plants take solar energy (again – I’m not sure what “raw solar heat” means) and convert it into food. That food fuels the processes of life, which include development and reproduction. Evolution emerges from those processes- therefore the conditions that Morris speaks of are met.
  • But Morris next makes a very confused argument: Evolution has neither of these. Mutations are not “organizing” mechanisms, but disorganizing (in accord with the second law). They are commonly harmful, sometimes neutral, but never beneficial (at least as far as observed mutations are concerned). Natural selection cannot generate order, but can only “sieve out” the disorganizing mutations presented to it, thereby conserving the existing order, but never generating new order.
  • The notion that evolution (as if it’s a thing) needs to use energy is hopelessly confused. Evolution is a process that emerges from the system of life – and life certainly can use solar energy to decrease its entropy, and by extension the entropy of the biosphere. Morris slips into what is often presented as an information argument. (Yet again – already dealt with. The pattern here is that we are seeing a shuffling around of the same tired creationist arguments.) First, it is not true that most mutations are harmful. Many are silent, and many of those that are not silent are not harmful. They may be neutral, they may be a mixed blessing, and their relative benefit vs harm is likely to be situational. They may be fatal. And they also may be simply beneficial.
  • Morris finishes with a long rambling argument that evolution is religion. Evolution is promoted by its practitioners as more than mere science. Evolution is promulgated as an ideology, a secular religion — a full-fledged alternative to Christianity, with meaning and morality . . . . Evolution is a religion. This was true of evolution in the beginning, and it is true of evolution still today. Morris ties evolution to atheism, which, he argues, makes it a religion. This assumes, of course, that atheism is a religion. That depends on how you define atheism and how you define religion – but it is mostly wrong. Atheism is a lack of belief in one particular supernatural claim – that does not qualify it as a religion.
  • But mutations are not “disorganizing” – that does not even make sense. It seems to be based on a purely creationist notion that species are in some privileged perfect state, and any mutation can only take them farther from that perfection. For those who actually understand biology, life is a kluge of compromises and variation. Mutations are mostly lateral moves from one chaotic state to another. They are not directional. But they do provide raw material, variation, for natural selection. Natural selection cannot generate variation, but it can select among that variation to provide differential survival. This is an old game played by creationists – pointing out that mutations are not selective, and that natural selection by itself is not creative (does not increase variation). Both points are true but irrelevant, because the combination is both creative and selective: mutations increase variation and information, and selection acting on that variation results in the differential survival of better adapted variants.
  •  
    One of my earlier posts on SkepticBlog was Ten Major Flaws in Evolution: A Refutation, published two years ago. Occasionally a creationist shows up to snipe at the post, like this one: “i read this and found it funny. It supposedly gives a scientific refutation, but it is full of more bias than fox news, and a lot of emotion as well. here's a scientific case by an actual scientists, you know, one with a ph.D, and he uses statements by some of your favorite evolutionary scientists to insist evolution doesn't exist. i challenge you to write a refutation on this one. http://www.icr.org/home/resources/resources_tracts_scientificcaseagainstevolution/” Challenge accepted.
Weiye Loh

Rationally Speaking: Is modern moral philosophy still in thrall to religion? - 0 views

  • Recently I re-read Richard Taylor’s An Introduction to Virtue Ethics, a classic published by Prometheus
  • Taylor compares virtue ethics to the other two major approaches to moral philosophy: utilitarianism (a la John Stuart Mill) and deontology (a la Immanuel Kant). Utilitarianism, of course, is roughly the idea that ethics has to do with maximizing pleasure and minimizing pain; deontology is the idea that reason can tell us what we ought to do from first principles, as in Kant’s categorical imperative (e.g., something is right if you can agree that it could be elevated to a universally acceptable maxim).
  • Taylor argues that utilitarianism and deontology — despite being wildly different in a variety of respects — share one common feature: both philosophies assume that there is such a thing as moral right and wrong, and a duty to do right and avoid wrong. But, he says, on the face of it this is nonsensical. Duty isn’t something one can have in the abstract, duty is toward a law or a lawgiver, which raises the question of what could arguably provide us with a universal moral law, or who the lawgiver could possibly be.
  • His answer is that both utilitarianism and deontology inherited the ideas of right, wrong and duty from Christianity, but endeavored to do without Christianity’s own answers to those questions: the law is given by God and the duty is toward Him. Taylor says that Mill, Kant and the like simply absorbed the Christian concept of morality while rejecting its logical foundation (such as it was). As a result, utilitarians and deontologists alike keep talking about the right thing to do, or the good as if those concepts still make sense once we move to a secular worldview. Utilitarians substituted pain and pleasure for wrong and right respectively, and Kant thought that pure reason can arrive at moral universals. But of course neither utilitarians nor deontologists ever give us a reason why it would be irrational to simply decline to pursue actions that increase global pleasure and diminish global pain, or why it would be irrational for someone not to find the categorical imperative particularly compelling.
  • The situation — again according to Taylor — is dramatically different for virtue ethics. Yes, there too we find concepts like right and wrong and duty. But, for the ancient Greeks they had completely different meanings, which made perfect sense then and now, if we are not misled by the use of those words in a different context. For the Greeks, an action was right if it was approved by one’s society, wrong if it wasn’t, and duty was to one’s polis. And they understood perfectly well that what was right (or wrong) in Athens may or may not be right (or wrong) in Sparta. And that an Athenian had a duty to Athens, but not to Sparta, and vice versa for a Spartan.
  • But wait a minute. Does that mean that Taylor is saying that virtue ethics was founded on moral relativism? That would be an extraordinary claim indeed, and he does not, in fact, make it. His point is a bit more subtle. He suggests that for the ancient Greeks ethics was not (principally) about right, wrong and duty. It was about happiness, understood in the broad sense of eudaimonia, the good or fulfilling life. Aristotle in particular wrote in his Ethics about both aspects: the practical ethics of one’s duty to one’s polis, and the universal (for human beings) concept of ethics as the pursuit of the good life. And make no mistake about it: for Aristotle the first aspect was relatively trivial and understood by everyone, it was the second one that represented the real challenge for the philosopher.
  • For instance, the Ethics is famous for Aristotle’s list of the virtues (see Table), and his idea that the right thing to do is to steer a middle course between extreme behaviors. But this part of his work, according to Taylor, refers only to the practical ways of being a good Athenian, not to the universal pursuit of eudaimonia.

    Vice of Deficiency    | Virtuous Mean     | Vice of Excess
    ----------------------|-------------------|---------------
    Cowardice             | Courage           | Rashness
    Insensibility         | Temperance        | Intemperance
    Illiberality          | Liberality        | Prodigality
    Pettiness             | Munificence       | Vulgarity
    Humble-mindedness     | High-mindedness   | Vaingloriness
    Want of Ambition      | Right Ambition    | Over-ambition
    Spiritlessness        | Good Temper       | Irascibility
    Surliness             | Friendly Civility | Obsequiousness
    Ironical Depreciation | Sincerity         | Boastfulness
    Boorishness           | Wittiness         | Buffoonery
  • How, then, is one to embark on the more difficult task of figuring out how to live a good life? For Aristotle eudaimonia meant the best kind of existence that a human being can achieve, which in turn means that we need to ask what it is that makes humans different from all other species, because it is the pursuit of excellence in that something that provides for a eudaimonic life.
  • Now, Plato - writing before Aristotle - ended up construing the good life somewhat narrowly and in a self-serving fashion. He reckoned that the thing that distinguishes humanity from the rest of the biological world is our ability to use reason, so that is what we should be pursuing as our highest goal in life. And of course nobody is better equipped than a philosopher for such an enterprise... Which reminds me of Bertrand Russell’s quip that “A process which led from the amoeba to man appeared to the philosophers to be obviously a progress, though whether the amoeba would agree with this opinion is not known.”
  • But Aristotle's conception of "reason" was significantly broader, and here is where Taylor’s own update of virtue ethics begins to shine, particularly in Chapter 16 of the book, aptly entitled “Happiness.” Taylor argues that the proper way to understand virtue ethics is as the quest for the use of intelligence in the broadest possible sense, in the sense of creativity applied to all walks of life. He says: “Creative intelligence is exhibited by a dancer, by athletes, by a chess player, and indeed in virtually any activity guided by intelligence [including — but certainly not limited to — philosophy].” He continues: “The exercise of skill in a profession, or in business, or even in such things as gardening and farming, or the rearing of a beautiful family, all such things are displays of creative intelligence.”
  • what we have now is a sharp distinction between utilitarianism and deontology on the one hand and virtue ethics on the other, where the first two are (mistakenly, in Taylor’s assessment) concerned with the impossible question of what is right or wrong, and what our duties are — questions inherited from religion but that in fact make no sense outside of a religious framework. Virtue ethics, instead, focuses on the two things that really matter and to which we can find answers: the practical pursuit of a life within our polis, and the lifelong quest of eudaimonia understood as the best exercise of our creative faculties
  • > So if one's profession is that of assassin or torturer would being the best that you can be still be your duty and eudaimonic? And what about those poor blighters who end up with an ugly family? <
    Aristotle's philosophy is very much concerned with virtue, and being an assassin or a torturer is not a virtue, so the concept of a eudaimonic life for those characters is oxymoronic. As for ending up in an "ugly" family, Aristotle did write that eudaimonia is in part the result of luck, because it is affected by circumstances.
  • > So to the title question of this post: "Is modern moral philosophy still in thrall to religion?" one should say: Yes, for some residual forms of philosophy and for some philosophers <
    That misses Taylor's contention - which I find intriguing, though I have to give it more thought - that *all* modern moral philosophy, except virtue ethics, is in thrall to religion, without realizing it.
  • “The exercise of skill in a profession, or in business, or even in such things as gardening and farming, or the rearing of a beautiful family, all such things are displays of creative intelligence.” So if one's profession is that of assassin or torturer would being the best that you can be still be your duty and eudaimonic? And what about those poor blighters who end up with an ugly family?
Weiye Loh

Titans of science: David Attenborough meets Richard Dawkins | Science | The Guardian - 0 views

  • What is the one bit of science from your field that you think everyone should know?
    David Attenborough: The unity of life.
    Richard Dawkins: The unity of life that comes about through evolution, since we're all descended from a single common ancestor. It's almost too good to be true, that on one planet this extraordinary complexity of life should have come about by what is pretty much an intelligible process. And we're the only species capable of understanding it.
  • RD: I know you're working on a programme about Cambrian and pre-Cambrian fossils, David. A lot of people might think, "These are very old animals, at the beginning of evolution; they weren't very good at what they did." I suspect that isn't the case?
    DA: They were just as good, but as generalists, most were ousted from the competition.
    RD: So it probably is true there's a progressive element to evolution in the short term but not in the long term – that when a lineage branches out, it gets better for about five million years but not 500 million years. You wouldn't see progressive improvement over that kind of time scale.
    DA: No, things get more and more specialised. Not necessarily better.
    RD: The "camera" eyes of any modern animal would be better than what had come before.
    DA: Certainly... but they don't elaborate beyond function. When I listen to a soprano sing a Handel aria with an astonishing coloratura from that particular larynx, I say to myself, there has to be a biological reason that was useful at some stage. The larynx of a human being did not evolve without having some function. And the only function I can see is sexual attraction.
    RD: Sexual selection is important and probably underrated.
    DA: What I like to think is that if I think the male bird of paradise is beautiful, my appreciation of it is precisely the same as a female bird of paradise.
    • Weiye Loh
       
      Is survivability really all about sex and the reproduction of future generations? 
  • People say Richard Feynman had one of these extraordinary minds that could grapple with ideas of which I have no concept. And you hear all the ancillary bits – like he was a good bongo player – that make him human. So I admire this man who could not only deal with string theory but also play the bongos. But he is beyond me. I have no idea what he was talking of.
  • RD: There does seem to be a sense in which physics has gone beyond what human intuition can understand. We shouldn't be too surprised about that because we're evolved to understand things that move at a medium pace at a medium scale. We can't cope with the very tiny scale of quantum physics or the very large scale of relativity.
  • DA: A physicist will tell me that this armchair is made of vibrations and that it's not really here at all. But when Samuel Johnson was asked to prove the material existence of reality, he just went up to a big stone and kicked it. I'm with him.
  • RD: It's intriguing that the chair is mostly empty space and the thing that stops you going through it is vibrations or energy fields. But it's also fascinating that, because we're animals that evolved to survive, what solidity is to most of us is something you can't walk through.
  • the science of the future may be vastly different from the science of today, and you have to have the humility to admit when you don't know. But instead of filling that vacuum with goblins or spirits, I think you should say, "Science is working on it."
  • DA: Yes, there was a letter in the paper [about Stephen Hawking's comments on the nonexistence of God] saying, "It's absolutely clear that the function of the world is to declare the glory of God." I thought, what does that sentence mean?!
  • What is the most difficult ethical dilemma facing science today?
    DA: How far do you go to preserve individual human life?
    RD: That's a good one, yes.
    DA: I mean, what are we to do with the NHS? How can you put a value in pounds, shillings and pence on an individual's life? There was a case with a bowel cancer drug – if you gave that drug, which costs several thousand pounds, it continued life for six weeks on. How can you make that decision?
  •  
    Of mind and matter: David Attenborough meets Richard Dawkins We paired up Britain's most celebrated scientists to chat about the big issues: the unity of life, ethics, energy, Handel - and the joy of riding a snowmobile
Weiye Loh

Kevin Kelly and Steven Johnson on Where Ideas Come From | Magazine - 0 views

  • Say the word “inventor” and most people think of a solitary genius toiling in a basement. But two ambitious new books on the history of innovation—by Steven Johnson and Kevin Kelly, both longtime Wired contributors—argue that great discoveries typically spring not from individual minds but from the hive mind. In Where Good Ideas Come From: The Natural History of Innovation, Johnson draws on seven centuries of scientific and technological progress, from Gutenberg to GPS, to show what sorts of environments nurture ingenuity. He finds that great creative milieus, whether MIT or Los Alamos, New York City or the World Wide Web, are like coral reefs—teeming, diverse colonies of creators who interact with and influence one another.
  • Seven centuries are an eyeblink in the scope of Kelly’s book, What Technology Wants, which looks back over some 50,000 years of history and peers nearly that far into the future. His argument is similarly sweeping: Technology, Kelly believes, can be seen as a sort of autonomous life-form, with intrinsic goals toward which it gropes over the course of its long development. Those goals, he says, are much like the tendencies of biological life, which over time diversifies, specializes, and (eventually) becomes more sentient.
  • We share a fascination with the long history of simultaneous invention: cases where several people come up with the same idea at almost exactly the same time. Calculus, the electrical battery, the telephone, the steam engine, the radio—all these groundbreaking innovations were hit upon by multiple inventors working in parallel with no knowledge of one another.
  • It’s amazing that the myth of the lone genius has persisted for so long, since simultaneous invention has always been the norm, not the exception. Anthropologists have shown that the same inventions tended to crop up in prehistory at roughly similar times, in roughly the same order, among cultures on different continents that couldn’t possibly have contacted one another.
  • Also, there’s a related myth—that innovation comes primarily from the profit motive, from the competitive pressures of a market society. If you look at history, innovation doesn’t come just from giving people incentives; it comes from creating environments where their ideas can connect.
  • The musician Brian Eno invented a wonderful word to describe this phenomenon: scenius. We normally think of innovators as independent geniuses, but Eno’s point is that innovation comes from social scenes, from passionate and connected groups of people.
  • It turns out that the lone genius entrepreneur has always been a rarity—there’s far more innovation coming out of open, nonmarket networks than we tend to assume.
  • Really, we should think of ideas as connections, in our brains and among people. Ideas aren’t self-contained things; they’re more like ecologies and networks. They travel in clusters.
  • ideas are networks
  • In part, that’s because ideas that leap too far ahead are almost never implemented—they aren’t even valuable. People can absorb only one advance, one small hop, at a time. Gregor Mendel’s ideas about genetics, for example: He formulated them in 1865, but they were ignored for 35 years because they were too advanced. Nobody could incorporate them. Then, when the collective mind was ready and his idea was only one hop away, three different scientists independently rediscovered his work within roughly a year of one another.
  • Charles Babbage is another great case study. His “analytical engine,” which he started designing in the 1830s, was an incredibly detailed vision of what would become the modern computer, with a CPU, RAM, and so on. But it couldn’t possibly have been built at the time, and his ideas had to be rediscovered a hundred years later.
  • I think there are a lot of ideas today that are ahead of their time. Human cloning, autopilot cars, patent-free law—all are close technically but too many steps ahead culturally. Innovating is about more than just having the idea yourself; you also have to bring everyone else to where your idea is. And that becomes really difficult if you’re too many steps ahead.
  • The scientist Stuart Kauffman calls this the “adjacent possible.” At any given moment in evolution—of life, of natural systems, or of cultural systems—there’s a space of possibility that surrounds any current configuration of things. Change happens when you take that configuration and arrange it in a new way. But there are limits to how much you can change in a single move.
  • Which is why the great inventions are usually those that take the smallest possible step to unleash the most change. That was the difference between Tim Berners-Lee’s successful HTML code and Ted Nelson’s abortive Xanadu project. Both tried to jump into the same general space—a networked hypertext—but Tim’s approach did it with a dumb half-step, while Ted’s earlier, more elegant design required that everyone take five steps all at once.
  • Also, the steps have to be taken in the right order. You can’t invent the Internet and then the digital computer. This is true of life as well. The building blocks of DNA had to be in place before evolution could build more complex things. One of the key ideas I’ve gotten from you, by the way—when I read your book Out of Control in grad school—is this continuity between biological and technological systems.
  • technology is something that can give meaning to our lives, particularly in a secular world.
  • He had this bleak, soul-sucking vision of technology as an autonomous force for evil. You also present technology as a sort of autonomous force—as wanting something, over the long course of its evolution—but it’s a more balanced and ultimately positive vision, which I find much more appealing than the alternative.
  • As I started thinking about the history of technology, there did seem to be a sense in which, during any given period, lots of innovations were in the air, as it were. They came simultaneously. It appeared as if they wanted to happen. I should hasten to add that it’s not a conscious agency; it’s a lower form, something like the way an organism or bacterium can be said to have certain tendencies, certain trends, certain urges. But it’s an agency nevertheless.
  • technology wants increasing diversity—which is what I think also happens in biological systems, as the adjacent possible becomes larger with each innovation. As tech critics, I think we have to keep this in mind, because when you expand the diversity of a system, that leads to an increase in great things and an increase in crap.
  • the idea that the most creative environments allow for repeated failure.
  • And for wastes of time and resources. If you knew nothing about the Internet and were trying to figure it out from the data, you would reasonably conclude that it was designed for the transmission of spam and porn. And yet at the same time, there’s more amazing stuff available to us than ever before, thanks to the Internet.
  • To create something great, you need the means to make a lot of really bad crap. Another example is spectrum. One reason we have this great explosion of innovation in wireless right now is that the US deregulated spectrum. Before that, spectrum was something too precious to be wasted on silliness. But when you deregulate—and say, OK, now waste it—then you get Wi-Fi.
  • If we didn’t have genetic mutations, we wouldn’t have us. You need error to open the door to the adjacent possible.
  • image of the coral reef as a metaphor for where innovation comes from. So what, today, are some of the most reeflike places in the technological realm?
  • Twitter—not to see what people are having for breakfast, of course, but to see what people are talking about, the links to articles and posts that they’re passing along.
  • second example of an information coral reef, and maybe the less predictable one, is the university system. As much as we sometimes roll our eyes at the ivory-tower isolation of universities, they continue to serve as remarkable engines of innovation.
  • Life seems to gravitate toward these complex states where there’s just enough disorder to create new things. There’s a rate of mutation just high enough to let interesting new innovations happen, but not so many mutations that every new generation dies off immediately.
  • Technology is an extension of life. Both life and technology are faces of the same larger system.
  •  
    Kevin Kelly and Steven Johnson on Where Ideas Come From By Wired September 27, 2010  |  2:00 pm  |  Wired October 2010
Weiye Loh

How We Know by Freeman Dyson | The New York Review of Books - 0 views

  • Another example illustrating the central dogma is the French optical telegraph.
  • The telegraph was an optical communication system with stations consisting of large movable pointers mounted on the tops of sixty-foot towers. Each station was manned by an operator who could read a message transmitted by a neighboring station and transmit the same message to the next station in the transmission line.
  • The distance between neighbors was about seven miles. Along the transmission lines, optical messages in France could travel faster than drum messages in Africa. When Napoleon took charge of the French Republic in 1799, he ordered the completion of the optical telegraph system to link all the major cities of France from Calais and Paris to Toulon and onward to Milan. The telegraph became, as Claude Chappe had intended, an important instrument of national power. Napoleon made sure that it was not available to private users.
  • ...27 more annotations...
  • Unlike the drum language, which was based on spoken language, the optical telegraph was based on written French. Chappe invented an elaborate coding system to translate written messages into optical signals. Chappe had the opposite problem from the drummers. The drummers had a fast transmission system with ambiguous messages. They needed to slow down the transmission to make the messages unambiguous. Chappe had a painfully slow transmission system with redundant messages. The French language, like most alphabetic languages, is highly redundant, using many more letters than are needed to convey the meaning of a message. Chappe’s coding system allowed messages to be transmitted faster. Many common phrases and proper names were encoded by only two optical symbols, with a substantial gain in speed of transmission. The composer and the reader of the message had code books listing the message codes for eight thousand phrases and names. For Napoleon it was an advantage to have a code that was effectively cryptographic, keeping the content of the messages secret from citizens along the route.
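Chappe’s code book is, in effect, a lookup table from whole phrases to short fixed codes. A minimal sketch in Python (the phrases and two-symbol codes below are invented for illustration; they are not Chappe’s actual tables):

```python
# Codebook compression: map whole phrases to short fixed-length codes,
# as Chappe's tables mapped some eight thousand phrases and names to
# pairs of optical symbols. Entries here are hypothetical examples.
CODEBOOK = {
    "attack at dawn": (12, 47),   # phrase -> two optical symbols
    "hold position": (12, 48),
    "request supplies": (31, 5),
}
DECODE = {code: phrase for phrase, code in CODEBOOK.items()}

def encode(phrase):
    """Look up the two-symbol code for a known phrase."""
    return CODEBOOK[phrase]

def decode(code):
    """Recover the phrase from its two-symbol code."""
    return DECODE[code]

print(encode("attack at dawn"))   # -> (12, 47)
print(decode((31, 5)))            # -> request supplies
```

Without the code book, the mapping is opaque, which is why the scheme was effectively cryptographic as well as a compression gain.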
  • After these two historical examples of rapid communication in Africa and France, the rest of Gleick’s book is about the modern development of information technology.
  • The modern history is dominated by two Americans, Samuel Morse and Claude Shannon. Samuel Morse was the inventor of Morse Code. He was also one of the pioneers who built a telegraph system using electricity conducted through wires instead of optical pointers deployed on towers. Morse launched his electric telegraph in 1838 and perfected the code in 1844. His code used short and long pulses of electric current to represent letters of the alphabet.
  • Morse was ideologically at the opposite pole from Chappe. He was not interested in secrecy or in creating an instrument of government power. The Morse system was designed to be a profit-making enterprise, fast and cheap and available to everybody. At the beginning the price of a message was a quarter of a cent per letter. The most important users of the system were newspaper correspondents spreading news of local events to readers all over the world. Morse Code was simple enough that anyone could learn it. The system provided no secrecy to the users. If users wanted secrecy, they could invent their own secret codes and encipher their messages themselves. The price of a message in cipher was higher than the price of a message in plain text, because the telegraph operators could transcribe plain text faster. It was much easier to correct errors in plain text than in cipher.
  • Claude Shannon was the founding father of information theory. For a hundred years after the electric telegraph, other communication systems such as the telephone, radio, and television were invented and developed by engineers without any need for higher mathematics. Then Shannon supplied the theory to understand all of these systems together, defining information as an abstract quantity inherent in a telephone message or a television picture. Shannon brought higher mathematics into the game.
  • When Shannon was a boy growing up on a farm in Michigan, he built a homemade telegraph system using Morse Code. Messages were transmitted to friends on neighboring farms, using the barbed wire of their fences to conduct electric signals. When World War II began, Shannon became one of the pioneers of scientific cryptography, working on the high-level cryptographic telephone system that allowed Roosevelt and Churchill to talk to each other over a secure channel. Shannon’s friend Alan Turing was also working as a cryptographer at the same time, in the famous British Enigma project that successfully deciphered German military codes. The two pioneers met frequently when Turing visited New York in 1943, but they belonged to separate secret worlds and could not exchange ideas about cryptography.
  • In 1945 Shannon wrote a paper, “A Mathematical Theory of Cryptography,” which was stamped SECRET and never saw the light of day. He published in 1948 an expurgated version of the 1945 paper with the title “A Mathematical Theory of Communication.” The 1948 version appeared in the Bell System Technical Journal, the house journal of the Bell Telephone Laboratories, and became an instant classic. It is the founding document for the modern science of information. After Shannon, the technology of information raced ahead, with electronic computers, digital cameras, the Internet, and the World Wide Web.
  • According to Gleick, the impact of information on human affairs came in three installments: first the history, the thousands of years during which people created and exchanged information without the concept of measuring it; second the theory, first formulated by Shannon; third the flood, in which we now live
  • The event that made the flood plainly visible occurred in 1965, when Gordon Moore stated Moore’s Law. Moore was an electrical engineer, founder of the Intel Corporation, a company that manufactured components for computers and other electronic gadgets. His law said that the price of electronic components would decrease and their numbers would increase by a factor of two every eighteen months. This implied that the price would decrease and the numbers would increase by a factor of a hundred every decade. Moore’s prediction of continued growth has turned out to be astonishingly accurate during the forty-five years since he announced it. In these four and a half decades, the price has decreased and the numbers have increased by a factor of a billion, nine powers of ten. Nine powers of ten are enough to turn a trickle into a flood.
  • Gordon Moore was in the hardware business, making hardware components for electronic machines, and he stated his law as a law of growth for hardware. But the law applies also to the information that the hardware is designed to embody. The purpose of the hardware is to store and process information. The storage of information is called memory, and the processing of information is called computing. The consequence of Moore’s Law for information is that the price of memory and computing decreases and the available amount of memory and computing increases by a factor of a hundred every decade. The flood of hardware becomes a flood of information.
  • In 1949, one year after Shannon published the rules of information theory, he drew up a table of the various stores of memory that then existed. The biggest memory in his table was the US Library of Congress, which he estimated to contain one hundred trillion bits of information. That was at the time a fair guess at the sum total of recorded human knowledge. Today a memory disc drive storing that amount of information weighs a few pounds and can be bought for about a thousand dollars. Information, otherwise known as data, pours into memories of that size or larger, in government and business offices and scientific laboratories all over the world. Gleick quotes the computer scientist Jaron Lanier describing the effect of the flood: “It’s as if you kneel to plant the seed of a tree and it grows so fast that it swallows your whole town before you can even rise to your feet.”
  • On December 8, 2010, Gleick published on The New York Review’s blog an illuminating essay, “The Information Palace.” It was written too late to be included in his book. It describes the historical changes of meaning of the word “information,” as recorded in the latest quarterly online revision of the Oxford English Dictionary. The word first appears in 1386 in a parliamentary report with the meaning “denunciation.” The history ends with the modern usage, “information fatigue,” defined as “apathy, indifference or mental exhaustion arising from exposure to too much information.”
  • The consequences of the information flood are not all bad. One of the creative enterprises made possible by the flood is Wikipedia, started ten years ago by Jimmy Wales. Among my friends and acquaintances, everybody distrusts Wikipedia and everybody uses it. Distrust and productive use are not incompatible. Wikipedia is the ultimate open source repository of information. Everyone is free to read it and everyone is free to write it. It contains articles in 262 languages written by several million authors. The information that it contains is totally unreliable and surprisingly accurate. It is often unreliable because many of the authors are ignorant or careless. It is often accurate because the articles are edited and corrected by readers who are better informed than the authors
  • Jimmy Wales hoped when he started Wikipedia that the combination of enthusiastic volunteer writers with open source information technology would cause a revolution in human access to knowledge. The rate of growth of Wikipedia exceeded his wildest dreams. Within ten years it has become the biggest storehouse of information on the planet and the noisiest battleground of conflicting opinions. It illustrates Shannon’s law of reliable communication. Shannon’s law says that accurate transmission of information is possible in a communication system with a high level of noise. Even in the noisiest system, errors can be reliably corrected and accurate information transmitted, provided that the transmission is sufficiently redundant. That is, in a nutshell, how Wikipedia works.
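Shannon’s point can be illustrated with the crudest possible error-correcting scheme, a repetition code with majority voting — a toy stand-in for the far more efficient codes Shannon’s theory guarantees exist:

```python
from collections import Counter

# Repetition code: send each bit n times; the receiver takes a
# majority vote per block. Enough redundancy lets an accurate
# message survive a noisy channel, which is Shannon's point.
def encode(bits, n=5):
    return [b for b in bits for _ in range(n)]

def decode(received, n=5):
    out = []
    for i in range(0, len(received), n):
        block = received[i:i + n]
        out.append(Counter(block).most_common(1)[0][0])  # majority vote
    return out

message = [1, 0, 1, 1]
sent = encode(message)                     # 20 bits on the wire
sent[2] = 0; sent[7] = 1; sent[16] = 0     # channel noise: flip a few bits
print(decode(sent) == message)             # -> True: errors corrected
```

Wikipedia’s edit-and-correct cycle plays the role of the majority vote: many noisy contributions, redundantly checked, converge on accurate text.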
  • The information flood has also brought enormous benefits to science. The public has a distorted view of science, because children are taught in school that science is a collection of firmly established truths. In fact, science is not a collection of truths. It is a continuing exploration of mysteries. Wherever we go exploring in the world around us, we find mysteries. Our planet is covered by continents and oceans whose origin we cannot explain. Our atmosphere is constantly stirred by poorly understood disturbances that we call weather and climate. The visible matter in the universe is outweighed by a much larger quantity of dark invisible matter that we do not understand at all. The origin of life is a total mystery, and so is the existence of human consciousness. We have no clear idea how the electrical discharges occurring in nerve cells in our brains are connected with our feelings and desires and actions.
  • Even physics, the most exact and most firmly established branch of science, is still full of mysteries. We do not know how much of Shannon’s theory of information will remain valid when quantum devices replace classical electric circuits as the carriers of information. Quantum devices may be made of single atoms or microscopic magnetic circuits. All that we know for sure is that they can theoretically do certain jobs that are beyond the reach of classical devices. Quantum computing is still an unexplored mystery on the frontier of information theory. Science is the sum total of a great multitude of mysteries. It is an unending argument between a great multitude of voices. It resembles Wikipedia much more than it resembles the Encyclopaedia Britannica.
  • The rapid growth of the flood of information in the last ten years made Wikipedia possible, and the same flood made twenty-first-century science possible. Twenty-first-century science is dominated by huge stores of information that we call databases. The information flood has made it easy and cheap to build databases. One example of a twenty-first-century database is the collection of genome sequences of living creatures belonging to various species from microbes to humans. Each genome contains the complete genetic information that shaped the creature to which it belongs. The genome database is rapidly growing and is available for scientists all over the world to explore. Its origin can be traced to the year 1939, when Shannon wrote his Ph.D. thesis with the title “An Algebra for Theoretical Genetics.”
  • Shannon was then a graduate student in the mathematics department at MIT. He was only dimly aware of the possible physical embodiment of genetic information. The true physical embodiment of the genome is the double helix structure of DNA molecules, discovered by Francis Crick and James Watson fourteen years later. In 1939 Shannon understood that the basis of genetics must be information, and that the information must be coded in some abstract algebra independent of its physical embodiment. Without any knowledge of the double helix, he could not hope to guess the detailed structure of the genetic code. He could only imagine that in some distant future the genetic information would be decoded and collected in a giant database that would define the total diversity of living creatures. It took only sixty years for his dream to come true.
  • In the twentieth century, genomes of humans and other species were laboriously decoded and translated into sequences of letters in computer memories. The decoding and translation became cheaper and faster as time went on, the price decreasing and the speed increasing according to Moore’s Law. The first human genome took fifteen years to decode and cost about a billion dollars. Now a human genome can be decoded in a few weeks and costs a few thousand dollars. Around the year 2000, a turning point was reached, when it became cheaper to produce genetic information than to understand it. Now we can pass a piece of human DNA through a machine and rapidly read out the genetic information, but we cannot read out the meaning of the information. We shall not fully understand the information until we understand in detail the processes of embryonic development that the DNA orchestrated to make us what we are.
  • The explosive growth of information in our human society is a part of the slower growth of ordered structures in the evolution of life as a whole. Life has for billions of years been evolving with organisms and ecosystems embodying increasing amounts of information. The evolution of life is a part of the evolution of the universe, which also evolves with increasing amounts of information embodied in ordered structures, galaxies and stars and planetary systems. In the living and in the nonliving world, we see a growth of order, starting from the featureless and uniform gas of the early universe and producing the magnificent diversity of weird objects that we see in the sky and in the rain forest. Everywhere around us, wherever we look, we see evidence of increasing order and increasing information. The technology arising from Shannon’s discoveries is only a local acceleration of the natural growth of information.
  • Lord Kelvin, one of the leading physicists of that time, promoted the heat death dogma, predicting that the flow of heat from warmer to cooler objects will result in a decrease of temperature differences everywhere, until all temperatures ultimately become equal. Life needs temperature differences, to avoid being stifled by its waste heat. So life will disappear.
  • Thanks to the discoveries of astronomers in the twentieth century, we now know that the heat death is a myth. The heat death can never happen, and there is no paradox. The best popular account of the disappearance of the paradox is a chapter, “How Order Was Born of Chaos,” in the book Creation of the Universe, by Fang Lizhi and his wife Li Shuxian.2 Fang Lizhi is doubly famous as a leading Chinese astronomer and a leading political dissident. He is now pursuing his double career at the University of Arizona.
  • The belief in a heat death was based on an idea that I call the cooking rule. The cooking rule says that a piece of steak gets warmer when we put it on a hot grill. More generally, the rule says that any object gets warmer when it gains energy, and gets cooler when it loses energy. Humans have been cooking steaks for thousands of years, and nobody ever saw a steak get colder while cooking on a fire. The cooking rule is true for objects small enough for us to handle. If the cooking rule is always true, then Lord Kelvin’s argument for the heat death is correct.
  • the cooking rule is not true for objects of astronomical size, for which gravitation is the dominant form of energy. The sun is a familiar example. As the sun loses energy by radiation, it becomes hotter and not cooler. Since the sun is made of compressible gas squeezed by its own gravitation, loss of energy causes it to become smaller and denser, and the compression causes it to become hotter. For almost all astronomical objects, gravitation dominates, and they have the same unexpected behavior. Gravitation reverses the usual relation between energy and temperature. In the domain of astronomy, when heat flows from hotter to cooler objects, the hot objects get hotter and the cool objects get cooler. As a result, temperature differences in the astronomical universe tend to increase rather than decrease as time goes on. There is no final state of uniform temperature, and there is no heat death. Gravitation gives us a universe hospitable to life. Information and order can continue to grow for billions of years in the future, as they have evidently grown in the past.
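The reversal Dyson describes follows from the virial theorem for a bound, self-gravitating system in equilibrium; in outline:

```latex
2K + U = 0 \quad\Rightarrow\quad E = K + U = -K
\quad\Rightarrow\quad \Delta K = -\Delta E > 0 \ \text{ when } \ \Delta E < 0,
```

where $K$ is kinetic energy and $U$ gravitational potential energy. A system that radiates energy away ($\Delta E < 0$) therefore gains kinetic energy, and hence temperature: it has negative heat capacity, which is why the cooking rule fails for stars.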
  • The vision of the future as an infinite playground, with an unending sequence of mysteries to be understood by an unending sequence of players exploring an unending supply of information, is a glorious vision for scientists. Scientists find the vision attractive, since it gives them a purpose for their existence and an unending supply of jobs. The vision is less attractive to artists and writers and ordinary people. Ordinary people are more interested in friends and family than in science. Ordinary people may not welcome a future spent swimming in an unending flood of information.
  • A darker view of the information-dominated universe was described in a famous story, “The Library of Babel,” by Jorge Luis Borges in 1941.3 Borges imagined his library, with an infinite array of books and shelves and mirrors, as a metaphor for the universe.
  • Gleick’s book has an epilogue entitled “The Return of Meaning,” expressing the concerns of people who feel alienated from the prevailing scientific culture. The enormous success of information theory came from Shannon’s decision to separate information from meaning. His central dogma, “Meaning is irrelevant,” declared that information could be handled with greater freedom if it was treated as a mathematical abstraction independent of meaning. The consequence of this freedom is the flood of information in which we are drowning. The immense size of modern databases gives us a feeling of meaninglessness. Information in such quantities reminds us of Borges’s library extending infinitely in all directions. It is our task as humans to bring meaning back into this wasteland. As finite creatures who think and feel, we can create islands of meaning in the sea of information. Gleick ends his book with Borges’s image of the human condition: “We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we may recognize creatures of the information.”
Weiye Loh

Should This Be the Last Generation? - Opinionator Blog - NYTimes.com - 0 views

  • Have you ever thought about whether to have a child? If so, what factors entered into your decision? Was it whether having children would be good for you, your partner and others close to the possible child, such as children you may already have, or perhaps your parents?
  • Some may also think about the desirability of adding to the strain that the nearly seven billion people already here are putting on our planet’s environment. But very few ask whether coming into existence is a good thing for the child itself.
  • we think it is wrong to bring into the world a child whose prospects for a happy, healthy life are poor, but we don’t usually think the fact that a child is likely to have a happy, healthy life is a reason for bringing the child into existence. This has come to be known among philosophers as “the asymmetry” and it is not easy to justify.
  • ...5 more annotations...
  • How good does life have to be, to make it reasonable to bring a child into the world? Is the standard of life experienced by most people in developed nations today good enough to make this decision unproblematic?
  • Arthur Schopenhauer held that even the best life possible for humans is one in which we strive for ends that, once achieved, bring only fleeting satisfaction.
  • One of Benatar’s arguments trades on something like the asymmetry noted earlier. To bring into existence someone who will suffer is, Benatar argues, to harm that person, but to bring into existence someone who will have a good life is not to benefit him or her.
  • Hence continued reproduction will harm some children severely, and benefit none.
  • human lives are, in general, much less good than we think they are. We spend most of our lives with unfulfilled desires, and the occasional satisfactions that are all most of us can achieve are insufficient to outweigh these prolonged negative states.
  •  
    Should This Be the Last Generation? By Peter Singer, The New York Times, June 6, 2010, 5:15 PM
Weiye Loh

Arsenic bacteria - a post-mortem, a review, and some navel-gazing | Not Exactly Rocket ... - 0 views

  • It was the big news that wasn’t. Hyperbolic claims about the possible discovery of alien life, or a second branch of life on Earth, turned out to be nothing more than bacteria that can thrive on arsenic, using it in place of phosphorus in their DNA and other molecules. But after the initial layers of hype were peeled away, even this extraordinary claim came under heavy criticism.
  • This is a chronological roundup of the criticism against the science in the paper itself, ending with some personal reflections on my own handling of the story (skip to Friday, December 10th for that bit).
  • Thursday, December 2nd: Felisa Wolfe-Simon published a paper in Science, claiming to have found bacteria in California’s Mono Lake that can grow using arsenic instead of phosphorus. Given that phosphorus is meant to be one of six irreplaceable elements, this would have been a big deal, not least because the bacteria apparently used arsenic to build the backbones of their DNA molecules.
  • ...14 more annotations...
  • In my post, I mentioned some caveats. Wolfe-Simon isolated the arsenic-loving strain, known as GFAJ-1, by growing Mono Lake bacteria in ever-increasing concentrations of arsenic while diluting out the phosphorus. It is possible that the bacteria’s arsenic molecules were an adaptation to the harsh environments within the experiment, rather than Mono Lake itself. More importantly, there were still detectable levels of phosphorus left in the cells at the end of the experiment, although Wolfe-Simon claimed that the bacteria shouldn’t have been able to grow on such small amounts.
  • signs emerged that NASA weren’t going to engage with the criticisms. Dwayne Brown, their senior public affairs officer, highlighted the fact that the paper was published in one of the “most prestigious scientific journals” and deemed it inappropriate to debate the science using the same media and bloggers who they relied on for press coverage of the science. Wolfe-Simon herself tweeted that “discussion about scientific details MUST be within a scientific venue so that we can come back to the public with a unified understanding.”
  • Jonathan Eisen says that “they carried out science by press release and press conference” and “are now hypocritical if they say that the only response should be in the scientific literature.” David Dobbs calls the attitude “a return to pre-Enlightenment thinking”, and rightly noted that “Rosie Redfield is a peer, and her blog is peer review”.
  • Chris Rowan agreed, saying that what happens after publication is what he considers to be “real peer review”. Rowan said, “The pre-publication stuff is just a quality filter, a check that the paper is not obviously wrong – and an imperfect filter at that. The real test is what happens in the months and years after publication.”Grant Jacobs and others post similar thoughts, while Nature and the Columbia Journalism Review both cover the fracas.
  • Jack Gilbert at the University of Chicago said that impatient though he is, peer-reviewed journals are the proper forum for criticism. Others were not so kind. At the Guardian, Martin Robbins says that “at almost every stage of this story the actors involved were collapsing under the weight of their own slavish obedience to a fundamentally broken… well… ’system’” And Ivan Oransky noted that NASA failed to follow its own code of conduct when announcing the study.
  • Dr Isis said, “If question remains about the veracity of these authors’ findings, then the only thing that is going to answer that doubt is data. Data cannot be generated by blog discussion… Talking about digging a ditch never got it dug.”
  • it is astonishing how quickly these events unfolded and the sheer number of bloggers and media outlets that became involved in the criticism. This is indeed a brave new world, and one in which we are all the infamous Third Reviewer.
  • I tried to quell the hype around the study as best I could. I had the paper and I think that what I wrote was a fair representation of it. But, of course, that’s not necessarily enough. I’ve argued before that journalists should not be merely messengers – we should make the best possible efforts to cut through what’s being said in an attempt to uncover what’s actually true. Arguably, that didn’t happen although to clarify, I am not saying that the paper is rubbish or untrue. Despite the criticisms, I want to see the authors respond in a thorough way or to see another lab attempt to replicate the experiments before jumping to conclusions.
  • the sheer amount of negative comment indicates that I could have been more critical of the paper in my piece. Others have been supportive in suggesting that this was more egg on the face of the peer reviewers and indeed, several practicing scientists took the findings on face value, speculating about everything from the implications for chemotherapy to whether the bacteria have special viruses. The counter-argument, which I have no good retort to, is that peer review is no guarantee of quality, and that writers should be able to see through the fog of whatever topic they write about.
  • my response was that we should expect people to make reasonable efforts to uncover truth and be skeptical, while appreciating that people can and will make mistakes.
  • it comes down to this: did I do enough? I was certainly cautious. I said that “there is room for doubt” and I brought up the fact that the arsenic-loving bacteria still contain measurable levels of phosphorus. But I didn’t run the paper past other sources for comment, which I typically do for stories that contain extraordinary claims. There was certainly plenty of time to do so here and while there were various reasons that I didn’t, the bottom line is that I could have done more. That doesn’t always help, of course, but it was an important missed step. A lesson for next time.
  • I do believe that if you’re going to try to hold your profession to a higher standard, you have to be honest and open when you’ve made mistakes yourself. I also think that if you cover a story that turns out to be a bit dodgy, you have a certain responsibility to cover the follow-up.
  • A basic problem here is the embargo. Specifically, journalists get early access while peers – other specialists in the field – do not. It means that the journalist, like yourself, can rely only on the original authors, with no way of getting other views on the findings. And it means that peers can’t write about the paper when the journalists do (who inevitably produce positive-only coverage, given the lack of other viewpoints), and can voice their criticisms only after they’ve been able to digest the paper and formulate a response.
  • No, that’s not true. The embargo doens’t preclude journalists from sending papers out to other authors for review and comment. I do this a lot and I have been critical about new papers as a result, but that’s the step that I missed for this story.
Weiye Loh

To Die of Having Lived: an article by Richard Rapport | The American Scholar - 0 views

  • Although it may be a form of arrogance to attempt the management of one’s own death, is it better to surrender that management to the arrogance of someone else? We know we can’t avoid dying, but perhaps we can avoid dying badly.
  • Dodging a bad death has become more complicated over the past 30 or 40 years. Before the advent of technological creations that permit vital functions to be sustained so well artificially, medical ethics were less obstructed by abstract definitions of death.
  • generally agreed upon criteria for brain death have simplified some of these confusions, but they have not solved them. The broad middle ground between our usual health and consciousness as the expected norm on the one hand, and clear death of the brain on the other, lacks certainty.
    • Weiye Loh
       
      Isn't it always the case? That dichotomous relationships aren't clearly and equally demarcated, but somehow we attempt to split them up... through polemical discourses and rhetoric...
  • Doctors and other health-care workers can provide patients and families with probabilities for improvement or recovery, but statistics are hardly what is wanted. Even after profound injury or the diagnosis of an illness that statistically is nearly certain to be fatal, what people hear is the word nearly. How do we not allow the death of someone who might be saved? How do we avoid the equally intolerable salvation of a clinically dead person?
    • Weiye Loh
       
      In what situations do we hear the word "nearly" and in what situations do we hear the word "certain"? When we're dealing with a person's life, we hear "nearly", but when we're dealing with climate science we hear "certain"? 
  • Injecting political agendas into these end-of-life complexities only confuses the problem without providing a solution.
  • The questions are how, when, and on whose terms we depart. It is curious that people might be convinced to avoid confronting death while they are healthy, and that society tolerates ad hominem arguments that obstruct rational debate over an authentic problem of ethics in an uncertain world.
  • Any seriously ill older person who winds up in a modern CCU immediately yields his autonomy. Even if the doctors, nurses, and staff caring for him are intelligent, properly educated, humanistically motivated, and correct in the diagnosis, they are manipulated not only by the tyranny of technology but also by the rules established in their hospital. In addition, regulations of local and state licensing agencies and the federal government dictate the parameters of what the hospital workers do and how they do it, and every action taken is heavily influenced by legal experts committed to their client’s best interest—values frequently different from the patient’s. Once an acutely ill patient finds himself in this situation, everything possible will be done to save him; he is in no position to offer an opinion.
  • Eventually, after hours or days (depending on the illness and who is involved in the care), the wisdom of continuing treatment may come into question. But by then the patient will likely have been intubated and placed on a ventilator, a feeding tube may have been inserted, a catheter placed in the bladder, IVs started in peripheral veins or threaded through a major blood vessel near the heart, and monitors attached to record an EKG, arterial blood pressure, temperature, respirations, oxygen saturation, even pressure inside the skull. Sequential pressure devices will have been wrapped around the legs. All the digital marvels have alarms, so if one isn’t working properly, an annoying beep, like the sound of a backing truck, will fill the patient’s room. Vigilant nurses will add drugs by the dozens to the IV or push them into ports. Families will hover uncertainly. Meanwhile, tens and perhaps hundreds of thousands of dollars will have been transferred from one large corporation—an insurer of some kind—to another large corporation—a health care delivery system of some kind.
    • Weiye Loh
       
      Perhaps then, the value of life is not so much life itself per se, but rather the transactions it generates. 
  • While the expense of the drugs, manpower, and technology required to make a diagnosis and deliver therapy does sop up resources and thereby deny treatment that might be more fruitful for others, including the 46.3 million Americans who, according to the Census Bureau, have no health insurance, that isn’t the real dilemma of the critical care unit.
  • the problem isn’t getting into or out of a CCU; the predicament is in knowing who should be there in the first place.
  • Before we become ill, we tend to assume that everything can be treated and treated successfully. The prelate in Willa Cather’s Death Comes for the Archbishop was wiser. Approaching the end, he said to a younger priest, “I shall not die of a cold, my son. I shall die of having lived.”
  • The best way to avoid unwanted admission to a critical care unit at or near the end of life is to write an advance directive (a living will or durable power of attorney for health care) when healthy.
  • Regrettably, not many people do this and, worse, the document is often not included in the patient’s chart or goes unnoticed.
  • Since we are sure to die of having lived, we should prepare for death before the last minute. Entire corporations are dedicated to teaching people how to retire well. All of their written materials, Web sites, and seminars begin with the same advice: start planning early. Shouldn’t we at least occasionally think about how we want to leave our lives?
  • Flannery O’Connor, who died young of systemic lupus, wrote, “Sickness before death is a very appropriate thing and I think those who don’t have it miss one of God’s mercies.”
  • Because we understand the metaphor of conflict so well, we are easily sold on the idea that we must resolutely fight against our afflictions (although there was once an article in The Onion titled “Man Loses Cowardly Battle With Cancer”). And there is a place to contest an abnormal metabolism, a mutation, a trauma, or an infection. But there is also a place to surrender. When the organs have failed, when the mind has dissolved, when the body that has faithfully housed us for our lifetime has abandoned us, what’s wrong with giving up?
  •  
    Spring 2010: To Die of Having Lived. A neurological surgeon reflects on what patients and their families should and should not do when the end draws near.
kenneth yang

SD ballot measure would ease restrictions on stem cell research - 1 views

PIERRE, S.D. (AP) - A proposed ballot issue to ease restrictions on stem cell research will strike a chord with South Dakotans because nearly everyone has had a serious disease or knows someone who...

ethics rights stem cell

started by kenneth yang on 21 Oct 09 no follow-up yet
Weiye Loh

Hacktivists as Gadflies - NYTimes.com - 0 views

  •  
    "Consider the case of Andrew Auernheimer, better known as "Weev." When Weev discovered in 2010 that AT&T had left private information about its customers vulnerable on the Internet, he and a colleague wrote a script to access it. Technically, he did not "hack" anything; he merely executed a simple version of what Google Web crawlers do every second of every day - sequentially walk through public URLs and extract the content. When he got the information (the e-mail addresses of 114,000 iPad users, including Mayor Michael Bloomberg and Rahm Emanuel, then the White House chief of staff), Weev did not try to profit from it; he notified the blog Gawker of the security hole. For this service Weev might have asked for free dinners for life, but instead he was recently sentenced to 41 months in prison and ordered to pay a fine of more than $73,000 in damages to AT&T to cover the cost of notifying its customers of its own security failure. When the federal judge Susan Wigenton sentenced Weev on March 18, she described him with prose that could have been lifted from the prosecutor Meletus in Plato's "Apology." "You consider yourself a hero of sorts," she said, and noted that Weev's "special skills" in computer coding called for a more draconian sentence. I was reminded of a line from an essay written in 1986 by a hacker called the Mentor: "My crime is that of outsmarting you, something that you will never forgive me for." When offered the chance to speak, Weev, like Socrates, did not back down: "I don't come here today to ask for forgiveness. I'm here to tell this court, if it has any foresight at all, that it should be thinking about what it can do to make amends to me for the harm and the violence that has been inflicted upon my life." He then went on to heap scorn upon the law being used to put him away - the Computer Fraud and Abuse Act, the same law that prosecutors used to go after the 26-year-old Internet activist Aaron Swartz."
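The “script” the excerpt describes is just sequential URL enumeration: requesting numbered public URLs one by one and keeping whatever the server answers. A minimal sketch of that idea follows; the base URL and ID range here are invented for illustration and are not AT&T’s actual endpoints.

```python
import urllib.request

def build_urls(base_url, start_id, end_id):
    """Construct the list of sequential URLs to visit (hypothetical endpoint)."""
    return [f"{base_url}?id={i}" for i in range(start_id, end_id + 1)]

def fetch_all(urls):
    """Fetch each URL, returning {url: body}; inaccessible IDs are skipped."""
    results = {}
    for url in urls:
        try:
            with urllib.request.urlopen(url) as response:
                results[url] = response.read().decode("utf-8", "replace")
        except OSError:
            continue  # missing or failing IDs are simply passed over
    return results
```

The point the article makes is visible in how little the sketch does: it only requests URLs that the server serves publicly, much as a search-engine crawler would.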
Weiye Loh

Libertarianism Is Marxism of the Right - 4 views

http://www.commongroundcommonsense.org/forums/lofiversion/index.php/t21933.html "Because 95 percent of the libertarianism one encounters at cocktail parties, on editorial pages, and on Capitol Hil...

Libertarianism Marxism

started by Weiye Loh on 28 Aug 09 no follow-up yet
Weiye Loh

MacIntyre on money « Prospect Magazine - 0 views

  • MacIntyre has often given the impression of a robe-ripping Savonarola. He has lambasted the heirs to the principal western ethical schools: John Locke’s social contract, Immanuel Kant’s categorical imperative, Jeremy Bentham’s utilitarian “the greatest happiness for the greatest number.” Yet his is not a lone voice in the wilderness. He can claim connections with a trio of 20th-century intellectual heavyweights: the late Elizabeth Anscombe, her surviving husband, Peter Geach, and the Canadian philosopher Charles Taylor, winner in 2007 of the Templeton prize. What all four have in common is their Catholic faith, enthusiasm for Aristotle’s telos (life goals), and promotion of Thomism, the philosophy of St Thomas Aquinas who married Christianity and Aristotle. Leo XIII (pope from 1878 to 1903), who revived Thomism while condemning communism and unfettered capitalism, is also an influence.
  • MacIntyre’s key moral and political idea is that to be human is to be an Aristotelian goal-driven, social animal. Being good, according to Aristotle, consists in a creature (whether plant, animal, or human) acting according to its nature—its telos, or purpose. The telos for human beings is to generate a communal life with others; and the good society is composed of many independent, self-reliant groups.
  • MacIntyre differs from all these influences and alliances, from Leo XIII onwards, in his residual respect for Marx’s critique of capitalism.
  • MacIntyre begins his Cambridge talk by asserting that the 2008 economic crisis was not due to a failure of business ethics.
  • he has argued that moral behaviour begins with the good practice of a profession, trade, or art: playing the violin, cutting hair, brick-laying, teaching philosophy.
  • In other words, the virtues necessary for human flourishing are not a result of the top-down application of abstract ethical principles, but the development of good character in everyday life.
  • After Virtue, which is in essence an attack on the failings of the Enlightenment, has in its sights a catalogue of modern assumptions of beneficence: liberalism, humanism, individualism, capitalism. MacIntyre yearns for a single, shared view of the good life as opposed to modern pluralism’s assumption that there can be many competing views of how to live well.
  • In philosophy he attacks consequentialism, the view that what matters about an action is its consequences, which is usually coupled with utilitarianism’s “greatest happiness” principle. He also rejects Kantianism—the identification of universal ethical maxims based on reason and applied to circumstances top down. MacIntyre’s critique routinely cites the contradictory moral principles adopted by the allies in the second world war. Britain invoked a Kantian reason for declaring war on Germany: that Hitler could not be allowed to invade his neighbours. But the bombing of Dresden (which for a Kantian involved the treatment of people as a means to an end, something that should never be countenanced) was justified under consequentialist or utilitarian arguments: to bring the war to a swift end.
  • MacIntyre seeks to oppose utilitarianism on the grounds that people are called on by their very nature to be good, not merely to perform acts that can be interpreted as good. The most damaging consequence of the Enlightenment, for MacIntyre, is the decline of the idea of a tradition within which an individual’s desires are disciplined by virtue. And that means being guided by internal rather than external “goods.” So the point of being a good footballer is the internal good of playing beautifully and scoring lots of goals, not the external good of earning a lot of money. The trend away from an Aristotelian perspective has been inexorable: from the empiricism of David Hume, to Darwin’s account of nature driven forward without a purpose, to the sterile analytical philosophy of AJ Ayer and the “demolition of metaphysics” in his 1936 book Language, Truth and Logic.
  •  
    The influential moral philosopher Alasdair MacIntyre has long stood outside the mainstream. Has the financial crisis finally vindicated his critique of global capitalism?